1. Introduction: The Challenge of Edge-AI News Aggregation in Constrained Networks

Traditional news aggregation relies on cloud-based NLP pipelines and high-bandwidth internet connectivity. For scenarios such as emergency response, off-grid deployments, or privacy-sensitive environments, however, a decentralized, low-power solution is required. BLE Mesh, built on Bluetooth Low Energy, offers a scalable, multi-hop network that can span thousands of nodes. The challenge is to run real-time AI inference (e.g., topic classification, sentiment analysis, or summarization) on these resource-constrained nodes while keeping the latency for a news article to be classified and relayed under 500 ms.

This article presents a technical architecture in which a BLE Mesh node acts as a "provisioned news aggregator." It listens for encrypted news packets, performs on-device inference using a quantized TinyML model, and re-broadcasts the classification result via BLE Mesh. We focus on the Python-based inference engine running on an ESP32-S3 (or nRF5340) with a custom BLE Mesh stack. The core innovation is a time-sliced inference scheduler that interleaves BLE Mesh packet processing with neural-network forward passes, avoiding frame drops.

2. Core Technical Principle: Time-Division Inference on a BLE Mesh Node

The system is built around a state machine with four states: IDLE, RX_PKT, INFERENCE, and TX_PKT. The BLE Mesh stack runs on a proprietary RTC timer with a slot period of 1 ms. In each slot, the node checks for incoming packets. If a packet is detected (based on CRC and NetKey validation), the node stores the payload in a circular buffer and transitions to the RX_PKT state.

The inference engine operates only in the INFERENCE state, which is triggered either by a threshold of accumulated packets (e.g., 10 news snippets) or by a forced timer (every 500 ms). This prevents the neural network from blocking the BLE Mesh radio for more than 10 ms at a time.

The key parameter is the inference latency budget (ILB). For a typical TinyML model (e.g., a 4-layer CNN with 32 filters), a forward pass on an ESP32-S3 at 240 MHz takes ~35 ms. To avoid desynchronizing from the BLE Mesh slot, we split the inference into 5 micro-steps of 7 ms each, with a context save/restore mechanism. This is done using a cooperative multitasking approach: the Python runtime (MicroPython) yields control after each layer computation (see the generator and scheduler sketches at the end of this section).

Mathematical Model: Let \( T_{slot} = 1 \text{ ms} \) and \( N_{packets} = 10 \) (the per-inference batch size). If \( P_{rx} \) is the probability of receiving a packet in a given slot (assume 0.3), the expected time to accumulate a batch is \( T_{acc} = \frac{N_{packets}}{P_{rx}} \times T_{slot} \approx 33 \text{ ms} \).
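The cooperative micro-stepping can be sketched as a MicroPython generator. This is a minimal illustration under stated assumptions, not the article's exact implementation: `layers` is assumed to be a list of callables, one per quantized layer, and the generator frame itself provides the context save/restore for free.

```python
def forward_microsteps(layers, x):
    # Run one forward pass one layer at a time; each `yield` hands
    # control back to the slot scheduler. Local state (the partial
    # activation `x`) is saved implicitly in the generator frame.
    # `layers` is a hypothetical list of callables, e.g. the 4 CNN
    # layers plus a classifier head (5 micro-steps of ~7 ms each
    # on an ESP32-S3 @ 240 MHz).
    for layer in layers[:-1]:
        x = layer(x)       # one micro-step
        yield None         # not done yet; radio slots run in between
    yield layers[-1](x)    # final micro-step yields the logits
```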
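The slot scheduler below then drives both packet handling and a pending inference, advancing at most one micro-step per free slot. This is a sketch, assuming a hypothetical `radio` driver exposing `poll()`, `validate()`, and `send()`; those names, the buffer size, and the `Aggregator` class are illustrative, and a real port would bind them to the custom BLE Mesh stack.

```python
import utime

IDLE, RX_PKT, INFERENCE, TX_PKT = range(4)

BATCH_SIZE = 10     # packets accumulated before inference triggers
FORCE_MS   = 500    # forced-inference timer period

class Aggregator:
    def __init__(self, radio, layers, buf_size=32):
        self.radio, self.layers = radio, layers
        self.buf = [None] * buf_size           # circular payload buffer
        self.head = self.count = 0
        self.state = IDLE
        self.pending = None                    # in-flight inference generator
        self.last = utime.ticks_ms()

    def on_slot(self):                         # called by the RTC timer every 1 ms
        pkt = self.radio.poll()
        if pkt and self.radio.validate(pkt):   # CRC + NetKey check
            self.state = RX_PKT
            self.buf[self.head] = pkt.payload
            self.head = (self.head + 1) % len(self.buf)
            self.count = min(self.count + 1, len(self.buf))
        elif self.pending is not None:
            self.state = INFERENCE
            logits = next(self.pending)        # one ~7 ms micro-step
            if logits is not None:             # final yield carries the result
                self.state = TX_PKT
                self.radio.send(logits)        # re-broadcast classification
                self.pending = None
        elif self.count >= BATCH_SIZE or (
                self.count and
                utime.ticks_diff(utime.ticks_ms(), self.last) >= FORCE_MS):
            self.pending = forward_microsteps(self.layers, self.drain())
            self.last = utime.ticks_ms()
        else:
            self.state = IDLE

    def drain(self):
        # Return the buffered payloads (oldest first) and reset the count.
        n, self.count = self.count, 0
        start = (self.head - n) % len(self.buf)
        return [self.buf[(start + i) % len(self.buf)] for i in range(n)]
```

Because at most one micro-step runs per slot callback, the radio is never blocked for more than ~7 ms, comfortably under the 10 ms ceiling stated above, and packet reception takes priority over inference within any given slot.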
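As an illustrative sanity check against the 500 ms end-to-end target, using the stated \( P_{rx} = 0.3 \) and ignoring the 1 ms radio slots interleaved between micro-steps:

\[
T_{acc} = \frac{N_{packets}}{P_{rx}} \, T_{slot} = \frac{10}{0.3} \times 1\ \text{ms} \approx 33\ \text{ms},
\qquad
T_{total} \approx T_{acc} + 5 \times 7\ \text{ms} \approx 68\ \text{ms} \ll 500\ \text{ms}.
\]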
