BRN Discussion Ongoing

Diogenese

Top 20
Hi @JB49,

Here's the link (#89,847) to Diogenese's last post about this patent.

Valeo have publicly stated that SCALA 3 is capable of 3D object detection, prediction, and fusion of Lidar, radar, camera, and ultrasonic data. This kind of multi-modal sensor fusion would definitely benefit from state-space models like TENNs that can model long-range temporal dependencies efficiently.

Diogenese has also previously said that "SCALA 3 is just the lidar electro-optical transceiver. They use software to process the signals. TENNS could be in the software."



Yes. Unfortunately, the SDV (software defined vehicle) is the Pretender for the moment while we wait in the interregnum for the coronation of NN SoC. Although Akida has pulled the sword from the stone, they're still waiting for the release of the video.
 
Reactions: 10 users
Yes. Unfortunately, the SDV (software defined vehicle) is the Pretender for the moment while we wait in the interregnum for the coronation of NN SoC. Although Akida has pulled the sword from the stone, they're still waiting for the release of the video.

Maybe one step closer though.

Just up on GitHub.

Suggest readers absorb the whole post to understand the intent of this repository.

Especially terms such as federated learning, scalable, V2X, MQTT, prototype level and distributed.

From what I can find, if correct, the author is as below. That doesn't necessarily mean VW is involved, but I suspect they would be aware of this work in some division or department.



Fernando Sevilla Martínez (SevillaFe)

SevillaFe/SNN_Akida_RPI5 (Public)

SNN_Akida_RPI5: Eco-Efficient Deployment of Spiking Neural Networks on Low-Cost Edge Hardware

This work presents a practical and energy-aware framework for deploying Spiking Neural Networks on low-cost hardware for edge computing. We detail a reproducible pipeline that integrates neuromorphic processing with secure remote access and distributed intelligence. Using Raspberry Pi and the BrainChip Akida PCIe accelerator, we demonstrate a lightweight deployment process including model training, quantization, and conversion. Our experiments validate the eco-efficiency and networking potential of neuromorphic AI systems, providing key insights for sustainable distributed intelligence. This letter offers a blueprint for scalable and secure neuromorphic deployments across edge networks.

1. Hardware and Software Setup​

The proposed deployment platform integrates two key hardware components: the RPI5 and the Akida board. Together, they enable a power-efficient, cost-effective neuromorphic system suitable for real-world edge AI applications.

2. Enabling Secure Remote Access and Distributed Neuromorphic Edge Networks​

The deployment of low-power neuromorphic hardware in networked environments requires reliable, secure, and lightweight communication frameworks. Our system enables full remote operability of the RPI5 and Akida board via SSH, complemented by protocol layers (Message Queuing Telemetry Transport (MQTT), WebSockets, Vehicle-to-Everything (V2X)) that support real-time, event-driven intelligence across edge networks.

3. Training and Running Spiking Neural Networks​

The training pipeline begins with building an ANN using TensorFlow 2.x, which is later mapped to a spike-compatible format for neuromorphic inference. Because the Akida board runs models using low-bitwidth integer arithmetic (4–8 bits), it is critical to align the training phase with these constraints to avoid significant post-training performance degradation.
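To make that concrete, here is a minimal sketch of the train-quantize-convert flow. It is an illustration only: the toy model and its layer sizes are invented, and it uses the legacy cnn2snn quantize API (newer Akida releases move quantization into the quantizeml package), so treat it as a sketch under those assumptions rather than the repository's actual pipeline.

import tensorflow as tf
from cnn2snn import quantize, convert

# 1. Build and train an ordinary Keras ANN (toy architecture for illustration)
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=5)  # train as usual

# 2. Quantize to Akida-friendly low-bitwidth integers
#    (8-bit input weights, 4-bit weights and activations)
model_quantized_keras = quantize(model,
                                 input_weight_quantization=8,
                                 weight_quantization=4,
                                 activ_quantization=4)
# Optionally fine-tune model_quantized_keras here to recover lost accuracy.

# 3. Convert the quantized Keras model to a spike-compatible Akida model
model_akida = convert(model_quantized_keras)
model_akida.summary()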

4. Use case validation: Networked neuromorphic AI for distributed intelligence​

4.1 Use Case: If multiple RPI5 nodes or remote clients need to receive the classification results in real-time, MQTT can be used to broadcast inference outputs​

MQTT-Based Akida Inference Broadcasting​

This project demonstrates how to perform real-time classification broadcasting using BrainChip Akida on Raspberry Pi 5 with MQTT.

Project Structure​

mqtt-akida-inference/
├── config/ # MQTT broker and topic configuration
├── scripts/ # MQTT publisher/subscriber scripts
├── sample_data/ # Sample input data for inference
└── requirements.txt # Required Python packages


Usage​

  1. Install Mosquitto on RPI5

sudo apt update
sudo apt install mosquitto mosquitto-clients -y
sudo systemctl enable mosquitto
sudo systemctl start mosquitto

  2. Run Publisher (on RPI5)

python3 scripts/mqtt_publisher.py

  3. Run Subscriber (on remote device)

python3 scripts/mqtt_subscriber.py

  4. Optional: Monitor from CLI

mosquitto_sub -h <BROKER_IP> -t "akida/inference" -v

Akida Compatibility

outputs = model_akida.predict(sample_image)


Real-Time Edge AI

This use case supports event-based edge AI and real-time feedback in smart environments, such as surveillance, mobility, and robotics.

Configuration

Set your broker IP and topic in config/config.py.
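For a rough idea of what scripts/mqtt_publisher.py could look like, here is a sketch using the widely used paho-mqtt client. The broker address, loop cadence, and dummy input are assumptions; only the akida/inference topic and the model_akida.predict call above come from the repository.

import json
import time

import numpy as np
import paho.mqtt.client as mqtt

BROKER_IP = "192.168.1.10"   # placeholder; the real value belongs in config/config.py
TOPIC = "akida/inference"

client = mqtt.Client()
client.connect(BROKER_IP, 1883)

while True:
    # Stand-in input; a real deployment would grab a camera frame or sensor event here
    sample_image = np.random.randint(0, 255, (1, 32, 32, 3), dtype=np.uint8)
    # outputs = model_akida.predict(sample_image)  # actual Akida inference
    outputs = [[0.1, 0.9]]                         # dummy scores for the sketch
    payload = json.dumps({"timestamp": time.time(), "scores": outputs})
    client.publish(TOPIC, payload)                 # broadcast to all subscribers
    time.sleep(1.0)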

4.2 Use Case: If the Akida accelerator is deployed in an autonomous driving system, V2X communication allows other vehicles or infrastructure to receive AI alerts based on neuromorphic-based vision​

This use case simulates a lightweight V2X (Vehicle-to-Everything) communication system using Python. It demonstrates how neuromorphic AI event results, such as pedestrian detection, can be broadcast over a network and received by nearby infrastructure or vehicles.

Folder Structure​

V2X/
├── config.py # V2X settings
├── v2x_transmitter.py # Simulated Akida alert broadcaster
├── v2x_receiver.py # Listens for incoming V2X alerts
└── README.md


Use Case​

If the Akida accelerator is deployed in an autonomous driving system, this setup allows:

  • Broadcasting high-confidence AI alerts (e.g., "pedestrian detected")
  • Receiving alerts on nearby systems for real-time awareness

Usage​

1. Start the V2X Receiver (on vehicle or infrastructure node)​

python3 receiver/v2x_receiver.py

2. Run the Alert Transmitter (on an RPI5 + Akida node)​

python3 transmitter/v2x_transmitter.py

Notes​

  • Ensure that devices are on the same LAN or wireless network
  • UDP broadcast mode is used for simplicity (see the sketch after these notes)
  • This is a prototype for real-time event-based message sharing between intelligent nodes
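Here is a minimal sketch of that UDP-broadcast pattern, showing both sides of the exchange. The port number and message fields are assumptions; the repository's config.py would hold the real settings.

import json
import socket

V2X_PORT = 5005  # assumed port; config.py would define the real one

def broadcast_alert(message: str, confidence: float) -> None:
    # Transmitter side: broadcast a high-confidence Akida alert on the LAN
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({"alert": message, "confidence": confidence}).encode()
    sock.sendto(payload, ("<broadcast>", V2X_PORT))
    sock.close()

def listen_for_alerts() -> None:
    # Receiver side: runs on a nearby vehicle or infrastructure node
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", V2X_PORT))
    while True:
        data, addr = sock.recvfrom(4096)
        print(f"V2X alert from {addr}: {json.loads(data)}")

# Example: broadcast_alert("pedestrian detected", 0.97)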

4.3 Use Case: If multiple RPI5-Akida nodes are deployed for federated learning, updates to neuromorphic models must be synchronized between devices​

Federated Learning Setup with Akida on Raspberry Pi 5​

This repository demonstrates a lightweight Federated Learning (FL) setup using neuromorphic AI models deployed on BrainChip Akida PCIe accelerators paired with Raspberry Pi 5 devices. It provides scripts for a centralized Flask server to receive model weight updates and a client script to upload Akida model weights via HTTP.

Overview​

Neuromorphic models trained on individual RPI5-Akida nodes can contribute updates to a shared model hosted on a central server. This setup simulates a federated learning architecture for edge AI applications that require privacy, low latency, and energy efficiency.

Repository Structure​

federated_learning/
├── federated_learning_server.py # Flask server to receive model weights
├── federated_learning_client.py # Client script to upload Akida model weights
├── model_utils.py # (Optional) Placeholder for weight handling utilities
├── model_training.py # (Optional) Placeholder for training-related code
└── README.md


Requirements​

  • Python 3.7+
  • Flask
  • NumPy
  • Requests
  • Akida Python SDK (required on client device)
Install the dependencies using:

pip install flask numpy requests

Getting Started​

1. Launch the Federated Learning Server​

On a device intended to act as the central server:

python3 federated_learning_server.py

The server will listen for HTTP POST requests on port 5000 and respond to updates sent to the /upload endpoint.

2. Configure and Run the Client​

On each RPI5-Akida node:

  • Ensure the Akida model has been trained.
  • Replace the SERVER_IP variable inside federated_learning_client.py with the IP address of the server.
  • Run the script:
python3 federated_learning_client.py

This will extract the weights from the Akida model and transmit them to the server in JSON format.
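To make the mechanics concrete, here is a hedged sketch of both sides of that exchange: a Flask server exposing the /upload endpoint on port 5000, and a client helper that POSTs weights as JSON. The endpoint, port, and success message come from the README; the payload fields, node_id, and validation logic are illustrative assumptions, and the actual Akida weight extraction is left out because the repository does not show it.

import numpy as np
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
received_updates = {}  # node_id -> latest uploaded weights

@app.route("/upload", methods=["POST"])
def upload():
    payload = request.get_json()
    # Basic validation before accepting an update (see Security Considerations below)
    if not isinstance(payload, dict) or "weights" not in payload:
        return jsonify(error="malformed weights"), 400
    received_updates[payload.get("node_id", "unknown")] = payload["weights"]
    return "Model weights uploaded successfully."

# --- client side (runs on each RPI5-Akida node) ---
def upload_weights(server_ip: str, node_id: str, weights: dict) -> None:
    payload = {"node_id": node_id,
               "weights": {k: np.asarray(v).tolist() for k, v in weights.items()}}
    r = requests.post(f"http://{server_ip}:5000/upload", json=payload, timeout=10)
    print(r.status_code, r.text)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)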

Example Response​

After a successful POST:

Model weights uploaded successfully.


If an error occurs (e.g., connection refused or malformed weights), you will see an appropriate status message.

Security Considerations​

This is a prototype-level setup for research. For real-world deployment:

  • Use HTTPS instead of HTTP.
  • Authenticate clients using tokens or API keys.
  • Validate the format and shape of model weights before acceptance.

Acknowledgements​

This implementation is part of a broader effort to demonstrate low-cost, energy-efficient neuromorphic AI for distributed and networked edge environments, particularly leveraging the BrainChip Akida PCIe board and Raspberry Pi 5 hardware.
 
Reactions: 39 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Great find, as per usual @Fullmoonfever!

I just discovered this research paper titled "Spiking neural networks for autonomous driving", which Fernando Sevilla Martínez (Data Science Specialist, Volkswagen) co-authored. The paper was published in December 2024.


Screenshot 2025-07-12 at 4.49.51 pm.png





Fernando Sevilla Martínez's GitHub activity, which FMF just uncovered, demonstrates Akida-powered neuromorphic processing for V2X and federated learning prototypes.

CARIAD, Volkswagen's software company, have been working on developing and implementing V2X.



Screenshot 2025-07-12 at 4.31.50 pm.png


Screenshot 2025-07-12 at 4.35.16 pm.png
 

Reactions: 28 users

Cardpro

Regular
Remember, in the last 4C they noted we just missed out on $540K of engineering revenue. That $540K will appear in the upcoming 4C.

What I'm really watching for in the upcoming 4C is whether that $540K is accompanied by additional engineering revenue, ideally another $500K+. That would be a good sign things are ramping up.
thanks for the reminder!!! I totally forgot about those, feels much better!!!
 

Frangipani

Top 20
[Quoted post by @Fullmoonfever above: "Maybe one step closer though. Just up on GitHub. …" with the full SevillaFe/SNN_Akida_RPI5 repository details.]

Great find, @Fullmoonfever!

This is his LinkedIn Profile:


18E95468-A3A7-4827-B17D-CE107CB856E8.jpeg


84861716-EAAF-4A14-9542-ECF3740104BD.jpeg



I came across the name Fernando Sevilla Martínez before, in connection with Raúl Parada Medina, whom I first noticed liking BrainChip LinkedIn posts more than a year ago (and there have been many more since… 😊).

Given that Raúl Parada Medina describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, I had already suggested a connection to the Volkswagen Group via SEAT or CUPRA at the time.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424590

8CAEB3FF-7439-4821-9DA4-40FC96701D51.jpeg

FB05D4C0-4805-4697-8200-60C3B3E8E9A0.jpeg






310C7C5F-2B13-47DB-9148-F5A1239EBFC7.jpeg


This is extremely likely the same Raúl Parada Medina whom you recently spotted asking for help with Akida in the DeGirum Community. Very disappointingly, no one from our company appears to have been willing to help solve this problem for more than 3 months!

Why promote DeGirum to developers wanting to work with Akida and then not provide assistance when needed? Not a good look, if we are to believe shashi from the DeGirum team, who wrote on February 12 that he would forward Parada's request to the BrainChip team but apparently never got a reply.

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-461608

D94C769F-9B48-4AC6-B144-F559C60A8B26.jpeg


The issue continued until it was eventually solved on 27 May by another DeGirum team member, stephan-degirum (presumably Stephan Sokolov, who recently demonstrated running the DeGirum PySDK directly on BrainChip hardware at the 2025 Embedded Vision Summit - see the video here: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-469037)





raul.parada.medina
May 27

Hi @alex and @shashi, thanks for your reply. It looks like there is no update from BrainChip on this. Please, could you tell me how to upload this model to the platform? Age estimation (regression) example — Akida Examples documentation. Thanks!

1 Reply



shashi (DeGirum Team)
May 27

@stephan-degirum
Can you please help @raul.parada.medina ?




stephan-degirum
May 27

Hello @raul.parada.medina, conversion of a model from BrainChip's model zoo into our format is straightforward.
Once you have an Akida model object, as in Step 4 of the example:

model_akida = convert(model_quantized_keras)

You'll need to map the model to your device and then convert it to a compatible binary:

from akida import devices

# Map model onto your Akida device
dev = devices()[0]
try:
    model_akida.map(dev, hw_only=True)
except RuntimeError:
    model_akida.map(dev, hw_only=False)

# Extract the C++-compatible program blob
blob = model_akida.sequences[0].program
with open("model_cxx.fbz", "wb") as f:
    f.write(blob)

print("C++-compatible model written to model_cxx.fbz")

Note: You want to be sure that the model is supported on your Akida device. There are many models on the BrainChip model zoo that are not compatible with their “version 1 IP” devices.
If your device is a v1 device, you’ll need to add a set_akida_version guard:

from cnn2snn import convert, set_akida_version, AkidaVersion

# Convert the model
with set_akida_version(AkidaVersion.v1):
    model_akida = convert(model_quantized_keras)
model_akida.summary()

from akida import devices
# Map model onto your Akida device
# ... (see above)

For more information on v1/v2 model compatibility, please see their docs: Akida models zoo — Akida Examples documentation

Once you have a model binary blob created:

  • Create a model JSON file adjacent to the blob by following Model JSON Structure | DeGirum Docs or by looking at existing BrainChip models on our AI Hub for reference: https://hub.degirum.com/degirum/brainchip
  • ModelPath is your binary model file
  • RuntimeAgent is AKIDA
  • DeviceType is the middle output of akida devices, in all caps. For example, if akida devices shows PCIe/NSoC_v2/0, you put NSOC_V2
  • Your JSON + binary model blob are now compatible with PySDK. Try running the inference on your device locally by specifying the full path to the JSON as the zoo_url parameter; see PySDK Package | DeGirum Docs: "For local AI hardware inferences you specify the zoo_url parameter as either a path to a local model zoo directory, or a path to the model's .json configuration file."
  • You can then zip them up and upload them to your model zoo in our AI Hub.
Let me know if this helped.
P.S. We currently have v1 hardware in our cloud farm, and this model is the age estimation model for NSoC_v2:
https://hub.degirum.com/degirum/brainchip/vgg_regress_age_utkface--32x32_quant_akida_NSoC_1
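Putting stephan-degirum's bullet points together, a hypothetical model JSON for the blob extracted above might be written as follows. Only ModelPath, RuntimeAgent, and DeviceType are named in his post; the file name is an assumption, and any further fields the schema requires should be checked against DeGirum's Model JSON docs.

import json

model_json = {
    "ModelPath": "model_cxx.fbz",   # the binary blob extracted above
    "RuntimeAgent": "AKIDA",
    "DeviceType": "NSOC_V2",        # middle field of `akida devices`, upper-cased
}

# Write the JSON adjacent to the blob, as the instructions describe
with open("model_cxx.json", "w") as f:
    json.dump(model_json, f, indent=2)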


Anyway, as you had already noticed in your first post on this DeGirum enquiry, Raúl Parada Medina (assuming it is the same person, which I have no doubt about) and Fernando Sevilla Martínez are both co-authors of a paper on autonomous driving:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-450543

90084688-FF1B-44FD-BFFD-334306A07DFD.jpeg


In fact, they have co-published two papers on autonomous driving, together with another researcher, Jordi Casas-Roma. He is director of the Master's in Data Science at the Universitat Oberta de Catalunya, a Barcelona-based private online university, and it was in that same department that Fernando Sevilla Martínez earned his Master's degree in 2022, before moving to Wolfsburg the following year; he now works as a data scientist at the headquarters of the Volkswagen Group.


24B31588-7EAF-4861-8D9F-336E3694CAC6.jpeg



D479F2E0-57C4-414E-8534-3380AB8E6DA9.jpeg
86F4D0A7-52B4-4980-8492-737B56B4BCBD.jpeg
 
Reactions: 21 users

Frangipani

Top 20
Speaking of Raúl Parada Medina:

On 13 February I took this screenshot, but never followed up on it:

FE015838-9CCF-4352-ABA3-397CD6EB2338.jpeg


53F2FEF5-8860-4FDF-9673-F983A5821842.jpeg


It appears the planned WISSA workshop, along with some others, was eventually cancelled, as only three of the scheduled workshops actually took place in late June:

https://www.ie2025.fraunhofer.de/workshops/

E30F251E-608F-46F0-A39C-F6542324515A.jpeg


Nevertheless, it is evidence that Raúl Parada Medina's work is also relevant in the field of smart agriculture.


Prior to that, he was part of the 5GMED project that ran from September 2020 to August 2024 (sorry, don’t have the time right now to look up the individual links - all the following screenshots were taken in mid-February).


EB41B747-82FE-4B66-BA98-087CC995655F.jpeg
EDC584D1-84EB-4BE3-85DA-D14A24D8A4F2.jpeg
B095D5BC-A5A0-4A92-9ECB-8B0EA9D9A78D.jpeg
F2A464DF-FF66-4A8F-A993-144414B57EA5.jpeg
EED95BF9-EDD4-495C-B7B9-44F5D4EB9603.jpeg
2A535CF0-A8C1-4DD2-BE8E-A3CA45CC807E.jpeg



Keep in mind that Raúl Parada Medina has a telecommunications background and works as a Senior Researcher for CTTC in Castelldefels near Barcelona, the Centre Tecnològic de Telecomunicacions de Catalunya.

So when he describes himself as an “IoT research specialist within the connected car project at a Spanish automobile manufacturer”, the emphasis is on “connected” rather than on “automobile”. Therefore, any upcoming research projects may not involve cars at all.
 

Reactions: 11 users
Anyone seen Tony's new post on LinkedIn?
 

uiux

Regular
[Quoted post by @Fullmoonfever above: the SevillaFe/SNN_Akida_RPI5 GitHub repository details.]

Sort of looks like this one


Distributed neuromorphic infrastructure​


Current assignee: Microsoft Technology Licensing LLC

In non-limiting examples of the present disclosure, systems, methods and devices for synchronizing neuromorphic models are presented. A sensor input may be received by a first neuromorphic model implemented on a neuromorphic architecture of a first computing device. The neuromorphic model may comprise a plurality of neurons, with each of the plurality of neurons associated with a threshold value, a weight value, and a refractory period value. The first sensor input may be processed by the first model. A first output value may be determined based on the processing. The model may be modified via modification of one or more threshold values, weight values, and/or refractory period values. A modified version of the first neuromorphic model may be saved to the first computing device based on the modification. An update comprising the modification may be sent to a second computing device hosting the model.
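As a toy illustration of the mechanism the abstract describes (not the patent's actual implementation), the per-neuron state and the update message passed between two devices might look like this:

from dataclasses import dataclass, field

@dataclass
class Neuron:
    threshold: float
    weight: float
    refractory_period: float

@dataclass
class NeuromorphicModel:
    neurons: dict = field(default_factory=dict)  # neuron_id -> Neuron

    def modify(self, neuron_id: int, **changes) -> dict:
        # Apply a local modification and return the update to send to a peer
        for attr, value in changes.items():
            setattr(self.neurons[neuron_id], attr, value)
        return {"neuron_id": neuron_id, "changes": changes}

    def apply_update(self, update: dict) -> None:
        # Second device: replay a peer's modification on the local copy
        for attr, value in update["changes"].items():
            setattr(self.neurons[update["neuron_id"]], attr, value)

# a = NeuromorphicModel({0: Neuron(1.0, 0.5, 2.0)})
# b = NeuromorphicModel({0: Neuron(1.0, 0.5, 2.0)})
# b.apply_update(a.modify(0, threshold=1.2))  # both copies now agree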

 
Reactions: 4 users

CHIPS

Regular