I asked ChatGPT whether BrainChip's AKIDA could be used for the applications outlined in Meta's patent, as shown below.
View attachment 87255
ChatGPT said:
Yes, a neuromorphic processor like BrainChip’s Akida could absolutely be used for many of the applications outlined in Meta’s patent — and in some ways, it may offer significant advantages over conventional processors.
Summary of Meta’s Patent Use Cases (US20230172468)
Meta's patent outlines biosignal monitoring in AR/smart glasses, including:
- Heart rate and heart rate variability
- Blood pressure estimation
- Electrocardiogram (ECG) waveform analysis
- Stress or fatigue detection
- User authentication via biosignals
These involve continuous, time-series data from PPG and/or ECG sensors — which are ideal for real-time analysis with minimal latency and energy overhead.
Why Akida Is a Good Fit
Feature | Meta’s Needs | Akida's Strength |
---|---|---|
Event-driven signal processing | Intermittent or subtle biosignal changes (e.g., arrhythmia, stress response) | Akida excels at analyzing sparse, time-dependent data like ECG/PPG without constantly polling a CPU |
Ultra-low power | Critical for all-day wear in AR/smart glasses | Akida is designed for always-on processing with minimal power draw |
Edge inference | Requires local, private on-device processing | Akida enables AI inference directly on-device without cloud reliance |
Time-series pattern detection | Needed for stress, fatigue, HRV anomalies | Akida supports temporal pattern recognition and can learn from evolving signal patterns using spiking neural networks |
Personalization | Biosignal baselines vary per user | Akida supports incremental on-device learning for personalized AI models |
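To make the first table row concrete, here is a minimal, illustrative sketch of event-driven (delta) encoding of a sampled biosignal – the style of sparse input a neuromorphic processor consumes instead of polling every sample. The threshold, the mock PPG waveform, and the function name are all invented for the example; this is not BrainChip's actual API.

```python
import numpy as np

def to_events(signal, threshold=0.05):
    """Emit an event only when the signal has moved by more than `threshold`
    since the last event, instead of touching every sample."""
    events = []                      # (sample_index, +1/-1 polarity)
    last = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - last) >= threshold:
            events.append((i, 1 if x > last else -1))
            last = x                 # reference updates only on an event
    return events

# Mock 1-second, 100 Hz PPG-like waveform: a slow pulse plus sensor noise.
t = np.linspace(0, 1, 100)
ppg = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(100)

events = to_events(ppg)
print(f"{len(events)} events from {len(ppg)} samples "
      f"({100 * len(events) / len(ppg):.0f}% of the data touched)")
```

The design point this illustrates: downstream compute scales with the number of events, not with the sampling rate.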
Example Scenarios Where Akida Could Enhance Meta’s Applications
- Stress Monitoring
→ Akida could detect deviations in heart rate variability or galvanic skin response over time and classify them as stress patterns (a sketch of this idea follows the list).
- Fatigue Detection for AR Use
→ By continuously analyzing subtle biosignal shifts, Akida could alert users (or apps) to attention lapses, which is crucial in AR/VR safety.
- Adaptive UI/UX
→ If a user’s biometric data shows increased heart rate or agitation, the UI could adapt – dim lighting, mute notifications, etc.
- On-device Personal Authentication
→ Akida could analyze a user’s unique ECG or PPG pattern to authenticate without needing facial recognition or passcodes.
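A minimal sketch of the stress-monitoring bullet above, assuming a standard HRV metric (RMSSD over RR intervals) and an invented per-user baseline and threshold – purely illustrative, not a clinical method:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    return float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))

def stress_flag(rr_ms, baseline_rmssd, drop=0.3):
    """Flag 'stress' when HRV falls more than `drop` below the user's baseline
    (lower RMSSD generally indicates lower parasympathetic activity)."""
    return rmssd(rr_ms) < (1.0 - drop) * baseline_rmssd

rr_rest  = np.array([820, 845, 810, 860, 835, 850], dtype=float)  # relaxed
rr_tense = np.array([640, 645, 642, 648, 641, 644], dtype=float)  # fast, uniform
baseline = rmssd(rr_rest)
print(stress_flag(rr_rest, baseline))   # False
print(stress_flag(rr_tense, baseline))  # True: successive differences collapse
```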
Limitations
While Akida could support or enhance these use cases, a few caveats apply:
- Sensor interface requirements would need to be adapted (Akida does not have a native PPG/ECG analog front end – that must be paired externally).
- Meta may already have in-house AI pipelines or use Qualcomm’s NPU, making integration dependent on broader ecosystem decisions.
Conclusion
Yes, Akida is well-suited to handle many of the AI inference tasks outlined in Meta’s patent — especially anything involving continuous, energy-efficient biosignal analysis. If Meta values privacy, low power, and real-time responsiveness in their future smart glasses, a neuromorphic edge processor like Akida could be a compelling option.
Would you like a quick visual comparison of Akida vs typical ARM/Qualcomm NPUs for this type of workload?
You said:
yes please
ChatGPT said:
Akida vs Conventional NPU for Smart Glasses AI Workloads
View attachment 87254
The chart above compares BrainChip's Akida processor to a conventional NPU (e.g., Qualcomm's Hexagon) across six key criteria relevant to smart glasses and biosignal analysis:
- Power Efficiency: Akida is exceptionally power-efficient due to its neuromorphic architecture, which is crucial for wearable devices with small batteries.
- Real-Time Latency: Akida's event-driven design allows for faster response times in signal detection, outperforming frame-based NPUs.
- Edge Inference Capability: Both Akida and NPUs can handle edge inference, but Akida’s architecture is optimized for on-device decision-making without preprocessing overhead.
- Adaptive Learning: Akida supports on-device learning (spiking neural networks), something traditional NPUs generally do not, making it useful for personalized biosignal monitoring.
- Privacy: Akida processes data locally, reducing the need for cloud offloading — essential for sensitive health or biometric data.
- Time-Series Signal Analysis: Akida excels in analyzing continuous data streams such as biosignals, ideal for predicting seizures or recognizing user states in smart glasses.
In conclusion, Akida would be a strong fit for the kinds of biosignal-driven applications described in Meta’s patent and Onsor’s glasses.
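A rough back-of-envelope for the power and latency bullets: if only a small fraction of samples cross an event threshold, an event-driven processor does proportionally less work than a frame-based one. Every number below is an assumption chosen for illustration:

```python
samples_per_sec = 100          # e.g., a 100 Hz PPG stream
ops_per_sample  = 10_000       # assumed inference cost per processed input
event_rate      = 0.05         # assume 5% of samples cross the event threshold

frame_based_ops = samples_per_sec * ops_per_sample
event_based_ops = samples_per_sec * event_rate * ops_per_sample
print(f"frame-based: {frame_based_ops:,} ops/s, "
      f"event-based: {event_based_ops:,.0f} ops/s "
      f"({frame_based_ops / event_based_ops:.0f}x fewer)")
```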
Wouldn’t it be great to see us in such a product? Instead, we see our competitors' products shining. MAYBE we get a run off the bench later this year, if the stars line up, that is.
...so, finally you go away...

I don’t know about you guys, but I’ve come to a conclusion: as long as @Bravo isn’t going for a run, this stock is destined to drop… day by day. You think management is clueless? Think again. They read this stuff too. And they probably say to themselves…
‘Why should we announce anything… if she’s not even showing she wants the share price to rise?’
And honestly? I actually considered selling today because of that thought.
You think it sounds irrational… but mark my words: everything is connected…. EVERYTHING!!!! 🫨
Didn’t I say we probably wouldn’t get any more announcements on social media, after we all moaned that they should be putting them on the ASX instead? I think that’s a kick in the balls for us shareholders, and now we get nothing.

Guys… something is brewing… it’s scary… I’ve never experienced this kind of long period without the slightest sign from the company… are they all relaxing on Palm Beach, or are they onto something big!??? Ask for the dean
Like 99% of the things that get hyped here, then nothing.

All that hype about Nintendo and then nothing.
Well, it was 8 years from the 1st Switch to the 2nd, so Switch 4 will probably be around 2044, so please stop down-ramping.

All hype on a lot of things and then nothing. Maybe we will be in the Switch 4 in 2032?
I’d like to see that fit in my hearing aid
Sandia Deploys SpiNNaker2 Neuromorphic System
June 16, 2025, by Jeffrey Burt (www.nextplatform.com)
Some heavy hitters like Intel, IBM, and Google along with a growing number of smaller startups for the past couple of decades have been pushing the development of neuromorphic computing, hardware that looks to mimic the structure and function of the human brain.
It’s a subject whose development The Next Platform has spent a lot of time tracking, from possible operating systems for these machines and military interest to the need for partners and software to help bring useful neuromorphic computing to reality. As with other computing paradigms – quantum comes to mind – these things take time.
That said, sentiment in recent months seemingly has shifted from the technology itself to what will help break it out of its niche status and into the mainstream. Proof-of-concepts are being run and systems are scaling. A goal now appears to be finding that so-called “killer app,” the use case that will propel the technology forward.
Potential abounds. A paper in Nature in January argued that neuromorphic computing is ready to make the leap into production environments, and another Nature paper in April noted the power-efficient capabilities of the technology, pointing to such areas as the Internet of Things and edge processing. AI is another, given the electricity-hungry nature of the GPUs and the systems they power for training AI models. Neuromorphic systems may be able to alleviate some of these demands.
Flying The Efficiency Flag
SpiNNcloud, the four-year-old company that in 2021 spun out of the Dresden University of Technology and whose chip architecture – SpiNNaker1 (the chip on the left below) – was designed by Steve Furber, the driving force behind the creation of the Arm microprocessor, is carrying that message of power efficiency and AI. On the landing page of its website, the company touts its SpiNNaker2 as the foundation of the “ultra energy-efficient infrastructure for new-generation AI inference,” at 18 times more efficient than the GPUs that are powering many AI systems now.
The upcoming successor, SpiNNext (on the right), will come in 78 times more efficient, according to the company.
SpiNNcloud will now be able to put the SpiNNaker2 architecture to the test. The German company launched the hybrid AI-HPC platform, made it commercially available, and said that Sandia National Laboratories – along with institutions like the Technical University of München and Universität Göttingen in Germany – was among its first customers.
This week, company executives announced that Sandia has deployed SpiNNaker2, which simulates about 175 million neurons and is among the top five largest computing platforms based on how the human brain works.
(Photo Credit: Craig Fritz, Sandia National Labs)
“Last time was a generic announcement: We started to work with Sandia,” Hector Gonzalez, co-founder and chief executive officer for SpiNNcloud, told The Next Platform. “They received the system, the supercomputer, and they’re going to be working with it in a few applications.”
24 Boards With 48 Chips Each
What the Sandia scientists stood up is a highly parallel architecture with 24 boards, each of which holds 48 SpiNNaker2 chips that are interconnected in toroidal topologies. Each microchip has 152 Arm-based low-power processing elements that are interconnected in a network-on-chip, Gonzalez said.
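Those figures can be sanity-checked with simple arithmetic (a quick check against the article's own numbers, nothing more):

```python
# Sanity check of the system's scale from the figures in the article.
boards = 24
chips_per_board = 48
pes_per_chip = 152            # Arm-based processing elements per SpiNNaker2 chip

cores = boards * chips_per_board * pes_per_chip
print(cores)                  # 175104 -- matches the "175K cores" quoted below

# ~175 million simulated neurons across 175,104 cores implies roughly
# 1,000 neurons per core.
print(175_000_000 // cores)   # 999
```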
“The microchips get grouped into boards of 48 chips, and then these boards get also interconnected between boards to boards,” he said. “We have high-speed links that have been custom designed to expand the hierarchies of the boards. Then we interconnect the boards in a strategic way so that you build those toroidal large-scale networks. You fold them strategically so that you always ensure the shortest communication path. There is actually software that we use to find the right connectivity so the system starts to send packages and packets. Then there is a lead-based system that we identify for wiring the infrastructure. Once it’s wired, you put them into rack-based systems. Then we built large-scale systems using this technology.”
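The "fold them strategically so that you always ensure the shortest communication path" point has a compact form: on a toroidal network, traffic can wrap around, so the per-dimension hop distance is min(d, size - d). A toy sketch, with ring sizes chosen arbitrarily:

```python
def torus_hops(a, b, size):
    """Shortest hop count between positions a and b on a ring of `size` nodes."""
    d = abs(a - b)
    return min(d, size - d)

# On a 24-node ring, node 1 to node 23 is 2 hops via the wrap-around link,
# not 22 hops as it would be on a plain line of nodes.
print(torus_hops(1, 23, 24))   # 2
print(torus_hops(0, 12, 24))   # 12 (worst case is size // 2)

# Multi-dimensional torus distance is just the per-dimension sum:
print(torus_hops(1, 23, 24) + torus_hops(3, 9, 16))   # 2 + 6 = 8
```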
He noted the power-efficiency advantage over GPUs as well as other benefits. “Something that you can do that you cannot do with a GPU is that you can have super-fine granular control of all these 175K cores. This is one of the distinguishing factors. The system is a globally asynchronous, locally synchronous, so the individual processes can be fully controlled and you can fully isolate paths within the processor. This is … very difficult to do in a GPU because essentially in a GPU, you have these streams of multi-processors [where it’s] harder to isolate the paths.”
In addition, Gonzalez said the microchip is not what’s typically found in a neuromorphic system because it doesn’t commit to spiking neurons [which mimic how the brain processes information through electrical pulses]. With SpiNNaker2, “you can pretty much implement [and] leverage these event-based characteristics from the neuromorphic domain, even in the mainstream DNN domain, even in mainstream deep neural networks. At the same time, it also lets you scale up neural symbolic models. Because it’s fully programmable, you can actually scale up neural symbolic modes, like reasoners that have a symbolic layer, where at the same time you have neural layers.”
Sandia Labs is no stranger to neuromorphic computing. A year ago, it added to its arsenal with the Hala Point system powered by Intel’s Loihi 2 neuromorphic processor, using the system to test AI workloads and how the performance compares with systems running on CPUs, GPUs, and other chips. The addition of SpiNNaker2 is part of the same ongoing initiative at Sandia to use such architectures to run energy-efficient AI applications that consume less power than traditional GPU-based systems, the company said.
SpiNNcloud’s Gonzalez outlined a number of applications for SpiNNaker2, including small multilayer perceptrons (MLPs) that are deployed at scale in every processor. The MLPs are small, so using a GPU would be overkill, he said, “but then you have many, many of them, and these MLPs are designed to find molecules. It’s designed to have pattern matching between molecules in the drug discovery processes and also databases of profiles of patients. This is a strategy to do highly parallel drug discovery very efficiently.”
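The "many small MLPs" workload is easy to caricature. Below is a toy sketch: a tiny, randomly initialized MLP scoring a match between two feature vectors, standing in for the molecule-versus-patient-profile matching described; every size, weight, and name is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 32)), np.zeros(16)   # hidden layer
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)     # score head

def mlp_match_score(molecule, profile):
    """Small enough to run on one low-power core; returns a match score."""
    x = np.concatenate([molecule, profile])            # 32-dim joint input
    h = np.maximum(0.0, W1 @ x + b1)                   # ReLU hidden layer
    return float(W2 @ h + b2)                          # unnormalized score

# Each core would score its own shard of the molecule database in parallel.
molecule = rng.standard_normal(16)
profile = rng.standard_normal(16)
print(mlp_match_score(molecule, profile))
```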
Others include QUBO-based optimization or logistics problems that can address different types of complex mathematical simulations and challenges that involve random worker algorithms, which use randomness to explore solutions to various problems.
“You just simulate this worker so you deploy this worker at scale and then you can do complex mathematical simulation leveraging the large-scale characteristics of the system,” Gonzalez said.
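A toy illustration of that random-worker pattern on a QUBO (minimize E(x) = xᵀQx over binary x). The matrix, problem size, and search budget are all made up; a real deployment would run one worker per core rather than a Python loop:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                          # symmetric QUBO matrix

def energy(x):
    return float(x @ Q @ x)

def random_worker(steps=2000):
    """One worker: random start, keep single-bit flips that lower the energy."""
    x = rng.integers(0, 2, n)
    e = energy(x)
    for _ in range(steps):
        i = rng.integers(n)
        x[i] ^= 1                          # try flipping one bit
        e_new = energy(x)
        if e_new < e:
            e = e_new                      # keep the improvement
        else:
            x[i] ^= 1                      # revert the flip
    return e, x

# Deploy many independent workers (one per core, in the SpiNNaker2 picture)
# and keep the best answer any of them found.
best_e, best_x = min((random_worker() for _ in range(50)), key=lambda r: r[0])
print(best_e, best_x)
```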
A Call For Sparsity
SpiNNcloud will continue to develop the architecture to support generative AI algorithms that run machine learning workloads through dynamic sparsity. That push is fueled by recent breakthroughs in machine learning that are moving the industry from dense modeling to extreme dynamic sparsity, where only a subset of neural pathways is activated based on the input to the system – an approach the company said helps address the energy challenges in current AI computing.
“There is very interesting directions where you get to find [and] granularly execute only parts of the network to retrieve the outputs,” Gonzalez said. “This is what is known today as mixture of experts. People in this field have shown that the larger the number of experts you have, you can actually reduce the computational work. The sparsity is very large. You have a computational footprint that’s very small. You get to reduce significantly the computational cost of these models. The problem is that standard hardware today is not designed for this fine granular isolation of paths, and this is where this type of very hybrid hardware that has characteristics from neuromorphic – so it has event-based communication – has a huge impact to offer into this mainstream AI domain. Essentially, you get to isolate paths, whereas the standard architectures like GPUs and all the GPU derivatives – Cerebras, SambaNova – they are optimized towards the regular 10 cases. They work better when you have fully utilized blocks.”
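The mixture-of-experts point reduces to a small routing sketch: a gate picks the top-k of E experts per input, so only k/E of the expert compute executes. Sizes below are arbitrary; this is the generic technique, not SpiNNcloud's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
E, d, k = 16, 32, 2                       # 16 experts, pick 2 per input
gate_W = rng.standard_normal((E, d))
experts = [rng.standard_normal((d, d)) for _ in range(E)]

def moe_forward(x):
    logits = gate_W @ x
    top = np.argsort(logits)[-k:]         # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts
    # Only the selected experts execute: 2/16 of the dense compute here.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

y = moe_forward(rng.standard_normal(d))
print(y.shape)   # (32,) -- same output shape, ~8x less expert compute
```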
You are very late. This update has already been available for a couple of days, and it was discussed in the forums.
Just like the promised revenue........ Coming Soon!