BRN Discussion Ongoing

equanimous

Norse clairvoyant shapeshifter goddess
One sentence seems a little odd, though, given we do not know of any commercially available edge medical device that uses Akida:

What do the pre-print authors mean by “While the Akida chip is commercially used in edge medical devices…”? Would this info make it through a peer-reviewed process?

Even the Onsor NEXA glasses are still undergoing clinical trials and - provided they’ll be given the green light - are slated for release next year.
Hi Frangipani,

Good chance it's through Tata.

 
  • Like
  • Love
  • Fire
Reactions: 17 users

7für7

Top 20
Guys… something is brewing… it’s scary… I’ve never experienced this kind of long period without the slightest sign from the company… are they all relaxing on Palm Beach, or are they onto something big!??? Ask for the dean
 
  • Thinking
  • Like
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I asked ChatGPT whether BrainChip's AKIDA could be used for the applications outlined in Meta's patent, as shown below.

Screenshot 2025-06-18 at 10.49.41 am.png

ChatGPT said:​

Yes, a neuromorphic processor like BrainChip’s Akida could absolutely be used for many of the applications outlined in Meta’s patent — and in some ways, it may offer significant advantages over conventional processors.

🔬 Summary of Meta’s Patent Use Cases (US20230172468)​

Meta's patent outlines biosignal monitoring in AR/smart glasses, including:
  • Heart rate and heart rate variability
  • Blood pressure estimation
  • Electrocardiogram (ECG) waveform analysis
  • Stress or fatigue detection
  • User authentication via biosignals
These involve continuous, time-series data from PPG and/or ECG sensors — which are ideal for real-time analysis with minimal latency and energy overhead.

🧠 Why Akida Is a Good Fit​

Feature | Meta’s Needs | Akida’s Strength
Event-driven signal processing | Intermittent or subtle biosignal changes (e.g., arrhythmia, stress response) | Akida excels at analyzing sparse, time-dependent data like ECG/PPG without constantly polling a CPU
Ultra-low power | Critical for all-day wear in AR/smart glasses | Akida is designed for always-on processing with minimal power draw
Edge inference | Requires local, private on-device processing | Akida enables AI inference directly on-device without cloud reliance
Time-series pattern detection | Needed for stress, fatigue, HRV anomalies | Akida supports temporal pattern recognition and can learn from evolving signal patterns using spiking neural networks
Personalization | Biosignal baselines vary per user | Akida supports incremental on-device learning for personalized AI models
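
To make the "Event-driven signal processing" row concrete, here is a minimal, editor-added sketch in plain NumPy (not BrainChip's API): samples only become "events" when they change by more than a threshold, which is the kind of sparsity an event-driven processor exploits.

```python
import numpy as np

# Toy delta/event encoding of a PPG-like signal: only samples that move
# by more than a threshold become "events"; everything else is skipped.
# (Editorial sketch of the concept, not BrainChip's actual API.)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)                      # 10 s at 200 Hz
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)

threshold = 0.1
events, last = [], ppg[0]
for i, sample in enumerate(ppg):
    if abs(sample - last) > threshold:            # emit an event only on change
        events.append((i, sample - last))         # (time index, signed delta)
        last = sample

print(f"{len(events)} events from {ppg.size} samples "
      f"({100 * len(events) / ppg.size:.1f}% kept)")
```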


🧩 Example Scenarios Where Akida Could Enhance Meta’s Applications​

  1. Stress Monitoring
    → Akida could detect deviations in heart rate variability or galvanic skin response over time and classify them as stress patterns (a toy version appears in the sketch after this list).
  2. Fatigue Detection for AR Use
    → By continuously analyzing subtle biosignal shifts, Akida could alert users (or apps) to attention lapses, which is crucial in AR/VR safety.
  3. Adaptive UI/UX
    → If a user’s biometric data shows increased heart rate or agitation, the UI could adapt — dim lighting, mute notifications, etc.
  4. On-device Personal Authentication
    → Akida could analyze a user’s unique ECG or PPG pattern to authenticate without needing facial recognition or passcodes.
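
Picking up scenario 1, here is a toy, editor-added Python sketch of an HRV-based stress check. RMSSD is a standard short-term HRV metric; the sample data, the baseline values, and the 0.5x rule are invented for illustration and are not from BrainChip or Meta.

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (ms)."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

baseline_rr = np.array([820, 845, 810, 860, 830, 855, 815, 850])  # relaxed
current_rr = np.array([640, 642, 639, 641, 640, 643, 641, 640])   # fast, rigid

baseline, current = rmssd(baseline_rr), rmssd(current_rr)

# Hypothetical rule: a large RMSSD drop vs. the personal baseline is
# flagged as a possible stress response.
if current < 0.5 * baseline:
    print(f"possible stress: RMSSD {current:.1f} ms vs baseline {baseline:.1f} ms")
```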

⚠️ Limitations​

While Akida could support or enhance these use cases:
  • Sensor interface requirements would need to be adapted (Akida does not include a native PPG/ECG analog front end; one must be paired externally).
  • Meta may already have in-house AI pipelines or use Qualcomm’s NPU, making integration dependent on broader ecosystem decisions.

✅ Conclusion​

Yes, Akida is well-suited to handle many of the AI inference tasks outlined in Meta’s patent — especially anything involving continuous, energy-efficient biosignal analysis. If Meta values privacy, low power, and real-time responsiveness in their future smart glasses, a neuromorphic edge processor like Akida could be a compelling option.
Would you like a quick visual comparison of Akida vs typical ARM/Qualcomm NPUs for this type of workload?

You said:​

yes please

ChatGPT said:​

Akida vs Conventional NPU for Smart Glasses AI Workloads

Screenshot 2025-06-18 at 10.48.14 am.png



The chart above compares BrainChip's Akida processor to a conventional NPU (e.g. Qualcomm's Hexagon) across six key criteria relevant to smart glasses and biosignal analysis:
  • Power Efficiency: Akida is exceptionally power-efficient due to its neuromorphic architecture, which is crucial for wearable devices with small batteries.
  • Real-Time Latency: Akida's event-driven design allows for faster response times in signal detection, outperforming frame-based NPUs.
  • Edge Inference Capability: Both Akida and NPUs can handle edge inference, but Akida’s architecture is optimized for on-device decision-making without preprocessing overhead.
  • Adaptive Learning: Akida supports on-device learning (spiking neural networks), something traditional NPUs generally do not, making it useful for personalized biosignal monitoring.
  • Privacy: Akida processes data locally, reducing the need for cloud offloading—essential for sensitive health or biometric data.
  • Time-Series Signal Analysis: Akida excels in analyzing continuous data streams such as biosignals, ideal for predicting seizures or recognizing user states in smart glasses.
In conclusion, Akida would be a strong fit for the kinds of biosignal-driven applications described in Meta’s patent and Onsor’s glasses.

 
  • Like
  • Love
  • Fire
Reactions: 29 users

gilti

Regular
Manipulation? No, never.
manipulation.JPG
 
  • Like
  • Sad
Reactions: 10 users

7für7

Top 20
[Quoted post by Bravo: the ChatGPT exchange on Akida and Meta’s patent (US20230172468), reproduced in full above]


Could… Akida would fit in every device, if you will… but there are some people, who are not visible, who think it’s not a good idea… and I have the feeling they have a lot of money and influence……..a….lot….of….money…….and, a….lot….of….influence……


(If you don’t have a logical explanation for something, just drop some conspiracy theories lol)

Here are some theories from MY ChatGPT 😂

🕴️ Conspiracy Theory #1:

The Cloud Mafia Wants Your Data Brain

If devices like smartphones, VR headsets, or wearables become too smart — e.g. through Akida — they no longer need the cloud. No data uploads, no subscription models, no real-time user tracking.

💰 That’s bad for business for:
• Meta
• Google
• Amazon
• Apple (only half guilty 😇)

The theory: They are blocking the spread of real edge AI because their business models rely on centralized control.
Local intelligence = loss of control.



🕵️ Conspiracy Theory #2:

Governments Love Dumb Devices

Smart edge devices that learn locally and don’t upload data are a problem for surveillance systems.
Imagine glasses that recognize who you are but don’t send anything to the cloud — no backdoor, no access for third parties.

So these kinds of technologies are “accidentally”:
• not standardized
• not subsidized
• not built into mainstream hardware



🧬 Conspiracy Theory #3:

The Competition Fears the Biological

Akida works with spikes, like real neurons.
That’s fundamentally different from classic CPUs or GPUs.
Companies that have invested billions in conventional chips don’t want this paradigm shift — because it would mean rethinking everything from scratch.

Result: Lobbying against “exotic” architectures.
And yes – many of those decision-makers have:
A… lot… of… money… and a… lot… of… influence. 😏



🧘‍♂️ Realistic Perspective:

What’s needed:
• Brave companies (startups or independent OEMs)
• Open hardware initiatives
• Pressure from developers and users who want true edge AI
 
Last edited:
  • Like
  • Fire
  • Haha
Reactions: 7 users

7für7

Top 20
I love her 😂😂😂😂😂😂

19B60689-3C99-43CD-B4D6-8719603DD5FC.png
 
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 4 users

7für7

Top 20
ATTENTION!!!!! ONLY FOR LONG-TERM HOLDERS!!!!!

Based on ChatGPT



🚀 Current Status & Technology of BrainChip

1. Akida Platform
• Akida™ is a fully digital, event-based edge AI processor designed for ultra-low power consumption (sometimes under 1 mW) and supports on-chip learning.
• The current version (2nd generation, March 2023) supports 8-bit weights, Vision Transformers (ViT), and Temporal Event-Based Neural Networks (TENNs) – optimized for voice, radar, and vision tasks at the edge.

2. Form Factors & Products
• BrainChip already offers developer boards in M.2 format with the AKD1000 chip – plug-and-play for prototyping (see the hedged sketch after this list).
• Additional embedded products and Edge AI box ecosystems are being promoted, as seen in the “Edge AI Box Ecosystem Launch” release.
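
For a rough idea of the prototyping flow on such a board, here is a hedged sketch assuming BrainChip's published MetaTF packages (akida and cnn2snn); the function names follow their docs, but exact signatures should be verified against the current release.

```python
# Hedged sketch of running inference on an AKD1000 dev board, assuming
# BrainChip's MetaTF packages (akida, cnn2snn). Verify names/signatures
# against the current release before relying on this.
import akida
from cnn2snn import convert

def run_on_akida(quantized_keras_model, samples):
    """Convert a quantized Keras model and run it on attached Akida hardware."""
    model = convert(quantized_keras_model)  # Keras -> Akida model
    devices = akida.devices()               # attached chips (e.g. the M.2 AKD1000)
    if devices:
        model.map(devices[0])               # execute on-chip instead of in software
    return model.predict(samples)           # inference on a preprocessed batch
```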

3. Strategic Partnerships
• Key partners include Frontgrade Gaisler (aerospace), MegaChips, Edge Impulse, Prophesee, SiFive, Renesas, and Intel Foundry Services, among others.
• Use cases span gesture recognition, epilepsy-monitoring smart glasses, radar-based ISR platforms, and other real-time edge applications.



🗓️ Technology Roadmap & Milestones
• April 22, 2025 (AGM): CDO Dr. Jonathan Tapson presented the updated roadmap:
• Focus on more configurable IP for diverse edge applications.
• Expansion of TENNs and ViT integration; platform-based approach for commercial use cases.
• Embedded Vision Summit (May 20–22, 2025): CTO Dr. Peter Lewis demonstrated how state-space models and TENNs outperform Transformers in extreme edge conditions.
• Next steps include broader IP licensing to SoC makers and OEMs across sectors like automotive, wearables, defense, and space.



📈 Financials & Market Overview
• Publicly traded on the ASX (BRN) and OTC markets (OTCQX: BRCHF/BCHPY).
• As of the last earnings report: moderate revenue growth, net loss around US$ 8.4M – typical for a scaling tech company, but underlines ongoing capital needs.
• Notably, major investor LDA Capital has exercised option tranches – showing institutional interest despite volatility.



🧭 Realistic Outlook for Long-Term Holders

Area | Potential & Risks
Technology | Event-based, ultra-low power, and on-chip learning position Akida strongly for the future of edge AI. TENNs and ViTs are a solid evolution.
Market & Use Cases | Strategic partnerships open diverse markets – but real-world adoption is still in early stages. The platform strategy could take time.
Financials | Losses are expected at this stage. Success depends on timely and broad IP licensing.
Timeline | Key developments expected over the next 12–24 months: licensing deals, OEM integrations, and potentially first mass-market products.




🎯 Final Verdict – Why This Has Long-Term Potential
1. Technological lead in low-power edge AI through event-driven neuromorphic architecture.
2. Strong strategic alliances across key growth industries.
3. Focused roadmap with clear licensing and platform scale-up strategy.
4. Still: capital requirements and long sales cycles remain risks.
 
  • Fire
  • Like
  • Love
Reactions: 10 users
[Quoted post by Bravo: the ChatGPT exchange on Akida and Meta’s patent (US20230172468), reproduced in full above]

Wouldn't it be great to see us in such a product? Instead, we see our competitors' products shining. MAYBE we get a run off the bench later this year, if the stars line up, that is. 🤔 Venting.
 
Last edited:
  • Like
  • Sad
Reactions: 5 users

7für7

Top 20
I don’t know about you guys, but I’ve come to a conclusion: as long as @Bravo isn’t going for a run, this stock is destined to drop… day by day. You think management is clueless? Think again. They read this stuff too. And they probably say to themselves…
“Why should we announce anything… if she’s not even showing she wants the share price to rise?”

And honestly? I actually considered selling today because of that thought.
You think it sounds irrational… but mark my words: everything is connected…. EVERYTHING!!!! 🫨
 
  • Haha
Reactions: 2 users

itsol4605

Regular
I don’t know about you guys, but I’ve come to a conclusion: as long as @Bravo isn’t going for a run, this stock is destined to drop… day by day. You think management is clueless? Think again. They read this stuff too. And they probably say to themselves…
“Why should we announce anything… if she’s not even showing she wants the share price to rise?”

And honestly? I actually considered selling today because of that thought.
You think it sounds irrational… but mark my words: everything is connected…. EVERYTHING!!!! 🫨
...so, finally you go away...
 
  • Haha
  • Like
  • Fire
Reactions: 8 users
Guys… something is brewing… it’s scary… I’ve never experienced this kind of long period without the slightest sign from the company… are they all relaxing on Palm Beach, or are they onto something big!??? Ask for the dean
Didn’t I say we probably wouldn’t get any more announcements on social media after we all moaned that they should be putting them on the ASX instead? I think that’s a kick in the balls for us shareholders, and now we get nothing 😂


1750224584738.gif
 
  • Haha
  • Like
Reactions: 6 users
Some large numbers being sold at .21. Let’s see the short numbers in a week


1750227923580.gif

Unless selling due to EOY taxes
 
  • Like
Reactions: 3 users

mkg6R

Member
All hype on a lot of things and then nothing. Maybe it will be in the Switch 4 in 2032?
 
  • Haha
  • Like
Reactions: 2 users
All hype on a lot of things and then nothing. Maybe it will be in the Switch 4 in 2032?
Well, it was 8 years from the 1st Switch to the 2nd, so the Switch 4 will probably be around 2044, so please stop down ramping 😂
 
Last edited:
  • Haha
  • Fire
  • Like
Reactions: 4 users

CHIPS

Regular
:cautious::censored:


Sandia Deploys SpiNNaker2 Neuromorphic System​

June 16, 2025 Jeffrey Burt
SpiNNcloud-chips-1030x438.png

Some heavy hitters like Intel, IBM, and Google, along with a growing number of smaller startups, have for the past couple of decades been pushing the development of neuromorphic computing: hardware that looks to mimic the structure and function of the human brain.
It’s a subject whose development The Next Platform has spent a lot of time tracking, from possible operating systems for these systems and military interest to the need for partners and software to help bring useful neuromorphic computing to reality. As with other computing paradigms – quantum comes to mind – these things take time.
That said, sentiment in recent months seemingly has shifted from the technology itself to what will help break it out of its niche status and into the mainstream. Proof-of-concepts are being run and systems are scaling. A goal now appears to be finding that so-called “killer app,” the use case that will propel the technology forward.

Potential abounds. A paper in Nature in January argued that neuromorphic computing is ready to make the leap into production environments, and another Nature paper in April noted the power-efficient capabilities of the technology, pointing to such areas as the Internet of Things and edge processing. AI is another area, given the electricity-hungry nature of the GPUs and the systems they power for training AI models. Neuromorphic systems may be able to alleviate some of these demands.

Flying The Efficiency Flag​

SpiNNcloud, the four-year-old company that in 2021 spun out of the Dresden University of Technology and whose chip architecture – SpiNNaker1 (the chip on the left below) – was designed by Steve Furber, the driving force behind the creation of the Arm microprocessor, is carrying that message of power efficiency and AI. On the landing page of its website, the company touts its SpiNNaker2 as the foundation of the “ultra energy-efficient infrastructure for new-generation AI inference,” at 18 times more efficient than the GPUs that are powering many AI systems now.
The upcoming successor, SpiNNext (on the right), will come in at 78 times more efficient, according to the company.

SpiNNcloud will now be able to put the SpiNNaker2 architecture to the test. The German company launched the hybrid AI-level HPC platform, made it commercially available, and said that Sandia National Laboratories – along with institutions like the Technical University of München and Universität Göttingen in Germany – was among its first customers.
This week, company executives announced that Sandia has deployed SpiNNaker2, which simulates about 175 million neurons and is among the top five largest computing platforms based on how the human brain works.
(Photo Credit: Craig Fritz, Sandia National Labs)
“Last time was a generic announcement: We started to work with Sandia,” Hector Gonzalez, co-founder and chief executive officer for SpiNNcloud, told The Next Platform. “They received the system, the supercomputer, and they’re going to be working with it in a few applications.”

24 Boards With 48 Chips Each​

What the Sandia scientists stood up is a highly parallel architecture with 24 boards, each of which holds 48 SpiNNaker2 chips that are interconnected in toroidal topologies. Each microchip has 152 Arm-based low-power processing elements that are interconnected in a network-on-chip, Gonzalez said.

“The microchips get grouped into boards of 48 chips, and then these boards get also interconnected between boards to boards,” he said. “We have high-speed links that have been custom designed to expand the hierarchies of the boards. Then we interconnect the boards in a strategic way so that you build those toroidal large-scale networks. You fold them strategically so that you always ensure the shortest communication path. There is actually software that we use to find the right connectivity so the system starts to send packages and packets. Then there is a lead-based system that we identify for wiring the infrastructure. Once it’s wired, you put them into rack-based systems. Then we built large-scale systems using this technology.”
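
As a quick sanity check, the hierarchy described above multiplies out to the "175K cores" Gonzalez mentions next:

```python
# Scale check using the article's own numbers.
boards, chips_per_board, cores_per_chip = 24, 48, 152
cores = boards * chips_per_board * cores_per_chip
print(cores)                   # 175104 -> the "175K cores" quoted below

neurons = 175_000_000          # "simulates about 175 million neurons"
print(round(neurons / cores))  # roughly 1,000 simulated neurons per Arm core
```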

He noted the power-efficiency advantage over GPUs as well as other benefits. “Something that you can do that you cannot do with a GPU is that you can have super-fine granular control of all these 175K cores. This is one of the distinguishing factors. The system is a globally asynchronous, locally synchronous, so the individual processes can be fully controlled and you can fully isolate paths within the processor. This is … very difficult to do in a GPU because essentially in a GPU, you have these streams of multi-processors [where it’s] harder to isolate the paths.”
In addition, Gonzalez said the microchip is not what’s typically found in a neuromorphic system because it doesn’t commit to spiking neurons [which mimic how the brain processes information through electrical pulses]. “With SpiNNaker2, you can pretty much implement [and] leverage these event-based characteristics from the neuromorphic domain, even in the mainstream DNN domain, even in mainstream deep neural networks. At the same time, it also lets you scale up neural symbolic models. Because it’s fully programmable, you can actually scale up neural symbolic models, like reasoners that have a symbolic layer, where at the same time you have neural layers.”

Sandia Labs is no stranger to neuromorphic computing. A year ago, it added to its arsenal with the Hala Point system powered by Intel’s Loihi 2 neuromorphic processor, using the system to test AI workloads and how the performance compares with systems running on CPUs, GPUs, and other chips. The addition of SpiNNaker2 is part of the same ongoing initiative at Sandia to use such architectures to run energy-efficient AI applications that consume less power than traditional GPU-based systems, the company said.

SpiNNcloud’s Gonzalez outlined a number of applications for SpiNNaker2, including small multilayer perceptrons (MLPs) that are deployed at scale in every processor. The MLPs are small, so using a GPU would be overkill, he said, “but then you have many, many of them, and these MLPs are designed to find molecules. It’s designed to have pattern matching between molecules in the drug discovery processes and also databases of profiles of patients. This is a strategy to do highly parallel drug discovery very efficiently.”

Others include QUBO-based optimization and logistics problems that can address different types of complex mathematical simulations, as well as challenges that involve random worker algorithms, which use randomness to explore solutions to various problems.
“You just simulate this worker so you deploy this worker at scale and then you can do complex mathematical simulation leveraging the large-scale characteristics of the system,” Gonzalez said.
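
To illustrate the "random worker" idea, here is a toy, editor-added sketch (not SpiNNcloud's code): each worker samples random binary vectors against a QUBO objective, and deploying many such workers, one per core, parallelizes the search.

```python
import numpy as np

# Toy "random worker" on a QUBO problem: minimize x^T Q x over binary x.
# (Editorial sketch of the idea, not SpiNNcloud's implementation.)
rng = np.random.default_rng(42)
n = 12
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2                    # symmetric QUBO matrix

best_x, best_e = None, float("inf")
for _ in range(10_000):              # one worker's random exploration
    x = rng.integers(0, 2, n)
    e = float(x @ Q @ x)
    if e < best_e:
        best_x, best_e = x, e

print(f"best energy {best_e:.3f} with x = {best_x}")
```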

A Call For Sparsity​

SpiNNcloud will continue to develop the architecture to support generative AI algorithms that run machine learning workloads through dynamic sparsity. This is fueled by recent breakthroughs in machine learning that are moving the industry from dense modeling to extreme dynamic sparsity, where only a subset of neural pathways is activated based on the input to the system, which the company said helps address the energy challenges in current AI computing.

“There is very interesting directions where you get to find [and] granularly execute only parts of the network to retrieve the outputs,” Gonzalez said. “This is what is known today as mixture of experts. People in this field have shown that the larger the number of experts you have, you can actually reduce the computational work. The sparsity is very large. You have a computational footprint that’s very small. You get to reduce significantly the computational cost of these models. The problem is that standard hardware today is not designed for this fine granular isolation of paths, and this is where this type of very hybrid hardware that has characteristics from neuromorphic – so it has event-based communication – has a huge impact to offer into this mainstream AI domain. Essentially, you get to isolate paths, whereas the standard architectures like GPUs and all the GPU derivatives – Cerebras, SambaNova – they are optimized towards the regular 10 cases. They work better when you have fully utilized blocks.”
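
As a concrete illustration of the dynamic-sparsity argument, here is a toy, editor-added mixture-of-experts step in NumPy (not SpiNNcloud's implementation): a gate scores every expert, but only the top-k are executed, so compute scales with k rather than with the total expert count.

```python
import numpy as np

# Toy mixture-of-experts step: score all experts cheaply, execute only
# the top-k. Only k of n_experts weight matrices are ever multiplied.
rng = np.random.default_rng(0)
d, n_experts, k = 16, 8, 2

x = rng.standard_normal(d)                    # input activation
gate_w = rng.standard_normal((n_experts, d))  # gating weights
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

scores = gate_w @ x
top_k = np.argsort(scores)[-k:]               # indices of the k best experts

weights = np.exp(scores[top_k])
weights /= weights.sum()                      # softmax over the selected experts

y = sum(w * (experts[i] @ x) for w, i in zip(weights, top_k))
print(f"executed {k}/{n_experts} experts; output norm {np.linalg.norm(y):.3f}")
```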
 
  • Like
  • Thinking
  • Fire
Reactions: 6 users
[Quoted post by CHIPS: the SpiNNcloud/Sandia SpiNNaker2 article, reproduced in full above]
I’d like to see that fit in my hearing aid

1750233125227.gif
 
  • Haha
  • Like
Reactions: 5 users

GStocks123

Regular
Anyone seen Mr. Hehir 🔭
 
  • Like
Reactions: 1 users

7für7

Top 20
Just saw it in the German forum…

Coming soon!?!? 🤔
IMG_4656.jpeg
 
  • Like
Reactions: 1 users

itsol4605

Regular
Just saw it in the German forum…

Coming soon!?!? 🤔 View attachment 87282
You're very late. This update has already been available for a couple of days, and it was discussed in the forums.

Please remember: You want to sell your BC shares!!
 