BRN Discussion Ongoing

From the AFRL FY25 Facilities book.

I wonder whose 3U VPX board in the Edge (neuromorphic) space they've been playing with.

Suspect Bascom Hunter's SNAP card, as we know.

[Two screenshots from the AFRL FY25 Facilities book attached]



 
  • Like
  • Fire
  • Love
Reactions: 22 users

7für7

Top 20
ok who was the one with small change of over $660,000 in their pocket that just wiped out the 22c line, over 3 mill purchased at 10:35:40

wasn't me
So basically you sold because it went back to 21.8 🙄
 

MDhere

Top 20
  • Haha
  • Like
  • Love
Reactions: 4 users

7für7

Top 20

Bravo

If ARM was an arm, BRN would be its biceps💪!
I asked ChatGPT about V2X technology and where Qualcomm, Valeo and BrainChip might intersect in future in this area





ChatGPT's Response:

Qualcomm’s V2X platforms deliver low latency communications, which are great for time-sensitive messaging—but they’re not designed for ultra‑low latency neuromorphic inference like BrainChip’s Akida+TENNs.



🛣️ What Qualcomm Offers (V2X Context)​

These systems excel at fast message delivery but don’t include event-driven neural processing.



🤖 Qualcomm & AI: Edge, Not Neuromorphic​

  • Qualcomm’s X85/82 modems feature AI data traffic engines and network‑edge NPU cores (e.g., up to 40 TOPS).
  • Their Hexagon DSP/NPU accelerators (in Snapdragon chips) deliver on-device AI for vision, audio, and general inference tasks.
  • Zeroth, their 2015 brain‑inspired software, signals early interest in event‑based approaches.
Bottom line: Qualcomm does offer on‑device AI and cognitive traffic optimization—but not spiking neural networks or state‑space neuromorphic architectures like TENNs.



⚖️ Comparison: Qualcomm vs. Akida+TENNs​


Feature | Qualcomm C‑V2X / Hexagon DSP | BrainChip Akida + TENNs
Latency | Low-latency packet processing; not event-driven inference | Ultra-low latency, event-based temporal processing
Power Efficiency | Optimized for general inference, moderate power | Exceptionally low power (<mW), ideal for SWaP-critical systems
AI Model Style | CNNs, DNNs, traffic optimization; floating/integer precision | Spiking, state-space, event-driven; integer precision
Use Cases | V2X messaging; on-device vision/audio | Sensor fusion, gesture detection, radar signal classification



✅ Could Qualcomm Compete with Akida?​

  • Qualcomm can handle low-latency messaging and general AI, but lacks neuromorphic temporal processing.
  • It’s unlikely Qualcomm’s V2X will match Akida+TENNs’ edge efficiency and event-based inference.



🔮 Implications for Collaboration and Market Strategy​


  • Qualcomm covers the low-latency messaging layer; BrainChip fills the inference gap.
  • Valeo (and others) combining Qualcomm’s V2X communications with BrainChip’s edge AI would yield better sensor fusion and situational awareness.
  • Expect future V2X systems to blend both—Qualcomm for connectivity, BrainChip for real-time inference on sensor data.


📝 Summary​

Qualcomm definitely brings strong low-latency communications and general AI via Hexagon DSPs, but does not cover the ultra-efficient event-driven inferencing required for edge neuromorphic processing like TENNs. Instead, Qualcomm and BrainChip are complementary, not overlapping—making their partnership (direct or via Valeo) a strategic win.









🧩 How Qualcomm & Valeo Partner on V2X & ADAS​

1. ADAS Domain Controllers on Snapdragon Ride


Valeo develops ADAS and parking systems (e.g., Park4U®) hosted on Qualcomm’s Snapdragon Ride platform—a scalable automotive SoC family optimized for vision processing and sensor fusion.

2. Extended Collaboration (MoU) for New Mobility


In November 2023, Valeo and Qualcomm signed an MoU to co-develop connected systems for 2- and 3-wheeler vehicles in India, combining Qualcomm Snapdragon Digital Chassis solutions with Valeo’s telematics, displays, sensors, and cybersecurity.

3. V2X Integration Possibility

So yes—Valeo’s V2X platform likely leverages Qualcomm’s Snapdragon vehicle connectivity stack (for 4G/5G + C‑V2X), though this is often a behind-the-scenes integration.



🔧 Is It Using Snapdragon Ride?​


Yes, Valeo’s ADAS controllers—including Park4U® and domain controllers like in BMW’s Neue Klasse—run on Snapdragon Ride SoCs.
These SoCs support high-performance sensor fusion and are built for automotive-grade safety and functionality.

⚡ Summary​

  • Qualcomm ➝ provides connectivity (4G/5G, C‑V2X, now via Autotalks), and compute (Snapdragon Ride).
  • Valeo ➝ integrates sensors and software, deploying them on Snapdragon hardware.
  • Together, they enable assisted driving, parking automation, and now emerging V2X/5G architectures.


🤝 Implications for BrainChip​

  • Qualifier: Valeo + Qualcomm handle communications and general compute.
  • Opportunity: BrainChip could augment these systems with ultra-low-power, event-based inference using Akida + TENNs. For example, neuromorphic processing could complement high-speed V2X messaging by improving sensor-level decision-making (e.g., pedestrian detection).
  • Strategic Fit: A future tri-party integration—Qualcomm (communications & compute), Valeo (sensors/software), BrainChip (edge AI)—would yield a powerful, multi-layered mobility platform.


In short: Qualcomm’s Snapdragon Ride powers Valeo’s ADAS/V2X controllers. They work closely together—and with Qualcomm’s V2X expansion, Valeo’s recent 5G-V2X demo is a logical extension. Adding BrainChip to the mix would bring neuromorphic edge intelligence to this robust foundation in parsing real-time sensor and V2X data.
 
  • Like
  • Fire
  • Love
Reactions: 18 users

7für7

Top 20
Pass Out Mr Bean GIF
 
  • Haha
Reactions: 3 users

7für7

Top 20
Green Friday??? Something is brewing folks… or the usual pump before we get some reports? I have no idea….

Happy Dance GIF
 
  • Haha
Reactions: 2 users

Diogenese

Top 20
Bravo said:
I asked ChatGPT about V2X technology and where Qualcomm, Valeo and BrainChip might intersect in future in this area … [full ChatGPT response quoted above]
Hi Bravo,

About 3 years ago, I was prompted to look at a possible Qualcomm/Akida combination:

Hi Proga,

Sadly the ogre has retired, but the short answer is "No".

Qualcomm has put a lot of effort into analog SNNs and this article does not suggest they have switched away from their in-house technology:

More simultaneous AI tasks, while using less power​

AI (yes, artificial intelligence) has been the rising star in mobile computing over the last few years and that trend continues upwards with the 8 Gen 2.

Although every major element of the SoC already leverages AI to some degree, this generation's dedicated Hexagon Processor offers a revised toolset (including a Tensor accelerator that's doubled in size) set to deliver some significant gains.

Qualcomm promises a 4.35x speed increase when performing AI-based tasks, thanks – in part – to the fact that the 8 Gen 2 is the first of its kind to leverage something called INT4 (integer 4) precision; allowing for 60% more AI-based tasks to be performed concurrently per watt.

Through what can only be arcane wizardry, Qualcomm is able to scale 32-bit processes down to 4-bit without compromising the quality of the data sets being processed, which the company's Ziad Asghar – VP of Product Management at Qualcomm – told TechRadar, amounts to a 64x power reduction.

Upgraded always-on Sensing Hub​

Modern phones can already help us transpose the analogue world into digital; with features like semantic text recognition and object recognition, but the Sensing Hub inside the 8 Gen 2 has been purpose-built to help with such tasks; boasting two AI processing cores for up to double the AI performance compared to 8 Gen 1, along with 50% more memory than previously.

The Sensing Hub supports an 'always-sensing camera' (a rewording from last-generation's 'always-on camera'), that's great for everything from QR code scanning to face proximity detection, facial recognition and even eye-tracking – all without having to actively open your device's camera app.

Asghar confirmed to TechRadar that multiple OEM partners have been particularly interested in this aspect of the Sensing Hub, suggesting the next wave of phones powered by the 8 Gen 2 may well have the ability to scan and action QR codes and the like without even needing to be woken up or for particular apps to be opened to interact with them.

Despite its always-on nature, Qualcomm also states that the data processed by the Sensing Hub doesn't leave your device.

This patent dates from mid-2017.

US10460817B2 Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors

[Patent drawing attached]

[0009] In this regard, FIG. 1A illustrates a matrix network circuit 100 as a cross-bar network that includes a way of interconnecting memristors and CMOS circuit neurons for STDP learning.

[0010] Neural networks that employ memristor networks for providing synapses can also be used for other applications that require weighted matrix multiplication computations, such as convolution for example.

Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors are disclosed. An MLC NVM matrix circuit includes a plurality of NVM storage string circuits that each include a plurality of MLC NVM storage circuits each containing a plurality of NVM bit cell circuits each configured to store 1-bit memory state. Thus, each MLC NVM storage circuit stores a multi-bit memory state according to memory states of its respective NVM bit cell circuits. Each NVM bit cell circuit includes a transistor whose gate node is coupled to a word line among a plurality of word lines configured to receive an input vector. Activation of the gate node of a given NVM bit cell circuit in an MLC NVM storage circuit controls whether its resistance is contributed to total resistance of an MLC NVM storage circuit coupled to a respective source line.

[006] … Real synapses, however, exhibit non-linear phenomena like spike timing dependent plasticity (STDP) that modulate the weight of an individual synapse based on the activity of the pre- and post-synaptic neurons. The modulation of synaptic weights through plasticity has been shown to greatly increase the range of computations that neural networks can perform.

[0074] Further, if the word lines WL0 -WLm are coupled to a pre-neuron layer and the output nodes 308 ( 0 )- 308 (m ) are coupled to a post-neuron layer, the NVM matrix circuit 300 is also configured to train the channel resistance of the NVM bit cell circuits R00 -Rmn by supporting backwards propagation of a weight update ...
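To put toy numbers on the crossbar idea in these excerpts (my own illustration, not the patent's circuit): the stored conductances act as weights, the word-line voltages carry the input vector, and each column line sums currents into a multiply-accumulate. A minimal sketch, in arbitrary units:

Code:
import numpy as np

# Conductances (weights) of a 2x2 crossbar, arbitrary units.
G = np.array([[1.0, 0.5],
              [0.2, 0.8]])

# Word-line voltages (the input vector).
v = np.array([0.3, 1.0])

# Each column line sums v[i] * G[i, j] -- Ohm's and Kirchhoff's laws
# performing the matrix-vector product in the analog domain.
currents = v @ G
print(currents)  # [0.5  0.95]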



Following on from this, back in November 2022 I looked at the Qualcomm/Sony/Prophesee joint development, and concluded:

"On the plus side, if Sony/Prophesee had been working with Snapdragon 8 gen 2, then Prophesee's recent endorsement of Akida (2 with TENNs) speaks volumes for the comparative performance of Akida and Snapdragon 8 gen 2's AI analog SNN."

#36,924

This was a reference to Luca's eager anticipation of Akida 2 at the time:

https://brainchip.com/
[Search for "Luca"]
“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with BrainChip’s Akida neuromorphic processor holds great potential for developers of high performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs.”
Luca Verre, Co-Founder and CEO at Prophesee

Qualcomm does have analog SNN. One problem with analog SNNs using ReRAM/MRAM and the like is that they usually cannot be manufactured using ordinary CMOS manufacturing techniques, which would increase manufacturing costs. I think another problem would be that, while analog may be competitive at single-bit weights/activations, analog does not easily accommodate the higher precision (8-bit+) that Akida has now adopted in response to customer feedback.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 17 users

HopalongPetrovski

I'm Spartacus!
Genuine question. Why is our CEO the host on the podcasts? Don't they have people to do that stuff? Seems a bit unprofessional to me
I think Sean's trying to show that he is engaged and can cite it as both customer and investor engagement.
 
  • Like
Reactions: 2 users
This author needs to be contacted somehow and enlightened.

Green Friday??? Something is brewing folks… or the usual pump before we get some reports? I have no idea….

Happy Dance GIF
Don’t you just love XRP when BRN is trading this good
 
  • Like
Reactions: 1 user
  • Haha
Reactions: 6 users

7für7

Top 20
Don’t you just love XRP when BRN is trading this good
At the moment I enjoy both! XRP, honestly, I pray for an explosion… BRN is the kind of fruit that has to ripen.
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Diogenese said:
Hi Bravo, about 3 years ago I was prompted to look at a possible Qualcomm/Akida combination … [full reply quoted above]

Qualcomm is doing edge Generative AI in software.

https://www.qualcomm.com/products/mobile/snapdragon/smartphones/mobile-ai

First mobile platform designed with Generative AI in mind​

The Snapdragon® 8 Gen 3 Mobile Platform unleashes new-found creativity, pioneering a new era of AI with the first ever AI Engine to support up to 10 billion parameters of generative AI models—solely on-device.

...

Unified AI Software Portfolio​

Qualcomm® AI Stack is a comprehensive AI solution for developers, supporting a wide range of intelligent devices with broader AI software access and compatibility. For the first time, a single AI software portfolio works on every Qualcomm Technologies platform spanning the wide range of edge products – including mobile, automotive, XR, compute, IoT and more.


https://www.qualcomm.com/developer/artificial-intelligence

Explore a robust AI software portfolio with Qualcomm AI Stack.


  • Develop once, deploy everywhere: support popular operating systems across mobile, compute, XR, IoT, auto, and cloud platforms
  • Optimize for performance and power efficiency: leverage advanced quantization and compression techniques for trained neural network models
  • Multiple levels of access: freedom to choose different APIs to directly access all accelerator cores


[Image attachment: Qualcomm AI Stack]



The corollary is that this is a major opportunity for Akida GenAI. It may be amplified by RAG (Retrieval Augmented Generation), where LLM subsets are downloaded on demand.
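For what RAG might look like mechanically, a toy sketch (fake random embeddings and made-up passage names; a real system would use a trained encoder and an actual vector store):

Code:
import numpy as np

rng = np.random.default_rng(0)
passages = ["akida datasheet", "v2x overview", "tenns paper"]  # made-up
passage_vecs = rng.normal(size=(len(passages), 16))  # fake embeddings

def retrieve(query_vec, k=1):
    """Return the k passages whose embeddings best match the query."""
    sims = passage_vecs @ query_vec / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    return [passages[i] for i in np.argsort(sims)[::-1][:k]]

# Only the retrieved subset would be handed to the on-device model.
print(retrieve(rng.normal(size=16)))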
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 12 users
Diogenese said:
Qualcomm is doing edge Generative AI in software … [full post quoted above]
🤞
 
  • Like
Reactions: 6 users

The Pope

Regular
ok who was the one with small change of over $660,000 in their pocket that just wiped out the 22c line, over 3 mill purchased at 10:35:40

wasn't me
Had to spend my tax return on something. Hopefully a wise decision. 😉
 
  • Haha
  • Fire
  • Like
Reactions: 11 users

manny100

Top 20
Green Friday??? Something is brewing folks… or the usual pump before we get some reports? I have no idea….

Happy Dance GIF
The last 2 quarters saw SP rises prior to the quarterly release. In prior quarters, however, the share price was down going into quarterlies. I posted charts on the crapper showing this.
We have what is called a Bollinger Band squeeze ATM. The bands have become very narrow, and price often bounces like a tight coil when let go.
We may, for the 3rd consecutive quarter, see a move up before the quarterly.
It would be nice to see some good news soon if only to fatten up this bounce.
This likely will be traded up higher.
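For anyone unfamiliar with the indicator, a rough numeric sketch of what a "narrow band" means (made-up prices, illustrative only, not advice):

Code:
import numpy as np

def band_width(closes, n=20, k=2.0):
    """Bollinger Band width over the last n closes, as a fraction of
    the middle band: (upper - lower) / middle = 2*k*std / mean."""
    window = np.asarray(closes[-n:])
    return 2 * k * window.std() / window.mean()

closes = [0.21, 0.215, 0.22, 0.218, 0.22] * 4  # 20 made-up closes
print(f"band width: {band_width(closes):.2%}")  # small % = tight coil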
 
  • Like
  • Fire
Reactions: 15 users

Diogenese

Top 20
We've had a burst of AFRL/RTX(ILS) microDoppler news recently, and the SBIR was to run for 6 months to 1 year. It was announced in December 2024, so it could pop up any time from now to the end of the year.

Another opportunity is Chelpis. They are a Taiwanese cybersecurity company. Which country in all the world would be under more constant cyber attack than Taiwan?

That urgency may explain why the Chelpis agreement is directed to Akida 1 SoC.

There's a lot to unpack in the Chelpis announcement:

1. AKD1000 chips for immediate inclusion in M.2 cybersecurity cards for qualification and deployment

2. Collaboration to develop a PQ-AI robotic chip (PQ = post-quantum, i.e., cryptography hardened against future quantum-computer attack).

3. Akida IP to also be used for NPU capabilities (Akida's primary function)

4. Exploring "advanced Akida IP visual GenAI capabilities" (Akida GenAI).

5. Applied for Taiwanese government support for the development

6. Made-in-USA strategy


https://www.chelpis.com/post/brainchip-collaborates-with-chelpis-mirle-on-security-solution

BrainChip Collaborates with Chelpis-Mirle on Security Solution​

  • May 2


LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that Chelpis Quantum Corp. has selected its Akida AKD1000 chips to serve as the processor for built-in post-quantum cryptographic security.

Chelpis, a chip company leading the Quantum Safe Migration ecosystem in Taiwan, is developing an M.2 card using the AKD1000 that can be inserted into targeted products to support their cryptographic security solutions. The M.2 card is based on a design from BrainChip along with an agreement to purchase a significant number of AKD1000 chips for qualification and deployment. Upon completion of this phase, Chelpis is planning to increase its commitment with additional orders for the AKD1000.

This agreement is the first step in a collaboration that is exploring the development of an AI-PQC robotic chip designed to fulfill both next-generation security and AI computing requirements. This project is a joint development effort with Chelpis partner company Mirle (2464.TW) and has been formally submitted for consideration under Taiwan’s chip innovation program. The funding aims to promote a new system-on-chip (SoC) integrating RISC-V, PQC, and NPU technologies. This SoC will specifically support manufacturing markets that emphasize a Made-in-USA strategy. Mirle plans to build autonomous quadruped robotics that mimic the movement of four-legged animals for industrial/factory environments. To enable this vision, Chelpis is exploring BrainChip’s advanced Akida™ IP to incorporate advanced visual GenAI capabilities in the proposed SoC design.

"The ability to add Edge AI security capabilities to our industrial robotics project that provides the low power data processing required is paramount to successfully achieving market validation in the robotics sector," said Ming Chih, CEO of Chelpis. "We believe that BrainChip’s Akida is just the solution that we further need to bring our SoC to fruition. Their event-based processing and advanced models serve as a strong foundation for developing a platform for manufacturing customers looking to leverage advanced robotics in their facilities."

"Akida’s ability to efficiently provide cyber-security acceleration with energy efficiency can help secure autonomous robotic devices," said Sean Hehir, CEO of BrainChip. "Akida’s innovative approach to supporting LLMs and GenAI algorithms could serve as a key contributor to Chelpis as they pursue government funding to develop their SoC and advance their industrial robotic initiatives
."

It looks like Chelpis is in, boots and all.
 
  • Like
  • Love
  • Fire
Reactions: 51 users

Tothemoon24

Top 20
[Two image attachments]


Edge AI solutions have become critically important in today’s fast-paced technological landscape. Edge AI transforms how we utilize and process data by moving computations close to where data is generated. Bringing AI to the edge not only improves performance and reduces latency but also addresses the concerns of privacy and bandwidth usage. Building edge AI demos requires a balance of cutting-edge technology and engaging user experience. Often, creating a well-designed demonstration is the first step in validating an edge AI use case that can show the potential for real-world deployment.
Building demos can help us identify potential challenges early when building AI solutions at the edge. Presenting proof-of-concepts through demos enables edge AI developers to gain stakeholder and product approval, demonstrating how AI solutions effectively create real value for users, within size, weight and power resources. Edge AI demos help customers visualize the real-time interaction between sensors, software and hardware, helping in the process of designing effective AI use cases. Building a use-case demo also helps developers experiment with what is possible.

Understanding the Use Case​


The journey of building demos starts with understanding the use case – it might be detecting objects, analyzing the sensor data, interacting with a voice enabled chatbot, or asking AI agents to perform a task. The use case should be able to answer questions like – what problem are we solving? Who can benefit from this solution? Who is your target audience? What are the timelines associated with developing the demo? These answers work as the main objectives which guide the development of the demo.
Let’s consider our BrainChip Anomaly Classification C++ project demonstrating real-time classification of mechanical vibrations from an ADXL345 accelerometer into 5 motion patterns: forward-backward, normal, side-side, up-down and tap. This is valuable for industrial applications like monitoring conveyor belt movements, detecting equipment malfunctions, and many more.
[Screenshot: anomaly classification project]

Optimizing Pre-processing and Post-processing​


Optimal model performance relies heavily on the effective implementation of both pre-processing and post-processing components. The pre-processing tasks might involve normalization or image resizing or conversion of audio signals to a required format. The post-processing procedure might include decoding outputs from the model and applying threshold filters to refine those results, creating bounding boxes, or developing a chatbot interface. The design of these components must ensure accuracy and reliability.
In the BrainChip anomaly classification project, the model analyzes data from the accelerometer, which records 100 Hz three-dimensional vibration through accX, accY, and accZ channels. The data was collected using Edge Impulse’s data collection feature. Spectral analysis of the accelerometer signals was performed during the pre-processing step to extract features from the time-series data. Use this project and retrain the model, or bring your own models and optimize them for Akida IP using the Edge Impulse platform, which provides a user-friendly, no-code interface for designing ML workflows and optimizing model performance for edge devices, including BrainChip’s Akida IP.
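As a minimal sketch of this kind of spectral pre-processing (illustrative feature choices and names, not the project's actual Edge Impulse DSP block), assuming 2-second windows of 100 Hz, 3-axis samples:

Code:
import numpy as np

FS = 100  # accelerometer sample rate, Hz

def spectral_features(window, fs=FS):
    """window: (n_samples, 3) array of accX/accY/accZ readings."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis] - window[:, axis].mean()  # remove DC offset
        mag = np.abs(np.fft.rfft(x)) / len(x)         # one-sided spectrum
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        feats += [x.std(),              # overall vibration energy
                  freqs[mag.argmax()],  # dominant frequency
                  mag.max()]            # strength of that peak
    return np.array(feats)

demo = spectral_features(np.random.randn(200, 3))  # 2 s of fake data
print(demo.shape)  # (9,) -> 3 features per axis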

Balancing Performance and Resource Constraints​


Models at the edge need to be smaller and faster while maintaining accuracy. Quantization, along with knowledge distillation and pruning, allows for sustained accuracy together with improved model efficiency. BrainChip’s Akida AI Acceleration Processor IP leverages quantization and also adds sparsity processing to realize extreme levels of energy efficiency and accuracy. It supports real-time, on-device inference with extremely low power.
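As a rough illustration of what quantization does (a per-tensor symmetric quantizer in plain NumPy; this is not Akida's or Edge Impulse's actual tooling):

Code:
import numpy as np

def quantize_symmetric(w, bits=8):
    """Map float weights onto signed integers of the given width."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax      # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_symmetric(w)
err = np.abs(w - q.astype(np.float32) * s).mean()
print(f"mean absolute rounding error: {err:.5f}")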

Building Interactive Interfaces​


Modern frameworks such as Flask, FastAPI, Gradio, and Streamlit enable users to build interactive interfaces. Flask and FastAPI give developers the ability to build custom web applications with flexibility and control, while Gradio and Streamlit enable quick prototyping of machine learning applications using minimal code. Factors like interface complexity, deployment requirements, and customization needs influence framework selection. The effectiveness of the demo depends heavily on user experience, such as UI responsiveness and intuitive design. The rise of vibe coding and tools like Cursor and Replit has greatly accelerated the time to build prototypes and enhance the UX, freeing users to focus on edge deployment and optimizing performance where it truly matters.
For the Anomaly Classification demo, we implemented user interfaces for both Python and C++ versions to demonstrate real-time inference capabilities. For the Python implementation, we used Gradio to create a simple web-based interface that displays live accelerometer readings and classification results as the Raspberry Pi 5 processes sensor data in real-time. The C++ version features a PyQt-based desktop application that provides more advanced controls and visualizations for monitoring the vibration patterns. Both interfaces allow users to see the model's predictions instantly, making it easy to understand how the system responds to different types of mechanical movements.
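A minimal sketch of the Gradio side (the class names come from the project above; classify() is a stub standing in for the real Akida/Edge Impulse inference call):

Code:
import gradio as gr
import numpy as np

CLASSES = ["forward-backward", "normal", "side-side", "up-down", "tap"]

def classify(acc_x, acc_y, acc_z):
    # Stub: returns fake confidences; the real demo runs the Akida model.
    logits = np.random.rand(len(CLASSES))
    probs = logits / logits.sum()
    return {c: float(p) for c, p in zip(CLASSES, probs)}

demo = gr.Interface(
    fn=classify,
    inputs=[gr.Number(label=a) for a in ("accX", "accY", "accZ")],
    outputs=gr.Label(num_top_classes=5),
    title="Vibration pattern classifier (demo stub)",
)

if __name__ == "__main__":
    demo.launch()  # serves a simple local web UI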

Overcoming Common Challenges​


Common challenges in edge AI demo development include handling hardware constraints, performance consistency across different devices, and real-time processing capabilities. By implementing careful optimization combined with robust error handling and rigorous testing under diverse conditions, developers can overcome these challenges. By combining BrainChip's hardware acceleration with Edge Impulse's model optimization tools, the solution can show consistent performance across different deployment scenarios while maintaining the low latency required for real-time industrial monitoring.

The Future of Edge AI Demos​


As edge devices become more powerful and AI models more efficient, demos will play a crucial role in demonstrating the practical applications of these advancements. They serve as a bridge between technical innovation and real-world implementation, helping stakeholders understand and embrace the potential of edge AI technology.
If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.

Article by:​


Dhvani Kothari is a Machine Learning Solutions Architect at BrainChip. With a background in data engineering, analytics, and applied machine learning, she has held previous roles at Walmart Global Tech and Capgemini. Dhvani has a Master of Science degree in Computer Science from the University at Buffalo and a Bachelor of Engineering in Computer Technology from Yeshwantrao Chavan College of Engineering.
 
  • Like
  • Fire
  • Love
Reactions: 36 users

MDhere

Top 20
Tothemoon24 said:
Edge AI solutions have become critically important in today’s fast-paced technological landscape … [full blog post quoted above]
Thanks @Tothemoon24, interesting how BRN are actively promoting Edge Impulse.

[Quoting "The Future of Edge AI Demos" section above]

 
  • Like
  • Fire
Reactions: 20 users
MDhere said:
Thanks @Tothemoon24, interesting how BRN are actively promoting Edge Impulse … [quoted above]

A question was asked of BRN management, when Qualcomm took over, about the reason behind Edge Impulse holding off on further BRN business.
The answer was that they weren't concerned and believed it to be standard business practice while the new management made a full assessment of all the current deals on the table. If Qualcomm does want to include BrainChip's TENNs/GenAI in the future, then it's not over for BRN and Edge Impulse until the fat lady sings, as they say.
 
Last edited:
  • Like
  • Thinking
Reactions: 8 users