BRN Discussion Ongoing

genyl

Member
Genuine question. Why is our CEO the host on the podcasts? Don't they have people to do that stuff? Seems a bit unprofessional to me.
 

MDhere

Top 20
Genuine question. Why is our CEO the host on the podcasts? Don't they have people to do that stuff? Seems a bit unprofessional to me.
As they are still relatively new partners, my thinking is that Sean likes to be the person who starts these podcasts with them.
By the sound of that conversation, it is still a fair way off having any products available on the shelf, imo.
 
  • Like
Reactions: 1 users

MDhere

Top 20
ok who was the one with small change of over $660,000 in their pocket that just wiped out the 22c line, over 3 mill purchased at 10.35.40

wasn't me
 
  • Haha
  • Like
  • Thinking
Reactions: 8 users

Guzzi62

Regular
Genuine question. Why is our CEO the host on the podcasts? Don't they have people to do that stuff? Seems a bit unprofessional to me.
It's mostly CEOs they're talking to on the podcasts, so it's fitting that it's another CEO talking to them.

Isn't this part of a CEO's job? I think it is, but over at the toilet they will likely slam Sean no matter what he does or doesn't do.

The downrampers over there are negative no matter what, a sad way of existing IMO and not people I'd want to hang out with.
 
  • Like
Reactions: 11 users
From the AFRL FY25 Facilities book.

I wonder whose 3U VPX board in the Edge (neuromorphic) space they've been playing with.

I suspect Bascom Hunter's Snapcard, as we know.

Screenshot_2025-07-18-08-59-43-78_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
IMG_20250718_085456.jpg



 
  • Like
  • Fire
  • Love
Reactions: 20 users

7für7

Top 20
ok who was the one with small change of over $660,000 in their pocket that just wiped out the 22c line, over 3 mill purchased at 10.35.40

wasn't me
So basically you sold because it went back to 21.8 🙄
 

MDhere

Top 20
  • Love
  • Like
  • Haha
Reactions: 3 users

7für7

Top 20
Huh? It was a 3 mill purchase. I never said I sold.

Just kidding!


Sailor Moon Yes GIF
 

Bravo

If ARM was an arm, BRN would be its biceps💪!
I asked ChatGPT about V2X technology and where Qualcomm, Valeo and BrainChip might intersect in this area in the future.





ChatGPT's Response:

Qualcomm’s V2X platforms deliver low latency communications, which are great for time-sensitive messaging—but they’re not designed for ultra‑low latency neuromorphic inference like BrainChip’s Akida+TENNs.



🛣️ What Qualcomm Offers (V2X Context)​

These systems excel at fast message delivery but don’t include event-driven neural processing.



🤖 Qualcomm & AI: Edge, Not Neuromorphic​

  • Qualcomm’s X85/82 modems feature AI data traffic engines and network‑edge NPU cores (e.g., up to 40 TOPS) (Wikipedia; Mobile World Live)
  • Their Hexagon DSP/NPU accelerators (in Snapdragon chips) deliver on-device AI for vision, audio, and general inference tasks (Qualcomm; Wikipedia)
  • Zeroth, their 2015 brain‑inspired software, signals early interest in event‑based approaches (Wikipedia)
Bottom line: Qualcomm does offer on‑device AI and cognitive traffic optimization—but not spiking neural networks or state‑space neuromorphic architectures like TENNs.
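
To make the "state-space" term concrete: a linear state-space layer keeps a small hidden state x and updates it every timestep, x[t+1] = A·x[t] + B·u[t], y[t] = C·x[t], so the work per sample is constant no matter how long the input stream runs. The sketch below is a generic textbook recurrence under that assumption, not a claim about TENNs' exact formulation:

```python
import numpy as np

# Generic discrete linear state-space recurrence (illustrative only; not
# presented as TENNs' actual architecture): x[t+1] = A x[t] + B u[t], y[t] = C x[t].
# The appeal for edge hardware is that the state is tiny and the update is a
# fixed amount of work per timestep, regardless of sequence length.

rng = np.random.default_rng(0)
d_state, d_in, d_out = 8, 2, 1

A = 0.9 * np.eye(d_state) + 0.01 * rng.standard_normal((d_state, d_state))
B = rng.standard_normal((d_state, d_in))
C = rng.standard_normal((d_out, d_state))

x = np.zeros(d_state)
for t in range(100):                      # stream of 100 input samples
    u = rng.standard_normal(d_in)         # stand-in for a sensor sample
    x = A @ x + B @ u                     # constant-time state update
    y = C @ x                             # output at this timestep
print("final output:", y)
```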



⚖️ Comparison: Qualcomm vs. Akida+TENNs​


Feature | Qualcomm C‑V2X / Hexagon DSP | BrainChip Akida + TENNs
Latency | Low-latency packet processing; not event-driven inference | Ultra-low latency, event-based temporal processing
Power Efficiency | Optimized for general inference, moderate power | Exceptionally low power (sub-mW), ideal for SWaP-critical systems
AI Model Style | CNNs, DNNs, traffic optimization; floating/integer precision | Spiking, state-space, event-driven; integer precision
Use Cases | V2X messaging; on-device vision/audio | Sensor fusion, gesture detection, radar signal classification
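
The "event-based" rows in the table come down to compute scaling with input activity rather than with frame size. A minimal numpy sketch of that contrast (illustrative only, not either vendor's actual datapath):

```python
import numpy as np

# Toy contrast between dense frame-based inference and event-driven
# accumulation. With sparse events, work scales with activity, not frame size.

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
W = rng.standard_normal((n_in, n_out)).astype(np.float32)

frame = rng.standard_normal(n_in).astype(np.float32)   # dense sensor frame
events = np.flatnonzero(rng.random(n_in) < 0.02)        # ~2% of inputs fire

dense_out = frame @ W                                   # n_in * n_out MACs
event_out = W[events].sum(axis=0)                       # len(events) * n_out adds

print(f"dense MACs: {n_in * n_out}")
print(f"event ops : {len(events) * n_out} ({len(events) / n_in:.1%} inputs active)")
```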



✅ Could Qualcomm Compete with Akida?​

  • Qualcomm can handle low-latency messaging and general AI, but lacks neuromorphic temporal processing.
  • It’s unlikely Qualcomm’s V2X will match Akida+TENNs’ edge efficiency and event-based inference.



🔮 Implications for Collaboration and Market Strategy​


  • Qualcomm provides the low-latency messaging layer; BrainChip fills the inference gap.
  • Valeo (and others) combining Qualcomm’s V2X communications with BrainChip’s edge AI would yield better sensor fusion and situational awareness.
  • Expect future V2X systems to blend both—Qualcomm for connectivity, BrainChip for real-time inference on sensor data.


📝 Summary​

Qualcomm definitely brings strong low-latency communications and general AI via Hexagon DSPs, but does not cover the ultra-efficient event-driven inferencing required for edge neuromorphic processing like TENNs. Instead, Qualcomm and BrainChip are complementary, not overlapping—making their partnership (direct or via Valeo) a strategic win.









🧩 How Qualcomm & Valeo Partner on V2X & ADAS​

1. ADAS Domain Controllers on Snapdragon Ride


Valeo develops ADAS and parking systems (e.g., Park4U®) hosted on Qualcomm’s Snapdragon Ride platform—a scalable automotive SoC family optimized for vision processing and sensor fusion (safestocks.co.uk; iot-automotive.news; TelecomDrive).

2. Extended Collaboration (MoU) for New Mobility


In November 2023, Valeo and Qualcomm signed an MoU to co-develop connected systems for 2- and 3-wheeler vehicles in India, combining Qualcomm Snapdragon Digital Chassis solutions with Valeo’s telematics, displays, sensors, and cybersecurity (Qualcomm; Valeo; Automotive World).

3. V2X Integration Possibility

So yes—Valeo’s V2X platform likely leverages Qualcomm’s Snapdragon vehicle connectivity stack (for 4G/5G + C‑V2X), though this is often a behind-the-scenes integration.



🔧 Is It Using Snapdragon Ride?​


Yes, Valeo’s ADAS controllers—including Park4U® and domain controllers such as those in BMW’s Neue Klasse—run on Snapdragon Ride SoCs.
These SoCs support high-performance sensor fusion and are built for automotive-grade safety and functionality.

⚡ Summary​

  • Qualcomm ➝ provides connectivity (4G/5G, C‑V2X, now via Autotalks), and compute (Snapdragon Ride).
  • Valeo ➝ integrates sensors and software, deploying them on Snapdragon hardware.
  • Together, they enable assisted driving, parking automation, and now emerging V2X/5G architectures.


🤝 Implications for BrainChip​

  • Qualifier: Valeo + Qualcomm handle communications and general compute.
  • Opportunity: BrainChip could augment these systems with ultra-low-power, event-based inference using Akida + TENNs. For example, neuromorphic processing could complement high-speed V2X messaging by improving sensor-level decision-making (e.g., pedestrian detection).
  • Strategic Fit: A future tri-party integration—Qualcomm (communications & compute), Valeo (sensors/software), BrainChip (edge AI)—would yield a powerful, multi-layered mobility platform.


In short: Qualcomm’s Snapdragon Ride powers Valeo’s ADAS/V2X controllers. They work closely together—and with Qualcomm’s V2X expansion, Valeo’s recent 5G-V2X demo is a logical extension. Adding BrainChip to the mix would bring neuromorphic edge intelligence to this robust foundation in parsing real-time sensor and V2X data.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

7für7

Top 20
Pass Out Mr Bean GIF
 
  • Haha
Reactions: 3 users

7für7

Top 20
Green Friday??? Something is brewing, folks… or is it the usual pump before we get some reports? I have no idea…

Happy Dance GIF
 
  • Haha
Reactions: 1 users

Diogenese

Top 20
I asked ChatGPT about V2X technology and where Qualcomm, Valeo and BrainChip might intersect in this area in the future.
Hi Bravo,

About 3 years ago, I was prompted to look at a possible Qualcomm/Akida combination:

Hi Proga,

Sadly the ogre has retired, but the short answer is "No".

Qualcomm has put a lot of effort into analog SNNs and this article does not suggest they have switched away from their in-house technology:

More simultaneous AI tasks, while using less power​

AI (yes, artificial intelligence) has been the rising star in mobile computing over the last few years and that trend continues upwards with the 8 Gen 2.

Although every major element of the SoC already leverages AI to some degree, this generation's dedicated Hexagon Processor offers a revised toolset (including a Tensor accelerator that's doubled in size) set to deliver some significant gains.

Qualcomm promises a 4.35x speed increase when performing AI-based tasks, thanks – in part – to the fact that the 8 Gen 2 is the first of its kind to leverage something called INT4 (integer 4) precision; allowing for 60% more AI-based tasks to be performed concurrently per watt.

Through what can only be arcane wizardry, Qualcomm is able to scale 32-bit processes down to 4-bit without compromising the quality of the data sets being processed, which the company's Ziad Asghar – VP of Product Management at Qualcomm – told TechRadar, amounts to a 64x power reduction
.
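
The "32-bit down to 4-bit" step described above is quantization: choose a scale so the weights fit the 16 signed INT4 levels, then store and multiply small integers. A minimal symmetric per-tensor sketch, assuming nothing about Qualcomm's actual INT4 pipeline (the 4.35x and 64x figures are the article's, not derived here):

```python
import numpy as np

# Symmetric per-tensor INT4 quantization sketch (illustrative only). Signed
# INT4 covers [-8, 7], so one scale maps the FP32 range onto 16 levels;
# storage drops 8x vs FP32 and the MACs become cheap integer operations.

def quantize_int4(x: np.ndarray):
    scale = np.abs(x).max() / 7.0                        # map max magnitude to +7
    q = np.clip(np.round(x / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(1).standard_normal(1000).astype(np.float32)
q, s = quantize_int4(w)
print(f"mean abs error: {np.abs(w - dequantize(q, s)).mean():.4f}  scale: {s:.4f}")
```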

Upgraded always-on Sensing Hub​

Modern phones can already help us transpose the analogue world into digital; with features like semantic text recognition and object recognition, but the Sensing Hub inside the 8 Gen 2 has been purpose-built to help with such tasks; boasting two AI processing cores for up to double the AI performance compared to 8 Gen 1, along with 50% more memory than previously.

The Sensing Hub supports an 'always-sensing camera' (a rewording from last-generation's 'always-on camera'), that's great for everything from QR code scanning to face proximity detection, facial recognition and even eye-tracking – all without having to actively open your device's camera app.

Asghar confirmed to TechRadar that multiple OEM partners have been particularly interested in this aspect of the Sensing Hub, suggesting the next wave of phones powered by the 8 Gen 2 may well have the ability to scan and action QR codes and the like without even needing to be woken up or for particular apps to be opened to interact with them.

Despite its always-on nature, Qualcomm also states that the data processed by the Sensing Hub doesn't leave your device
.

This patent dates from mid-2017.

US10460817B2 Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors

1668774218800.png

[0009] In this regard, FIG. 1A illustrates a matrix network circuit 100 as a cross-bar network that includes a way of interconnecting memristors and CMOS circuit neurons for STDP learning.

[0010] Neural networks that employ memristor networks for providing synapses can also be used for other applications that require weighted matrix multiplication computations, such as convolution for example.

Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors are disclosed. An MLC NVM matrix circuit includes a plurality of NVM storage string circuits that each include a plurality of MLC NVM storage circuits each containing a plurality of NVM bit cell circuits each configured to store 1-bit memory state. Thus, each MLC NVM storage circuit stores a multi-bit memory state according to memory states of its respective NVM bit cell circuits. Each NVM bit cell circuit includes a transistor whose gate node is coupled to a word line among a plurality of word lines configured to receive an input vector. Activation of the gate node of a given NVM bit cell circuit in an MLC NVM storage circuit controls whether its resistance is contributed to total resistance of an MLC NVM storage circuit coupled to a respective source line.

[006] … Real synapses, however, exhibit non-linear phenomena like spike timing dependent plasticity (STDP) that modulate the weight of an individual synapse based on the activity of the pre- and post-synaptic neurons. The modulation of synaptic weights through plasticity has been shown to greatly increase the range of computations that neural networks can perform.

[0074] Further, if the word lines WL0 -WLm are coupled to a pre-neuron layer and the output nodes 308 ( 0 )- 308 (m ) are coupled to a post-neuron layer, the NVM matrix circuit 300 is also configured to train the channel resistance of the NVM bit cell circuits R00 -Rmn by supporting backwards propagation of a weight update ...
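
The crossbar described in the patent performs matrix-vector multiplication in the analog domain: word-line voltages form the input vector, cell conductances act as weights, and each bit line sums its currents (I = V·G per column, with currents adding by Kirchhoff's law). A deliberately idealized sketch of that principle, ignoring wire resistance, device variation and the patent's multi-level-cell encoding:

```python
import numpy as np

# Idealized crossbar matrix-vector multiply: word-line voltages V (inputs),
# conductance matrix G (weights), bit-line currents I = V @ G.

G = np.array([[1.0e-6, 2.0e-6],      # siemens; rows = word lines, cols = bit lines
              [0.5e-6, 1.5e-6],
              [2.0e-6, 0.5e-6]])
V = np.array([0.2, 0.1, 0.3])        # volts applied on the word lines

I = V @ G                            # summed bit-line currents (amps)
print(I)                             # each entry is one column's analog MAC result
```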



Following on from this, back in November 2022 I looked at the Qualcomm/Sony/Prophesee joint development, and concluded:

"On the plus side, if Sony/Prophesee had been working with Snapdragon 8 gen 2, then Prophesee's recent endorsement of Akida (2 with TENNs) speaks volumes for the comparative performance of Akida and Snapdragon 8 gen 2's AI analog SNN."

#36,924

This was a reference to Luca's eager anticipation of Akida 2 at the time:

https://brainchip.com/
[Search for "Luca"]
“At Prophesee, we are driven by the pursuit of groundbreaking innovation addressing event-based vision solutions. Combining our highly efficient neuromorphic-enabled Metavision sensing approach with BrainChip’s Akida neuromorphic processor holds great potential for developers of high performance, low-power Edge AI applications. We value our partnership with BrainChip and look forward to getting started with their 2nd generation Akida platform, supporting vision transformers and TENNs.”
Luca Verre, Co-Founder and CEO at Prophesee

Qualcomm does have analog SNN. One problem with analog SNNs using ReRAM/MRAM is that they usually cannot be manufactured using ordinary CMOS manufacturing techniques, and this would increase manufacturing costs. I think another problem would be that, while analog may be competitive at single-bit weights/activations, analog does not easily accommodate the higher precision (8-bit+) that Akida has now adopted in response to customer feedback.
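
The single-bit versus 8-bit point can be made concrete with a quick rounding-error comparison. This only illustrates the precision gap; the analog-specific problems (device noise, drift, limited conductance levels) are a separate matter:

```python
import numpy as np

# Rounding-error comparison of 1-bit vs 8-bit weights (illustration of the
# precision gap only; analog device noise and drift are not modelled).

rng = np.random.default_rng(2)
w = rng.standard_normal(10_000).astype(np.float32)

w_1bit = np.sign(w) * np.abs(w).mean()                       # binary weight + scale
s8 = np.abs(w).max() / 127.0
w_8bit = np.clip(np.round(w / s8), -128, 127) * s8           # symmetric INT8

print(f"1-bit mean abs error: {np.abs(w - w_1bit).mean():.4f}")
print(f"8-bit mean abs error: {np.abs(w - w_8bit).mean():.4f}")
```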
 
Last edited:
  • Fire
  • Love
  • Like
Reactions: 11 users

HopalongPetrovski

I'm Spartacus!
Genuine question. Why is our CEO the host on the podcasts? Don't they have people to do that stuff? Seems a bit unprofessional to me.
I think Sean's trying to show that he is engaged and can cite it as both customer and investor engagement.
 
  • Like
Reactions: 2 users

7für7

Top 20
Don’t you just love XRP when BRN is trading this good
At the moment I enjoy both! With XRP, honestly, I pray for an explosion… BRN is the kind of fruit that has to ripen.
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Hi Bravo,

About 3 years ago, I was prompted to look at a possible Qualcomm/Akida combination.

Qualcomm is doing edge Generative AI in software.

https://www.qualcomm.com/products/mobile/snapdragon/smartphones/mobile-ai

First mobile platform designed with Generative AI in mind​

The Snapdragon® 8 Gen 3 Mobile Platform unleashes new-found creativity, pioneering a new era of AI with the first ever AI Engine to support up to 10 billion parameters of generative AI models—solely on-device.

...

Unified AI Software Portfolio​

Qualcomm® AI Stack is a comprehensive AI solution for developers, supporting a wide range of intelligent devices with broader AI software access and compatibility. For the first time, a single AI software portfolio works on every Qualcomm Technologies platform spanning the wide range edge products – including mobile, automotive, XR, compute, IoT and more.


https://www.qualcomm.com/developer/artificial-intelligence

Explore a robust AI software portfolio with Qualcomm AI Stack.


  • Develop once, deploy everywhere: support popular operating systems across mobile, compute, XR, IoT, auto, and cloud platforms
  • Optimize for performance and power efficiency: leverage advanced quantization and compression techniques for trained neural network models
  • Multiple levels of access: freedom to choose different APIs to directly access all accelerator cores


1752824318535.png



The corollary is that this is a major opportunity for Akida GenAI. It may be amplified by RAG (Retrieval Augmented Generation), where LLM subsets are downloaded on demand.
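
For reference, a minimal sketch of the RAG idea: retrieve the most relevant snippets for a query and hand them to a compact on-device model as context, so the model does not need to hold all knowledge in its weights. Everything below (the toy corpus, the hashing embed() stand-in, the generate() stub) is hypothetical and not any vendor's API:

```python
import hashlib
import numpy as np

# Minimal RAG (Retrieval Augmented Generation) sketch. The embedder and the
# generate() stub are toy placeholders: the point is only that a compact
# on-device model answers from retrieved context rather than storing all
# knowledge in its weights.

corpus = [
    "Akida is BrainChip's event-based neural processor for edge inference.",
    "Snapdragon Ride is Qualcomm's automotive compute platform.",
    "C-V2X carries low-latency vehicle-to-everything messages.",
]

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Deterministic toy embedder (stand-in for a real embedding model).
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 1) -> list[str]:
    scores = doc_vecs @ embed(query)           # cosine similarity (unit vectors)
    return [corpus[i] for i in np.argsort(scores)[::-1][:k]]

def generate(query: str) -> str:
    context = " ".join(retrieve(query))
    # A real system would pass `context` + `query` to the on-device LLM here.
    return f"[context: {context}] -> answer to: {query}"

print(generate("What does Akida do on the edge?"))
```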
 
Last edited:
  • Like
  • Love
  • Thinking
Reactions: 5 users
Qualcomm is doing edge Generative AI in software.
🤞
 
  • Like
Reactions: 5 users