Pico isn't "smart" enough for anything but very simple tasks; it's absolutely no comparison to something like Snapdragon 8. Even Pico compared to wearables for defense, etc.
I asked ChatGPT your question, and I feel its synopsis is far more balanced:
"Please compare Snapdragon with Akida - who is the winner?"
Snapdragon (by Qualcomm) and Akida (by BrainChip) are very different types of processors designed for different purposes. Here’s a detailed comparison:
1. Purpose and Architecture
- Snapdragon: Snapdragon is a general-purpose system-on-chip (SoC) designed for mobile devices, laptops, and other consumer electronics. It includes CPU, GPU, AI engine, ISP (image signal processor), modem, and more.
- Akida: Akida is a neuromorphic processor designed specifically for low-power AI and edge computing applications. It is optimized for event-based processing and on-device learning.
2. Performance
- Snapdragon: Offers high performance for a wide range of applications, including gaming, photography, and AI inference. It is used in flagship smartphones, tablets, and laptops.
- Akida: Specializes in ultra-low-power AI tasks, such as pattern recognition, anomaly detection, and sensor data processing. It is not designed for general-purpose computing or high-end mobile tasks.
3. AI Capabilities
- Snapdragon: Uses a dedicated AI engine (Hexagon DSP, Tensor Accelerator, and GPU) to perform AI-based tasks like image processing, voice recognition, and natural language processing.
- Akida: Uses a neuromorphic approach, mimicking the human brain’s event-driven processing for real-time, low-power AI computations. It is highly efficient for edge AI applications.
4. Power Efficiency
- Snapdragon: Optimized for mobile battery efficiency but consumes more power than Akida when running AI tasks.
- Akida: Extremely power-efficient, as it uses event-based processing to reduce unnecessary computations. It is ideal for battery-powered and always-on AI applications.
5. Use Cases
- Snapdragon: Smartphones, tablets, laptops, AR/VR devices, automotive infotainment, etc.
- Akida: Edge AI applications such as security cameras, medical devices, industrial sensors, and IoT devices.
Who is the Winner?
- If you need a powerful general-purpose processor with strong AI capabilities, Snapdragon is the better choice.
- If you need an ultra-low-power AI processor for edge computing and event-based learning, Akida is the winner.
It depends on the application! If you're looking at mobile performance, Snapdragon wins. If you need efficient, real-time AI at the edge, Akida is the clear choice. Let me know your use case, and I can refine the comparison further!
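The "event-based processing" point above is the crux of the efficiency argument: a frame-driven accelerator runs inference on every sample, while an event-driven design only computes when the input actually changes. A toy sketch of that idea in generic Python (purely illustrative - this is not BrainChip's or Qualcomm's actual runtime, and the threshold value is an assumption):

```python
# Toy illustration of frame-driven vs. event-driven processing.
# Frame-driven: run inference on every sample unconditionally.
# Event-driven: skip samples whose change from the last processed
# sample falls below a threshold, so idle input costs nothing.

def frame_driven(samples):
    """Number of inference calls when every sample is processed."""
    return len(samples)

def event_driven(samples, threshold=0.1):
    """Number of inference calls when only significant changes are processed."""
    calls, last = 0, None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            calls += 1
            last = s
    return calls

# A mostly static sensor stream: a long idle stretch, then one brief event.
stream = [0.0] * 95 + [0.5, 0.9, 0.5, 0.1, 0.0]
print(frame_driven(stream), event_driven(stream))  # → 100 5
```

The always-on wearable scenarios discussed in this thread (wake detection, heart-irregularity monitoring) are exactly the workloads where input is static most of the time, which is why the event-driven approach pays off.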
Prior to Pico, the smallest iteration of AKIDA was one node (one to two nodes in AKIDA E), which consists of 4 NPEs. Pico wearables are likely to be smaller and much more power-efficient than a Snapdragon, which is a full SoC.
It's not as simple as that, because no chip is the best at everything. Pico isn't "smart" enough for anything but very simple tasks; it's absolutely no comparison to something like Snapdragon 8.
I'm not disagreeing, or saying that Pico doesn't have a multitude of applications, as you suggest. It's not as simple as that, because no chip is the best at everything.
For always-on types of wearables, AKIDA is a great choice because of its low power.
There are health, IoT and even defence applications for this.
It depends on the specific use and user requirements.
Pico is a great choice for specific tasks, e.g. gesture recognition, wake detection, or irregularity detection such as heart issues.
It's also a great choice if the size of the device is very important, e.g. earbuds, rings, skin patches or even discreet military sensors.
Low power makes it good in extreme conditions, e.g. military use.
Pico is best at single or minimal tasks, and there is plenty of demand for that.
Obviously, if you require wearables with display apps, where power is not an issue and complex models are needed, Snapdragon would be a better choice.
No one chip is best for every situation.
https://lnkd.in/d5rwTRF4 | Alf Kuchenbuch: "GRAIN = NOEL-V + Akida" 😄 🚀📡🛰 (www.linkedin.com)
https://lnkd.in/dh8YcDKa
"Akida can handle 1 billion parameters on device unconnected for GENAi applications on a watch battery versus Snapdragon which cannot"Just a quick summary of some of the main points contained in my last two posts, bearing in mind my questions were focussed specifically on comparisons between AKIDA and Snapdragon (Snapdragon 8 Gen 3 and Snapdragon 8 gen 4).
- AKIDA versus Snapdragon - 1B+ parameter applications entirely on-device, without any internet connectivity
- AKIDA versus Snapdragon - 1B+ parameter GenAI applications running on a watch battery
Akida's power draw is in the microwatt-to-milliwatt range, versus Snapdragon at 2-10 watts.
Akida offers on-device learning (real-time learning), versus Snapdragon which doesn't.
Akida is event-driven (active only when needed), versus Snapdragon which isn't.
Akida's battery runtime is hours to days, versus Snapdragon's minutes to an hour.
Akida never requires cooling, versus Snapdragon which sometimes requires active cooling.
Akida can handle 1 billion parameters on device, unconnected, for GenAI applications on a watch battery, versus Snapdragon which cannot.
Don't blame me, blame the messenger - ChatGPT!
Happy to see what others discover.
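As a rough sanity check on the runtime claims in that summary, here is a back-of-the-envelope calculation in Python. The battery figure is an assumption (a CR2032 coin cell, roughly 225 mAh at a nominal 3.0 V), and the two power draws are illustrative points inside the ranges quoted above (1 mW for a milliwatt-class neuromorphic part, 5 W for the middle of the 2-10 W Snapdragon range), not measured values:

```python
# Back-of-the-envelope battery runtime from average power draw.
# Assumption: a CR2032 "watch battery" holds ~225 mAh at a nominal
# 3.0 V, i.e. ~0.675 Wh. Conversion losses and cutoff voltage are ignored.
BATTERY_WH = 0.225 * 3.0  # watt-hours

def runtime_hours(avg_power_watts: float) -> float:
    """Ideal runtime in hours for a given average power draw."""
    return BATTERY_WH / avg_power_watts

# Illustrative draws: 1 mW (milliwatt-class inference) vs. 5 W
# (mid-range of the quoted 2-10 W for a phone-class SoC).
low_power = runtime_hours(1e-3)   # ~675 hours, i.e. roughly four weeks
high_power = runtime_hours(5.0)   # ~0.135 hours, i.e. about 8 minutes
print(f"1 mW draw: {low_power:.0f} h, 5 W draw: {high_power * 60:.1f} min")
```

Under those assumptions the "hours to days" versus "minutes" contrast falls straight out of the arithmetic: three orders of magnitude in power draw translate directly into three orders of magnitude in runtime.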
I read this as one sentence -
"Akida can handle 1 billion parameters on device unconnected for GENAi applications on a watch battery versus Snapdragon which cannot"
What "specifications" for AKIDA is this, assuming AKIDA 2.0 IP (node count?).
I'm pretty sure Snapdragon 8 is a defined size?
Whereas "AKIDA" is not.
If you look at the video in the link below, Tony Lewis describes how it can run on a watch battery, at approximately the 2:05 mark.
Embedded World 2025, what's cooking at BrainChip: Our CTO M. Anthony Lewis and the BrainChip team present our demos, straight from the lab: Akida 2.0 IP running on FPGA, using our State-Space-Model implementation TENNs for running an LLM (like ChatGPT) with 1B parameters offline/fully... | Alf Kuchenbuch (www.linkedin.com)
Who knows. I read this as one sentence:
Qualcomm wants Brainchip.
Maybe ChatGPT or ... knows more... Is there any plausible reason or compelling argument why Qualcomm should "not" use Akida in their chips in the near future?
Kenneth Östberg presented our poster on GRAIN - Radiation-Tolerant Edge AI at the RISC-V in Space Workshop 2025 earlier this week. The GRAIN line of space computing products includes the newly announced rad-hard GR801 neuromorphic processor based on the BrainChip Akida 1.0 technology. Read... | Frontgrade Gaisler (www.linkedin.com)
This poster, titled GRAIN - Radiation-Tolerant Edge AI, was presented by Kenneth Östberg, one of its two co-authors (the other being Daniel Andersson), during the "RISC-V in Space" workshop in Gothenburg on Thursday…
…finally reveals what NEURAVIS stands for - the name of that R&T project, which ESA awarded to the five consortium partners Airbus Toulouse, Airbus Ottobrunn, BrainChip, Frontgrade Gaisler and Neurobus in mid-2024 (see the July 2024 LinkedIn post by Airbus Space Project Manager Jérémy Lebreton below):
Neuromorphic Evaluation of Ultra-low-power Rad-hard Acceleration for Vision Inferences in Space.
The poster also provides more information about the use cases currently being explored in the NEURAVIS project, although I'm afraid I couldn't decipher everything due to the small print - maybe someone with eagle eyes, or a magic tool to blow up the photo and unblur the small print, can add to or correct what I've gathered so far:
1. Moon landing
Use Case #1: Vision-Based Navigation for Lunar Lander
Also see Alf Kuchenbuch’s recent comment on Argonaut, ESA’s lunar lander programme:
https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-452257
2. Debris detection/collect (?)
Use Case #2: Monitoring (?) Building Block for In-orbit Maintenance
3. Docking
4. Object ? (looks like “simulation”, but appears to be a longer word?)
In addition, the poster lists four “Application scenarios” for GRAIN’s Radiation-Tolerant Edge AI:
1. Remote Terminal Unit
2. Stand-alone Controller
3. Near-edge processing unit
4. Auxiliary data-processing module
Lots of small print to decipher here as well!
Thank you, BrainChip team, for sharing the news. At Airbus, we are very glad to be starting this new ESA R&T project called NEURAVIS with BrainChip, Neurobus and Frontgrade Gaisler. Our goal is to bring the disruptive technology of neuromorphic computing on an achievable path to the space... | Jérémy Lebreton (www.linkedin.com)
If I understand the above post correctly, we have yet to hear what the NEURAVIS proposal suggests for the AKD1500 ("BrainChip is proud that Airbus selected Akida for both COTS chips and IP in their proposal. ESA awarded the Airbus "NEURAVIS" proposal, including Akida in the Akida 1500 chip and on an FPGA together with Frontgrade Gaisler's NOEL-V processor.").
The underlined passage appears to refer to Frontgrade Gaisler's newly revealed GR801 SoC, which will incorporate Akida 1.0 IP - greatly benefitting the work of Airbus Toulouse computer-vision experts such as Jérémy Lebreton (project lead) and Roland Brochard, as can be inferred from the GRAIN poster's four listed use cases. So there has to be another specific proposal by Airbus for how to utilise our COTS chip, the AKD1500.
I therefore presume Airbus Ottobrunn and Neurobus might be the consortium partners currently collaborating on that second part of the NEURAVIS proposal?