BRN Discussion Ongoing

miaeffect

Oat latte lover
Nice to see the ol' share price continuing to head in the right direction!

Hot diggity dog!



Your tub time is coming
 
  • Haha
  • Fire
  • Wow
Reactions: 13 users
From the other site

Here are some benchmarks comparing PointNet++ running on Akida 2 vs NVIDIA Jetson Xavier NX and Orin NX for real-time LiDAR classification at the edge.

1. Benchmark Sources

  • Akida PointNet++: BrainChip LiDAR Point Cloud Model brochure (Oct 2025).
  • Jetson Xavier NX / Orin NX: Derived from public PyTorch PointNet++ benchmarks on ModelNet40 and KITTI, using TensorRT-optimized inference (batch = 1).
    • Xavier NX: 384 CUDA cores, 21 TOPS INT8
    • Orin NX: 1024 CUDA cores, 100 TOPS INT8
  • Power figures are measured in NVIDIA’s MaxN mode, which reflects real deployment on drones / robotics platforms.

2. PointNet++ Performance Comparison

| Metric | Akida 2 | Jetson Xavier NX | Jetson Orin NX |
|---|---|---|---|
| FPS (ModelNet40) | 183 FPS | 65–85 FPS (FP16/INT8) | 110–135 FPS (FP16/INT8) |
| Latency / Frame | 5–6 ms | 12–15 ms | 7–9 ms |
| Power | 50 mW | 10–15 W | 15–25 W |
| Energy / Inference | 0.28 mJ | ~150–200 mJ | ~200–300 mJ |
| Model Size | 1.07 MB | ~10–12 MB | ~10–12 MB |
| Accuracy (ModelNet40) | 81.6 % (4-bit QAT) | 89–90 % (FP32 baseline) | 89–90 % (FP32 baseline) |
| Deployment Mode | Always-on, ultra-low power | Embedded GPU (fan/heat dissipation required) | High-end embedded GPU |

3. What This Shows

  1. Throughput
    • Akida 2 actually outperforms Xavier and Orin NX on raw FPS, despite being neuromorphic and consuming three orders of magnitude less power.
    • Orin NX gets closer but still lags slightly at similar batch sizes.
  2. Power & Energy
    • Akida’s ~50 mW is in a completely different regime than the 10–25 W Jetson modules.
    • That’s ~500×–1000× lower energy per inference, which is decisive for always-on payloads (e.g. drones, satellites, smart infrastructure); a rough cross-check of these figures follows this list.
  3. Accuracy Trade-off
    • Akida’s quantized model (4-bit QAT) loses ~8 % absolute accuracy vs FP32, but this is expected and often acceptable for edge classification, especially if upstream sensor fusion provides redundancy.
  4. Form Factor / Thermal
    • Jetsons need active cooling and steady power supply — not trivial on space or micro-UAV platforms.
    • Akida can operate fanless, battery-powered, or solar-powered.
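
Since energy per inference is just average power multiplied by latency, the table's energy row can be roughly cross-checked from its other two rows. A minimal back-of-the-envelope sketch in Python (the bounds below are simply the quoted table ranges, not measured data, and peak power need not coincide with peak latency):

```python
# Rough cross-check: energy per inference ~ average power x latency.
# Bounds are taken straight from the quoted table ranges above.

platforms = {
    # name: (power_low_W, power_high_W, latency_low_s, latency_high_s)
    "Akida 2":          (0.050, 0.050, 5e-3,  6e-3),
    "Jetson Xavier NX": (10.0,  15.0,  12e-3, 15e-3),
    "Jetson Orin NX":   (15.0,  25.0,  7e-3,  9e-3),
}

for name, (p_lo, p_hi, t_lo, t_hi) in platforms.items():
    e_lo = p_lo * t_lo * 1e3  # joules -> millijoules
    e_hi = p_hi * t_hi * 1e3
    print(f"{name:18s} {e_lo:7.2f} - {e_hi:7.2f} mJ/inference")

# Akida 2               0.25 -    0.30 mJ/inference  (table: 0.28 mJ)
# Jetson Xavier NX    120.00 -  225.00 mJ/inference  (table: ~150-200 mJ)
# Jetson Orin NX      105.00 -  225.00 mJ/inference  (table: ~200-300 mJ)
```

The Akida and Xavier figures reproduce closely; the quoted Orin range sits at (and slightly above) the top of this estimate, so treat it as indicative rather than exact. Note also that 183 FPS is consistent with the 5–6 ms latency row (1/0.00546 s ≈ 183).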

4. Strategic Takeaway

| Use Case | Best Fit |
|---|---|
| Battery-powered always-on LiDAR classification (e.g., satellite autonomy, drones, infrastructure nodes) | Akida 2 — ultra-low power, high FPS, compact |
| Onboard AI co-processor with larger perception stack (e.g., autonomous cars, ground robots) | Jetson Orin NX — higher model flexibility, better FP32 accuracy, but power-hungry |
| Mixed sensor fusion payloads with strict SWaP (e.g., ESA cubesats, tactical drones) | Akida as front-end classifier + Jetson/FPGA for downstream fusion or planning |

Summary Table

| Feature | Akida 2 | Jetson Xavier NX | Jetson Orin NX |
|---|---|---|---|
| FPS | 183 | 65–85 | 110–135 |
| Power | 0.05 W | 10–15 W | 15–25 W |
| Energy/Inference | 0.28 mJ | 150–200 mJ | 200–300 mJ |
| Accuracy | 81.6 % | ≈90 % | ≈90 % |
| Edge Suitability | Always-on | Thermally constrained | High-end only |

Bottom line:

For PointNet++ at the edge, Akida 2 outperforms Jetson Xavier NX and Orin NX on raw FPS, power, and energy efficiency, with a modest accuracy gap from quantization that can be narrowed through improved training and model updates. This is why BrainChip is targeting Akida PointNet++ for autonomous drones, satellites, and infrastructure nodes — it's built for tiny, always-on LiDAR intelligence rather than general AI workloads.

*GPT5
 
  • Like
  • Love
  • Fire
Reactions: 40 users

Tothemoon24

Top 20
Stocks are rising & so am I 🍌



The Hidden Tech in Your Next Wearable: Why Spiking is the New Speed and Efficiency in Hardware.

Most of the AI buzz focuses on massive models in cloud data centers. But the real revolution in low-power intelligence is happening at the very edge: through Neuromorphic Computing.

The challenge for devices like advanced health trackers, industrial sensors, or autonomous drone navigation is simple: continuous processing with crippling power constraints. Running traditional Deep Learning on these devices drains the battery almost instantly.

The solution is brilliant—and brain-inspired.

Enter the Spiking Neural Network (SNN):

Unlike a conventional chip that processes data in large, power-hungry blocks, neuromorphic chips use Spiking Neural Networks (SNNs). These networks only fire (process) a signal when the input data exceeds a certain threshold—just like a biological neuron. This is known as event-based processing.
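
To make "event-based processing" concrete, here is a minimal leaky integrate-and-fire neuron in Python (the textbook SNN primitive, not BrainChip's actual circuit): the neuron integrates input, leaks charge over time, and emits a spike only when its membrane potential crosses the threshold, so stretches of zero input cost essentially nothing.

```python
# Minimal leaky integrate-and-fire (LIF) neuron - an illustrative
# sketch of event-based processing, not BrainChip's implementation.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Yield 1 when the membrane potential crosses threshold, else 0."""
    potential = 0.0
    for x in inputs:
        potential = potential * leak + x   # integrate input, leak charge
        if potential >= threshold:         # fire only above threshold...
            yield 1
            potential = 0.0                # ...then reset
        else:
            yield 0

# Sparse stimulus: nothing fires (or needs computing) while input is 0.
stimulus = [0.0, 0.0, 0.6, 0.7, 0.0, 0.0, 0.0, 1.2, 0.0]
print(list(lif_neuron(stimulus)))  # -> [0, 0, 0, 1, 0, 0, 0, 1, 0]
```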

Why this is a game-changer for niche industries:

1. Extreme Efficiency: A 100x to 1000x improvement in energy efficiency compared to standard GPUs for certain tasks. Imagine a sensor that lasts for years, not days.
2. Instant Reaction: Processing occurs in real-time, right where the data is created (Edge AI), making it perfect for safety-critical systems like autonomous vehicles or real-time medical monitoring where latency can't be tolerated.
3. Adaptive Learning: The SNN architecture inherently supports "one-shot" or continuous on-device learning, adapting to its environment without massive retraining cycles.

If you are a Product Manager, Hardware Engineer, or Investor building solutions that require high-speed, continuous processing under severe power budgets, you need to be integrating this ecosystem now.

What niche application (outside of health) is perfectly primed for a move to neuromorphic processing? Share your 'killer app' idea below! 👇

#NeuromorphicComputing #EdgeAI #HardwareInnovation #IoT #SNN #FutureofTech #DeepLearning


 
  • Like
  • Fire
  • Haha
Reactions: 23 users

7für7

Top 20
@Esq.111 is your parrot still alive?

[parrot with arms GIF]
 
  • Haha
  • Like
Reactions: 4 users
FF


AND NOW FROM THE

“VALEO BRAIN DIVISION”
https://www.valeo.com/en/brain-division/

“The emergence of software-defined vehicles (SDV) marks a significant paradigm shift that is reshaping traditional automotive frameworks. Our Valeo Brain Division is actively engaged in this transformation. By accelerating in driving automation with Advanced Driver Assistance Systems (ADAS) and reinventing the interior experience, we are leveraging opportunities offered by the new electric/electronic architectures inherent to the SDV.

Our solutions enhance the safety and enjoyment of driving by using a broad spectrum of sensor technologies (ultrasonic sensors, cameras, thermal imaging, and medium to long-range radars, as well as LiDAR), computing units acting like “brains” (ranging from Zone Controllers and Domain Controllers to Central Computing Units), as well as the software itself with advanced algorithms and artificial intelligence (AI).


To reinvent the interior experience, we develop technology in the field of human-machine interface and vehicle cabin monitoring that improves safety, and that also creates a comfortable cocoon-like interior, offering passengers a personalized, interactive and secure experience.

Lastly, connectivity is key in the SDV transformation. Valeo Brain Division’s solutions offer the high-speed, low-latency connectivity required for state-of-the-art driving automation and interior experience.”


Quite an interesting name for this division at Valeo, which is new to me
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Frangipani

Top 20
Extremely interesting 👍🏻 by Fergus Longbottom, who works as a software engineer in Space Domain Awareness / Remote Sensing for Canberra-based Electro Optic Systems (EOS):



[screenshots attached]

EOS also have subsidiaries in the US (https://www.eosdsusa.com/) and NZ (https://kiwistaroptics.com/).

And as you can see, it is not the first time Fergus Longbottom has liked BrainChip or BrainChip-related posts… Which of course doesn’t necessarily mean that he is working with Akida at EOS, but it’s definitely good to know someone in the company is very much aware of us, if not more…

#BrainChipDomainAwareness 😊


[screenshots of LinkedIn likes attached]

I noticed our company’s latest LinkedIn post about Akida PointNet++ got yet another vote of approval (this time even a ♥️) from Fergus Longbottom, who works as a Software Engineer in Space Domain Awareness/Remote Sensing for Canberra-headquartered Electro Optic Systems (EOS).

(Note that there seems to be a timeout issue with their homepage https://eos-aus.com at the moment, but in my two earlier posts tagged above you can see screenshots of that website taken in June).



[screenshot attached]
 
  • Like
  • Love
  • Fire
Reactions: 15 users

Frangipani

Top 20
I don’t recall ever having come across this Akida IP webpage before, although the Akida 1 IP and Akida 2 IP Brochures are not exactly brand new and were apparently uploaded sometime in April, shortly before the launch of the redesigned BRN website.

Using the 🔍 search function here on TSE didn’t yield any results for me either, but then again I kept on getting the message “The following words were not included in your search because they are too short, too long, or too common…”, so it’s a bit hard to tell whether or not this has been posted before.
A good overview in any case…



[screenshot of the Akida IP webpage attached]


Akida™ is a neural processor platform inspired by the brain’s cognitive capabilities and energy efficiency. It delivers low-power, real-time AI processing at the edge using neuromorphic principles for applications like vision, audio, and sensor fusion.

Self-Contained Neural Processor


  • Scalable fabric of 1–128 nodes
  • Each neural node supports 128 MACs
  • Configurable 50–130K embedded local SRAM per node
  • DMA for all memory and model operations
  • Multi-layer execution without host CPU
  • Integrate with any Microcontroller or Application Processor
  • Efficient algorithmic mesh
[AXI bus interconnect diagram]


Key Features


Akida 1


  • Supports 4-, 2-, and 1-bit weights and activations
  • Supports multiple layers simultaneously
  • Convolutional Neural Processor (CNP) and Fully-connected Neural Processor (FNP)
Download the Akida 1 IP Brochure

Akida 2


  • Supports 8-, 4-, and 1-bit weights and activations
  • Programmable Activation Functions
  • Skip Connections
  • Support for Spatio-Temporal and Temporal Event-Based Neural Networks
Download the Akida 2 IP Brochure
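
For a feel of what the low-bit weights listed above mean in practice, here is a generic symmetric-quantization toy in Python. This is plain numpy for illustration only; it is not BrainChip's MetaTF toolchain, and the 81.6 % figure earlier in the thread comes from proper quantization-aware training, not this kind of post-hoc rounding.

```python
import numpy as np

# Toy symmetric 4-bit weight quantization - illustrates the bit-widths
# above; NOT BrainChip's MetaTF / QAT toolchain.

def quantize_symmetric(weights, bits=4):
    """Map float weights onto signed integer levels in [-2**(bits-1), 2**(bits-1)-1]."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for 4-bit
    scale = np.abs(weights).max() / qmax    # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=8).astype(np.float32)

q, scale = quantize_symmetric(w, bits=4)
print("float weights:", np.round(w, 3))
print("4-bit codes:  ", q)                  # integers in [-8, 7]
print("reconstructed:", np.round(q * scale, 3))
```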


Silicon-Proven, Fully Digital Neuromorphic Implementation


Cost-effective, predictable design and implementation

Event-Based Hardware Acceleration


Minimizes compute and communication – Minimizes host CPU usage

On-Chip Learning


One-shot/few-shot learning. Minimizes sensitive data sent. Improves security and privacy
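
The one-shot learning bullet is easiest to picture as prototype-based classification: store one feature vector per new class and classify by nearest prototype. A toy sketch of that general pattern (Akida's actual on-chip learning rule is proprietary; this only illustrates the concept):

```python
import numpy as np

# Toy one-shot learning via nearest-prototype classification - the
# general pattern, not Akida's proprietary on-chip learning rule.

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}                 # label -> feature vector

    def learn(self, label, feature):
        self.prototypes[label] = feature     # a single example suffices

    def predict(self, feature):
        # Return the label whose stored prototype is closest.
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - feature))

clf = OneShotClassifier()
clf.learn("cat", np.array([1.0, 0.1]))       # one example per class
clf.learn("dog", np.array([0.1, 1.0]))
print(clf.predict(np.array([0.9, 0.2])))     # -> cat
```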

Configurable and Scalable


Extremely configurable, with post-silicon flexibility





[brochure screenshots attached]
 
  • Like
  • Love
  • Fire
Reactions: 34 users

Tothemoon24

Top 20
[images attached]
 
  • Like
  • Fire
  • Wow
Reactions: 16 users

7für7

Top 20
[quoted images]

Your post is misleading but who am I to judge…
 
  • Like
  • Haha
Reactions: 6 users

7für7

Top 20
  • Like
  • Fire
  • Love
Reactions: 4 users

Mt09

Regular
[quoted images]
Delete your post lol, why add the brainchip robot?…
 
  • Like
  • Haha
Reactions: 4 users

Jumpchooks

Regular
  • Like
  • Love
Reactions: 4 users

FJ-215

Regular
Hate that there is a 4-day lag on short positions........

Still.. will be interesting to see what the positions were two days ago...

Shorts for Thursday 2nd... 78,530,026....
 
  • Like
Reactions: 9 users

genyl

Regular
Hate that there is a 4-day lag on short positions........

Still.. will be interesting to see what the positions were two days ago...

Shorts for Thursday 2nd... 78,530,026....
None of that matters unless you are a daytrader. Most of us are longs. We need news with revenue; anything else doesn't really matter
 
  • Like
Reactions: 8 users

FJ-215

Regular
None of that matters unless you are a daytrader. Most of us are longs. We need news with revenue; anything else doesn't really matter
If you say so....

I mean, it's kinda weird that this SP jump has happened around a public holiday that isn't nationwide, and there seems to have been a delay in the reporting of short positions. Oh....and we suddenly have 103 million shares traded in a day......

Hmmm.....

My bad I guess....

Nothing to see here....

:unsure:
 
  • Like
  • Fire
Reactions: 6 users

Frangipani

Top 20
Any Akidaholics meeting the below eligibility criteria interested in NASA’s Beyond the Algorithm Challenge that was launched today?

[eligibility criteria and challenge screenshots attached]



One of the nine finalist teams of NASA’s Beyond the Algorithm: Novel Computing Architectures for Flood Analysis Challenge, consisting of four Columbia University Computer Science students, submitted a solution they named NEO-FLOOD:

“This paper introduces NEO-FLOOD (Neuromorphic Onboard Flood-mapping), a satellite architecture that eliminates this latency by deploying autonomous AI directly in orbit. NEO-FLOOD integrates space-validated neuromorphic processors (Intel Loihi 2, BrainChip Akida) consuming just 2-5W with our novel Spike2Former-Flood algorithm, a spiking neural network optimized for real-time optical SAR fusion onboard small satellites.”








Five weeks ago, I posted about a quartet of Columbia computer science students who had submitted a proposal named NEO-FLOOD for NASA’s Beyond the Algorithm Challenge (launched in March 👆🏻).

“This paper introduces NEO-FLOOD (Neuromorphic Onboard Flood-mapping), a satellite architecture that eliminates this latency by deploying autonomous AI directly in orbit. NEO-FLOOD integrates space-validated neuromorphic processors (Intel Loihi 2, BrainChip Akida) consuming just 2-5W with our novel Spike2Former-Flood algorithm, a spiking neural network optimized for real-time optical SAR fusion onboard small satellites.”

The four young researchers had been picked as one of nine finalist teams who competed against each other at a live pitch event on 17 September, where they presented their ideas to a panel of judges, NASA experts and venture capitalists.

Three winners were chosen, each team being awarded a $100,000 prize!
Among them was the NEO-FLOOD team, which also won the Audience Favourite Award!

Now it is on to Phase 3:

“In Phase Three, winners will be invited to attend a "Funding 101" webinar course. Additionally, winners will be contacted 12 months after Pitch Event completion for follow up surveys on further challenge research development and implementation.”









 
  • Like
  • Love
  • Fire
Reactions: 24 users