BRN Discussion Ongoing

7für7

Top 20
IBM, unlike most, already have a product in Symphony which is immediately AKIDA-ready. Huge advantage, and it shortens the testing process.
Perhaps your views may reflect your pessimism.
It's not about believing anyone. It's about reading and making up your own mind.
The biggest lesson is not about the tech or commercial possibilities but about the lengthy engagement-to-commercial-product cycle - if you understand and accept that is the way it is, then there is no hole to fall into.
If I did not believe in BrainChip's future I would have offloaded a spike or two ago.
I asked AI what the phrase "I am tipping that pretty soon Brainchip will make it a foursome via a partnership with IBM." means.

Reply:
" It means the speaker is predicting (that’s what tipping means in Australian English) that BrainChip will soon announce a fourth major partnership, and they believe that fourth partner will be IBM."

" “I am tipping…”​

In Australian usage, tipping means predicting, backing, or expecting something to happen. It’s borrowed from “footy tipping,” where you pick the winners of matches."

Don’t try to make me look stupid now… I know exactly what that means… yet you keep presenting your viewpoint as the only true one, which basically cancels out the relativising effect of your “I’m tipping” and undercuts your claims… that’s what you don’t understand.

Some of your statements are framed like facts, and add-ons like “I’m tipping” get psychologically downgraded when people read them, because the main message carries more weight than the “comparatively unimportant” side note “I’m tipping”.

And I already said: let’s just drop the topic and everyone can form their own opinion. Bravo represented my view very well.

So I don’t know what you’re trying to achieve now by making me look dumb again.
 
  • Fire
Reactions: 1 users

Diogenese

Top 20
IBM, unlike most, already have a product in Symphony which is immediately AKIDA-ready. Huge advantage, and it shortens the testing process.
Perhaps your views may reflect your pessimism.
It's not about believing anyone. It's about reading and making up your own mind.
The biggest lesson is not about the tech or commercial possibilities but about the lengthy engagement-to-commercial-product cycle - if you understand and accept that is the way it is, then there is no hole to fall into.
If I did not believe in BrainChip's future I would have offloaded a spike or two ago.
I asked AI what the phrase "I am tipping that pretty soon Brainchip will make it a foursome via a partnership with IBM." means.

Reply:
" It means the speaker is predicting (that’s what tipping means in Australian English) that BrainChip will soon announce a fourth major partnership, and they believe that fourth partner will be IBM."

" “I am tipping…”​

In Australian usage, tipping means predicting, backing, or expecting something to happen. It’s borrowed from “footy tipping,” where you pick the winners of matches."
You and your vernacular.

I dislocated my vernacular once - very nasty operation.
 
  • Haha
  • Like
Reactions: 8 users

itsol4605

Regular
Nobody can stop him – great!
 
  • Like
  • Fire
  • Love
Reactions: 12 users

Learning

Learning to the Top 🕵‍♂️
More from Kevin

Screenshot_20260302_212903_LinkedIn.jpg

Learning
 
  • Like
  • Love
  • Fire
Reactions: 36 users

manny100

Top 20
Dio, Symphony is interesting in that it's so easy for Kevin to insert the plug-and-play M.2 card into Symphony and, with minimal additional fuss (compared to full integration), churn out what he describes as great ROI etc.
AKIDA1000 just does with Symphony what AKIDA does: event-based sifting through millions of transactions, saving loads of power at a much quicker pace than traditional AI and giving users an edge in trading.
However, extensive testing would still be needed before unleashing it on clients.
Compare that to, say, Onsor, who have to develop a wearable from scratch - an M.2 card would look a little dodgy hanging off a pair of glasses.
M.2 cards would not be suitable for the needs of our known defense clients.
 
  • Like
Reactions: 3 users

7für7

Top 20
Kevin likes this


The related article

 
  • Like
  • Love
Reactions: 9 users

Diogenese

Top 20
Dio, Symphony is interesting in that it's so easy for Kevin to insert the plug-and-play M.2 card into Symphony and, with minimal additional fuss (compared to full integration), churn out what he describes as great ROI etc.
AKIDA1000 just does with Symphony what AKIDA does: event-based sifting through millions of transactions, saving loads of power at a much quicker pace than traditional AI and giving users an edge in trading.
However, extensive testing would still be needed before unleashing it on clients.
Compare that to, say, Onsor, who have to develop a wearable from scratch - an M.2 card would look a little dodgy hanging off a pair of glasses.
M.2 cards would not be suitable for the needs of our known defense clients.
Hi manny,

The M2 card has 20 NPUs (equivalent to 4 nodes in later versions, but the geography is a bit different). I think a full-sized Akida 1 could have 1,024 NPUs, and this could be ganged together with other Akida 1s, so the M2 is simply a demonstrator. For a mass commercial market (>1 million units), it could all be accommodated on a single chip, taking advantage of Akida's multitasking abilities. How good is that? Multitasking on a massively parallel neuromorphic processor.

https://shop.brainchipinc.com/products/akida-m-2-card

Akida M.2 Card, powered by AKD1000 IC with an ARM M.4 CPU plus 20 Akida 1.0 event-based processing NPUs in a mesh. Akida M.2 AKD1000 Card accelerates CNN-based neural network models using BrainChip’s ultra energy-efficient, and purely digital, event-based processing architecture.

  • Form factor: M.2 2260, B+M Key and E Key available
  • Host interface: PCIe PHY 2-lane
  • Memory interface: LPDDR4 via DMA
  • CPU: 32bit ARM M.4
  • NPU: 20 x Akida 1 Neuron mesh
  • Peak INT8 GOPs: 1.5 TOPs
  • On-chip memory: 8MB high-speed near compute SRAM
  • Clock frequency: 300MHz
  • Operating temperature: 0 – 70°C
  • Thermal solution: no fan or heatsink required
  • Typical application power: 1W

As you say, the advantage for Kevin's project is that the M2 is fully working COTS with the ARM processor and comms interface on chip - plug/train/configure and play, and training is a breeze. I think the ease of training is something which will be of interest to olde worlde CNN AI engineers/programmers.
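As a side note, the "event-based" in those specs is doing the heavy lifting. A toy sketch in plain Python - emphatically not BrainChip's actual NPU logic, just an illustration with all names made up - of why work (and hence power) scales with the number of events rather than with the size of the input:

```python
# Toy comparison: dense multiply-accumulate vs. event-based (sparse) MAC.
# Hypothetical illustration only - not the Akida architecture.

def dense_mac(inputs, weights):
    """Dense MAC: touches every input, zero or not."""
    ops = 0
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w
        ops += 1
    return total, ops

def event_based_mac(inputs, weights):
    """Event-based MAC: only non-zero inputs ('events') cost work."""
    ops = 0
    total = 0.0
    for x, w in zip(inputs, weights):
        if x != 0:  # silent inputs are skipped entirely
            total += x * w
            ops += 1
    return total, ops

# A sparse input stream, mostly zeros - as edge sensor data tends to be.
inputs = [0, 0, 3, 0, 0, 0, 1, 0, 0, 0]
weights = [0.5] * len(inputs)

dense_total, dense_ops = dense_mac(inputs, weights)
event_total, event_ops = event_based_mac(inputs, weights)

assert dense_total == event_total  # same answer...
print(dense_ops, event_ops)        # ...10 ops vs 2 ops
```

Same result either way, but on this 80%-zeros stream the event-based route does a fifth of the work; on real sensor data the sparsity (and the saving) is usually far higher.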

What this illustrates is that there is a potential market for the clunky old Akida 1 IP, because there will be a price differential for Akida 2 IP, and a much greater price differential for Akida 3. Those later models have higher precision and lower latency, but take comparatively larger slices of wafer real estate per chip.

The other thing is that our very good friends at MF have always cited the big budgets of the tech giants for developing AI as a downer for BRN.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

manny100

Top 20
Hi manny,

The M2 card has 20 NPUs (equivalent to 4 nodes in later versions, but the geography is a bit different). I think a full-sized Akida 1 could have 1,024 NPUs, and this could be ganged together with other Akida 1s, so the M2 is simply a demonstrator. For a mass commercial market (>1 million units), it could all be accommodated on a single chip, taking advantage of Akida's multitasking abilities. How good is that? Multitasking on a massively parallel neuromorphic processor.

https://shop.brainchipinc.com/products/akida-m-2-card

Akida M.2 Card, powered by AKD1000 IC with an ARM M.4 CPU plus 20 Akida 1.0 event-based processing NPUs in a mesh. Akida M.2 AKD1000 Card accelerates CNN-based neural network models using BrainChip’s ultra energy-efficient, and purely digital, event-based processing architecture.

  • Form factor: M.2 2260, B+M Key and E Key available
  • Host interface: PCIe PHY 2-lane
  • Memory interface: LPDDR4 via DMA
  • CPU: 32bit ARM M.4
  • NPU: 20 x Akida 1 Neuron mesh
  • Peak INT8 GOPs: 1.5 TOPs
  • On-chip memory: 8MB high-speed near compute SRAM
  • Clock frequency: 300MHz
  • Operating temperature: 0 – 70°C
  • Thermal solution: no fan or heatsink required
  • Typical application power: 1W

As you say, the advantage for Kevin's project is that the M2 is fully working COTS with the ARM processor and comms interface on chip - plug/train/configure and play, and training is a breeze. I think the ease of training is something which will be of interest to olde worlde CNN AI engineers/programmers.

What this illustrates is that there is a potential market for the clunky old Akida 1 IP, because there will be a price differential for Akida 2 IP, and a much greater price differential for Akida 3. Those later models have higher precision and lower latency, but take comparatively larger slices of wafer real estate per chip.

The other thing is that our very good friends at MF have always cited the big budgets of the tech giants for developing AI as a downer for BRN.
Thanks Dio, very informative.
Depending on customers' needs, AKIDA 1000 may well be all that is necessary.
This year should be interesting, with AKD1500 samples available for clients since at least Dec '25 (see the Dec quarterly report).
Late 2026 we might see AKD2500 samples available for early access.
Looking at Kevin's last couple of posts, it appears he is testing Symphony - which was designed for finance - as a tool for security.
If this works, finance companies using both Symphony and Watson would save huge $$ and IBM's moat would be widened.
IBM could also attract those using, say, Symphony for finance but not Watson for security to switch.
Wait and see.
 
  • Like
Reactions: 8 users

Rach2512

Regular
 
  • Fire
  • Like
  • Wow
Reactions: 5 users

Rach2512

Regular
Kevin likes this too.

 
  • Like
  • Fire
Reactions: 13 users

IloveLamp

Top 20
Apologies if posted already

1000020342.jpg
 
  • Like
  • Fire
  • Love
Reactions: 31 users

Frangipani

Top 20
While @ChrisBRN discovered the logos of two new OEM Integration Partners that had appeared on the BRN website overnight (Gbrain and Trusted Semiconductor Solutions), it turns out that the logo of one of our Solutions Enablement Partners has just as silently disappeared: Deep Perception.

Have a look: The Deep Perception logo used to be right there, between the Neurobus and RTX logos:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-478155

C6EF90BA-3902-4F13-958E-B94F5E25B648.jpeg



And now it’s gone:


4EA29ECC-5C3D-4829-8640-F8A402E4CC03.jpeg


Hard to believe, I know, especially given that BrainChip posted the CES 2026 BrainChip & Deep Perception joint demo video only just over a week ago.
(https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-483083)

While it struck me as odd at the time that it was Kurt Manninen who was featured in that demo video - unlike in the BrainChip & HaiLa and Quantum Ventura demo videos, where their own staff would explain what they had come up with - I simply assumed whoever was supposed to be representing Deep Perception had fallen sick or was unable to attend CES for other reasons and couldn't be replaced at short notice. After all, they were a very small company, registered at the CEO's residential address, if I recall correctly.

The LinkedIn profile of said CEO, Chris Clason, now shows that he was Co-Founder and CEO of Deep Perception between September 2023 and January 2026 and is currently “Open to Work”.

And the other Co-Founder and CTO Alex Witt apparently left Deep Perception sometime in November and started working for AWS (Amazon Web Services) in December.

Also, the Deep Perception website (http://deepperception.ai/) is no longer active, and neither is the company’s GitHub page (“This organization was marked as archived by an administrator on Feb 16, 2026. It is no longer maintained.”)

As I couldn’t find any info about an acquisition, the most likely explanation sadly is that the small Austin-based company must have folded.


35366AE0-C672-4941-B88B-8F570F2AF53E.jpeg

02D9CE61-72AF-43AD-A295-696561BD8343.jpeg




9C21F404-7C72-433F-92D4-6AFA0FCFF825.jpeg





6B77C952-11DF-4694-BE6E-D42BB721F59A.jpeg


The GitHub repo “gst-aruco-detector” - updated on 26 November 2025 - must have been developed for the Raytheon Autonomous Vehicle Competition, whose 2025/26 round “Operation Touchdown” is sponsored by BrainChip.



Live Partner Demos


“Visual Computing Pipeline
Full compute pipeline using the AKD1000 for drones and mobile devices.”
 
  • Like
  • Sad
  • Wow
Reactions: 20 users

7für7

Top 20
Now it makes sense why the share price decreased instead of increasing… WATCH THAT!
Read till the end

IMG_0481.jpeg
 
  • Haha
  • Like
Reactions: 2 users

jtardif999

Regular
Hi Manny,

Thanks for your response.

I’ve never doubted the technology. My point was simply that we are still (as you say) in the Early Adopter phase, despite management having, for many years, made statements that in hindsight appear overly optimistic. I don’t believe that helps anyone - neither investors nor management themselves.

Phrases like “explosions of sales”, “watch us now”, and even Antonio’s comment in a previous AGM that a five-year licence would make us profitable overnight are powerful statements. At the time, I assumed they must have been grounded in something tangible or at least realistically within reach. But with the benefit of time and still no meaningful licence wins of that nature, it’s hard not to question whether those scenarios were ever genuinely close.

We’re now sitting around 13-14 cents and many of us are trying to reconcile the gap between what felt imminent and what has actually materialised. That disappointment doesn’t mean the tech is broken, but I do think it means expectations were perhaps set poorly.

You mentioned that “there is no such thing as a management team that does not talk up the company.” I understand that to a point. But enthusiasm should be balanced with some sense of realism otherwise it erodes trust. Management commentary materially influences investor behaviour and therefore carries responsibility. Communicating transparency when timelines stretch or challenges arise is, in my view, just as important as communicating optimism when things look promising.

You also referred to the lengthy commercialisation timeframe and suggested that investors in recent years should have understood this through research. But the issue here, as I see it, is that those timelines were also shaped by management's own commentary (references to looking toward financials, imminent traction, accelerating pipelines, etc.). That kind of messaging can't help but shape expectations.

I should clarify that I’m not absolving investors of our own responsibility (i.e. research/due diligence). But I don’t think management can be completely absolved either because communication matters.

My original point was simply that what once felt like we were on the cusp of explosive revenue now appears to be a much longer road which is a bitter pill for many of us to swallow.

Having said that, I fully appreciate there are reasons beyond management’s control as to why commercial traction has taken longer than expected (even if those reasons haven’t always been clearly articulated to shareholders).

Some likely factors (to name a few) include:
  • The issues Jonathan Tapson outlined in his recent presentation, some of which are outlined in this post #110,044
  • Lengthy semiconductor adoption cycles
  • The disruptive nature of the tech itself and the difficulty of shifting developers from conventional ML frameworks to neuromorphic approaches
  • Limited SNN engineering talent globally
  • Customers unwilling to share proprietary datasets needed for optimisation
  • Competition from established players integrating NPUs into MCUs (e.g. Arm’s M85 NPU)
  • Larger players bringing edge AI silicon in-house
  • Customers demanding full ecosystem maturity (toolchains, software stacks, long-term support) before committing
  • Lack of regulatory pressure around energy efficiency that might otherwise accelerate adoption
In a nutshell, I question whether it would have been prudent for management to have been a little more transparent about the hurdles earlier, instead of allowing enthusiastic messaging to set an expectation of near-term traction that hasn't materialised.
One thing to consider that might have prolonged BRN's commercial strategy - and one that they didn't see coming - is the rise and rise of GenAI and how that affected the capex priorities of the biggest companies, some of whom BRN may have been in a position to obtain IP licences from earlier. Who knows how much plans were disrupted when Chatty came to life!
 
  • Like
Reactions: 2 users

7für7

Top 20
One thing to consider that might have prolonged BRN's commercial strategy - and one that they didn't see coming - is the rise and rise of GenAI and how that affected the capex priorities of the biggest companies, some of whom BRN may have been in a position to obtain IP licences from earlier. Who knows how much plans were disrupted when Chatty came to life!

Good point, but I don't think Chatty has anything to do with that… it's a completely different use case… in fact, we could even improve the quality of Chatty in many ways! You have to see the difference between fake AI and real AI.

For example… GenAI has pulled a lot of money, people, and management attention at big companies.
But BrainChip can still win in areas where GenAI often isn't a good fit… ultra-low-power, always-on operation… runs on the device and sensor at the edge, not in the cloud… event-based processing… reacts when something happens instead of constantly crunching everything… better privacy, data stays on the device… fast and predictable response times, low and consistent latency…

And as mentioned before, you can also combine both… a chatbot for language and reasoning, plus a neuromorphic processor that filters signals efficiently and only forwards what matters.
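That "only forwards what matters" split can be sketched in a few lines - a hypothetical gate, with the threshold and the salience score invented purely for illustration; nothing here is BrainChip's (or anyone's) actual API:

```python
# Hedged sketch: an on-device gate in front of a cloud model.
# Everything below (threshold, salience function) is made up for illustration.

THRESHOLD = 0.8  # hypothetical salience threshold

def local_salience(reading):
    """Stand-in for an on-device model scoring how 'interesting' a reading is."""
    return abs(reading)

def edge_gate(stream, forward):
    """Forward only readings whose salience clears the threshold."""
    sent = 0
    for reading in stream:
        if local_salience(reading) >= THRESHOLD:
            forward(reading)  # only salient readings go upstream
            sent += 1
    return sent

uplink = []
stream = [0.1, 0.05, 0.9, 0.2, 1.3, 0.0]
sent = edge_gate(stream, uplink.append)
print(sent, uplink)  # only 2 of 6 readings ever leave the device
```

The cloud-side chatbot only ever sees the two salient readings; the other four never leave the device, which is where the bandwidth, privacy, and power arguments come from.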


Only my opinion
 
Last edited:
  • Like
Reactions: 1 users
The old chatty chatty, hey? Oh yes, of course, of course, I see where you're coming from now. Maybe it's also the unforeseen rise in banana prices in Costa Rica slowly cutting in on the uptake of neuromorphic compute. Lots of competition around for BrainChip imo dyor 🤪.
 
Last edited:
  • Haha
  • Like
Reactions: 3 users

7für7

Top 20
Come on Sean… it's time for a juicy announcement! Otherwise I cannot help you this time at the AGM.

Marlon Brando Movie GIF
 
  • Haha
  • Thinking
  • Like
Reactions: 5 users

overpup

Regular
You and your vernacular.

I dislocated my vernacular once - very nasty operation.
My wife kicked me in the vernaculars once - I sympathise :)
 
  • Haha
  • Like
Reactions: 4 users
Ericsson, Intel, 6G.

Technical Synergy and Neuromorphic AI in 6G
The evolution toward 6G requires "Networks for AI," where the infrastructure itself is programmable and capable of real-time sensing.[1] Neuromorphic computing, as detailed in academic literature, offers a path toward the "extreme low power" requirements of 6G sensing nodes.[10] BrainChip's technology enables on-chip learning independent of the cloud, which aligns with the Ericsson-Intel goal of reducing latency and improving data security at the network edge.
 

Attachments

  • Screenshot_20260303_133356_LinkedIn.jpg
    Screenshot_20260303_133356_LinkedIn.jpg
    201.2 KB · Views: 72
  • Like
  • Fire
  • Love
Reactions: 10 users

TopCat

Regular
Ericsson, Intel, 6G.

Technical Synergy and Neuromorphic AI in 6G
The evolution toward 6G requires "Networks for AI," where the infrastructure itself is programmable and capable of real-time sensing.[1] Neuromorphic computing, as detailed in academic literature, offers a path toward the "extreme low power" requirements of 6G sensing nodes.[10] BrainChip's technology enables on-chip learning independent of the cloud, which aligns with the Ericsson-Intel goal of reducing latency and improving data security at the network edge.
I couldn’t find that snippet you posted in the LinkedIn article. Can you chuck a link up?
 
  • Like
  • Fire
Reactions: 6 users