Below is a copy of Marketing Man’s recent post over on the crapper.
I am reposting it here because I think it is eloquent and extremely pertinent.
It is in reply to Phil the Yank, a recent blow-in and self-proclaimed heavy hitter who, having now “discovered” WBT, seems to be falling out of love with us.
To add, I agree with the core issue you’re highlighting.
The risk isn’t “does Akida work?”; it’s whether adoption happens fast enough in a market that often rewards “good enough and easy” over “better but harder”.
That said, a couple of nuances are worth adding.
- First, if neuromorphic adoption is being held back by friction, that friction applies to all neuromorphic approaches, not just BrainChip. In that context, BRN actually appears to be ahead of the pack rather than behind. They’ve invested earlier than most in developer tooling and SDKs, notably MetaTF, which bridges into existing ML workflows (a rough sketch of that workflow follows this list), and they’re now explicitly recruiting senior leadership to deepen the software and toolchain side. That doesn’t eliminate friction, but it does suggest management understands exactly where the bottleneck is.
- Second, you’re absolutely right that today BRN’s traction is showing up mainly where SWaP (Size, Weight, and Power) constraints are unavoidable: space, defence, and ultra-low-power wearables like Onsor. That’s not a weakness of the technology; it’s where the costs force a different compute approach. In those environments, “good enough” conventional compute doesn’t fit. Early adoption almost always starts where the pain is baked in.
- Third, while much of edge AI today is advancing quickly on conventional methods, power consumption and heat rejection are becoming high-priority problems even in data centres. We’re already seeing hyperscalers talk openly about energy limits, cooling constraints, and diminishing returns from brute-force GPUs. For example, several new large-scale data centres in Western Sydney, particularly projects by AirTrunk, Microsoft, and Amazon in areas like Kemps Creek and Marsden Park, are facing scrutiny over high water consumption for cooling (even where the cooling systems are heat-pump based, the cooling towers are typically evaporative). And in the US, Microsoft has signed a deal to restart the undamaged reactor at Three Mile Island (the site of the worst US nuclear accident, predating Chernobyl and Fukushima) because its new data centres need enormous power. Eventually, efficiency stops being a “nice to have” and becomes a design constraint.
- Finally, this whole debate has a strong historical echo. Valves weren’t displaced by transistors because transistors were immediately easier. Quite the opposite: valves had a huge industrial base, familiar engineering practices, and plenty of off-the-shelf solutions. Transistors were initially viewed as fragile, unproven, and niche. Yet once power, size, heat and reliability became dominant constraints, the market flipped, slowly at first, then decisively.
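For anyone who hasn’t looked at the tooling side, here is a minimal sketch of what “bridging into existing ML workflows” looks like in practice. It is reconstructed from BrainChip’s public MetaTF examples (the cnn2snn package) as I remember them, so treat the function names, arguments and file name as illustrative assumptions rather than the exact current API.

# Illustrative sketch only: based on BrainChip's public MetaTF examples;
# exact names and arguments may differ between toolkit versions.
import tensorflow as tf
from cnn2snn import quantize, convert  # MetaTF's Keras-to-Akida helpers

# 1. Start from an ordinary trained Keras model (hypothetical file name).
keras_model = tf.keras.models.load_model("edge_classifier.h5")

# 2. Quantize weights and activations to the low bit-widths Akida expects.
quantized_model = quantize(keras_model,
                           weight_quantization=4,
                           activ_quantization=4)

# 3. Convert the quantized Keras model into an Akida model that can run
#    on hardware or in the bundled software simulator.
akida_model = convert(quantized_model)
akida_model.summary()

# 4. Inference then follows the familiar "model in, predictions out" pattern:
# predictions = akida_model.predict(test_images)

The point is that an engineer who already lives in TensorFlow doesn’t have to learn a new programming model to trial Akida, which is exactly the kind of friction reduction adoption depends on.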
None of this guarantees BRN wins.
Adoption speed still matters, and the window isn’t open-ended. But it’s not that BrainChip is falling behind “better” competitors; it’s that the market hasn’t yet been forced to care enough about the problems neuromorphic solves. Where it has been forced, BRN is already seeing traction.
So I agree with your concern, but I’d say the verdict hinges less on near-term elegance or ease of adoption, and more on how fast power, heat, and autonomy constraints tighten across the broader AI landscape. That’s the real time-frame that matters here.
One final point I’d add is that much of what the market currently labels as “AI progress” is still overwhelmingly centred on LLMs and hyperscale cloud infrastructure, and that phase is rapidly maturing.
The next leg of AI growth is far more likely to be about where the compute happens, not just how large models become. As AI pushes out of data centres and into autonomous, mobile, always-on systems, edge constraints like power, latency, heat and connectivity become dominant.
That shift doesn’t automatically guarantee success for neuromorphic approaches, but it does materially expand the addressable market where neuromorphic makes sense. In that context, neuromorphic isn’t competing head-to-head with cloud AI; it’s positioned for the phase after cloud-centric AI saturates, which is where a lot of the longer-term optionality sits.
Of course, a big application is battle drones. I saw a YouTube video recently reporting that 80% of kills in the Ukraine-Russia conflict have been executed by drones of all shapes and sizes. No military budget can ignore this trend. With US isolationism and Trump forcing countries to re-arm, drone development (UAV is the correct term) is at an all-time high. And AI that is fast, local, small, reliable, low-power, and cheap is what makes UAVs deadlier.
You don't wait for the conflict to start - you fill warehouses with 'em so you're ready.
That's good news for BRN (and us investors) - but World War III - not so good.
***
I had an AI "Chat" which identified the following edge use cases....
“Autonomous, mobile, always-on” systems share three traits:
- They move through the world
- They must perceive continuously
- They cannot rely on constant connectivity or abundant power
Here are the best real-world examples, grouped by maturity and importance.
Military & Security (already here, already critical)
Drones (air, land, sea)
This is the archetype.
- ISR drones
- Loitering munitions
- Autonomous naval vessels
- Swarm drones
Why they matter:
- Battery-powered
- Highly mobile
- Must sense continuously
- Must react in milliseconds
- Often disconnected or jammed
Cloud AI is impossible here.
Brute-force edge compute is tolerated — but inefficient.
Counter-drone & perimeter defence systems
- Radar + RF + EO/IR sensing
- Always on
- Must distinguish signal from noise continuously
- False positives are costly
Event-driven, low-power intelligence is a natural fit.
Robotics & Industrial Autonomy (rapidly emerging)
Mobile robots (AMRs, humanoids, legged robots)
- Warehouses
- Hospitals
- Field inspection
- Construction sites
Constraints:
- Battery-limited
- Must operate for hours or days
- Continuous perception (vision, lidar, audio)
- Safety-critical decisions
Most current systems are over-provisioned and power-inefficient — by design.
Autonomous vehicles (non-consumer first)
Not robotaxis — those are cloud-heavy.
But:
- Mining vehicles
- Agricultural machinery
- Military ground vehicles
These:
- Operate in remote areas
- Require deterministic response
- Have limited connectivity
Medical & Human-adjacent Systems (under-appreciated)
Implantable and wearable medical devices
Examples:
- Cochlear implants
- Neural stimulators
- Smart insulin pumps
- Continuous monitoring wearables
Why they’re important:
- Always on
- Ultra-low power
- Safety critical
- Privacy sensitive
Cloud AI is a non-starter.
“Good enough” compute often isn’t good enough.
Assistive technologies
- Prosthetics
- Exoskeletons
- Mobility aids
These require:
- Real-time feedback loops
- Local learning
- Minimal latency
Consumer & Infrastructure (early but inevitable)
Smart sensors & IoT at scale
- Environmental monitoring
- Smart cities
- Infrastructure health
- Border security
Thousands or millions of sensors:
- Always on
- Mostly idle
- Occasionally critical
Processing everything centrally is economically insane.
Space systems
- Satellites
- Spacecraft autonomy
- Debris avoidance
- On-orbit servicing
Constraints:
- Power is scarce
- Communication is delayed
- Reliability is existential
The unifying insight (this is the key)
In all these systems:
- Power is a hard constraint
- Latency is non-negotiable
- Connectivity is unreliable
- Intelligence must be continuous, not episodic
That combination is exactly where:
- Cloud AI fails
- Brute-force edge AI struggles
- Event-driven approaches become interesting
Why this matters for your broader thesis
These systems are not fringe.
They are:
- Growing in number
- Growing in strategic importance
- Becoming more autonomous over time
And, importantly, that is why this category matters more than benchmarks or demos.
Do your own research.