BRN Discussion Ongoing

cosors

👀
Elon's new motto:

"Move fast and fall flat on your face!"
Do you mean the Tesla files? 😅
An extract:
"...
The documents that Krupski claims he was able to download from the internal project management system included thousands of customer complaints about Tesla's Autopilot, the company's most strategically important project. They did not match the claims that Musk had been making for years about the alleged quality of his software.
..."
 
Reactions: 6 users

Frangipani

Top 20

Short interview with Alf Kuchenbuch recorded five weeks ago at Edge AI Milan 2025:






[image attachment]


The video running on a loop on the screens at the BrainChip booth shows two people wearing bulky VR headsets. Have we seen this footage before? 🤔

[image attachments]


Not sure, though, whether this 👇🏻 has anything to do with us, as I don’t recognise any of our tech in it.

[image attachment]


And are we supposed to read more into the fact that the 5V Media team who made the video twice included footage of these three gentlemen from Arduino? The first time was when Alf Kuchenbuch was asked “What does being part of the foundation mean to you?”, to which he replied “It’s great to work with everybody and finding out if there are maybe some opportunities where we can actually connect and see if we can do something together.” The second time was when he was asked “What excites you most about the future of Edge AI?”, to which he replied “It’s probably this aspect of all the parts of the chain coming together and being super low-power. So in the end [cue picture above, which, however, doesn’t look like it has anything to do with Arduino either, as their boards are always blue as far as I’m aware] solutions that are possible that run on a coin cell battery for months, even for a whole year, that would be something, for more autonomy…”
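As a quick sanity check of that coin-cell figure: assuming a standard CR2032 (~225 mAh at 3 V, a typical datasheet value, not anything from the video), the average current budget for running a full year works out to roughly 26 µA:

```python
# Back-of-the-envelope check of the "coin cell for a year" claim.
# Assumes a CR2032 cell: ~225 mAh nominal capacity at 3 V.
CAPACITY_MAH = 225          # typical CR2032 datasheet figure
HOURS_PER_YEAR = 24 * 365   # 8760 h

avg_current_ua = CAPACITY_MAH / HOURS_PER_YEAR * 1000  # mA -> µA
avg_power_uw = avg_current_ua * 3.0                    # at a nominal 3 V

print(f"Average current budget: {avg_current_ua:.1f} µA")  # ~25.7 µA
print(f"Average power budget:   {avg_power_uw:.1f} µW")    # ~77 µW
```

So the whole system, sensing included, would have to average under about 80 µW, which is exactly the regime ultra-low-power edge AI vendors are pitching.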

[image attachment]



While we do have some touchpoints with Arduino…
(…) Now here comes the question for the more tech-savvy: Would it still hypothetically be possible that we are involved?

I did find two references online saying that the Portenta X8 now has support for the Akida PCIe module (one by YouTuber Chris Méndez from the Dominican Republic, who works for Arduino and has uploaded several videos featuring Akida), but I couldn’t find anything regarding the Portenta H7…
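For anyone wanting to poke at the Portenta X8 + Akida PCIe claim themselves, here is a minimal smoke-test sketch of what such a setup might look like, based on my reading of BrainChip’s MetaTF Python API (the akida package). The model file name is a placeholder, and I haven’t tried this on an X8 myself:

```python
# Hypothetical smoke test: enumerate an Akida PCIe module from the
# Linux side of a Portenta X8 and run one dummy inference.
import numpy as np
import akida

# List Akida hardware visible to the driver; an empty list means the
# PCIe module was not detected.
devices = akida.devices()
assert devices, "No Akida device found - is the PCIe driver loaded?"
print("Found Akida device:", devices[0])

# Load a pre-converted model (.fbz) and map it onto the hardware.
model = akida.Model("kws_model.fbz")  # placeholder file name
model.map(devices[0])

# Feed one dummy uint8 input matching the model's input shape.
dummy = np.zeros((1, *model.input_shape), dtype=np.uint8)
out = model.forward(dummy)
print("Output shape:", out.shape)
```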


View attachment 72965


View attachment 72966


And since I’m in the mood to max out my file uploads again, here are Padma AgRobotics’ SBIR Phase I and II applications, in case anyone would like to use these to fuel or end this speculation (note that they refer to a cilantro harvester, though, not to the weed-control AI-powered agricultural robot featured on their website and in the January 2023 video):

View attachment 72967
View attachment 72968
View attachment 72969
View attachment 72970

… the two companies have so far never announced any kind of formal collaboration.

It could of course be some general footage from Edge AI Milan 2025, like the first or last few seconds of the video, but it did strike me as odd.


[image attachment]
 
Reactions: 10 users

manny100

Top 20
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper, with some cliquey group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
Looks like they are 'happy clapping' at the White House now. Not a bad convert to have on your side.
What is that old, maybe new, saying? 'Today the White House, tomorrow the world'?
This would have got some market interest had it been announced as non-price-sensitive. I guess that is why it was not announced: fear of too much attention and a price spike the ASX does not like??
I guess the power of AKIDA just has to be ours and the White House's little secret for now.
 
Reactions: 6 users

Frangipani

Top 20
Akida gets a mention alongside Loihi 2 - representing “brain-inspired hardware” - in a chapter of a book titled Machine Vision Analysis in Industry 5.0: Fundamentals, Applications, and Challenges, to be published by Chapman & Hall in September: “Chapter 9: Comprehensive Study of Machine Vision and Image Analysis in Industry 5.0” (p. 216):


[image attachments]
(…)

[image attachments]


(…)

[image attachment]

By describing herself as an “Independent Researcher in AI” and listing her private Gmail address, co-author Royana Anand (possibly related to first author Surabhi Anand?) makes it clear she has not contributed to this publication in her capacity as an AWS (Amazon Web Services) employee.
 
Reactions: 7 users

IloveLamp

Top 20
🤔

[image attachments]
 
Reactions: 8 users

Frangipani

Top 20
Sorry if these two vids have already been posted.

Tony Lewis



Todd Vierra


Need an Edge AI account to watch the full thing ^

Here are the presentation slides of our CTO’s tutorial 👆🏻 at the May 2025 Embedded Vision Summit, titled “State-Space Models vs. Transformers for Ultra-Low-Power Edge AI”. (Far too technical for most of us, though…)
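For anyone who wants the gist in code: the core of a linear state-space model is a fixed-size recurrent state, so each new sample costs the same amount of compute no matter how long the stream gets, whereas transformer attention re-reads the whole history. A toy NumPy sketch (all dimensions and matrices invented for illustration, not taken from the slides):

```python
# Minimal sketch of why state-space models suit streaming edge inference:
# a discretized linear SSM carries a fixed-size state h, so each new
# sample costs O(state_dim) - independent of stream length.
import numpy as np

rng = np.random.default_rng(0)
state_dim, in_dim, out_dim = 16, 1, 1

A = 0.9 * np.eye(state_dim)               # state transition (toy: decaying memory)
B = rng.normal(size=(state_dim, in_dim))  # input projection
C = rng.normal(size=(out_dim, state_dim)) # readout

h = np.zeros(state_dim)
for t, u in enumerate(rng.normal(size=1000)):  # a 1000-sample input stream
    h = A @ h + B @ np.array([u])              # constant work per step
    y = C @ h                                  # output at time t

# A transformer, by contrast, attends over the whole history: per-step
# cost (and memory) grows with sequence length unless it is truncated.
print("final state:", h[:4], "last output:", y)
```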



[image attachments: presentation slides]
 
Reactions: 7 users

Frangipani

Top 20
An easy and worthwhile read: this blog post by Innatera Strategic Business Development Manager Katarzyna ‘Kasia’ Kożdoń will surely resonate with our staff at BrainChip:




Lessons from the floor: What global events reveal about the future of AI sensing

Katarzyna (Kasia) Kożdoń


Strategic Business Development Manager



August 5, 2025

This year, we spoke with industry leaders at some of the most important tech events around the globe, including CES and Sensors Converge in the USA, Mobile World Congress in both Spain and mainland China, Embedded World in Germany, and Computex in Taiwan. Across continents and markets, one thing stood out: everyone is talking about edge AI.

But while the term is everywhere, what people mean by it – and what they need from it – varies wildly. As edge AI gains popularity, its definition is expanding. We’re starting to see powerful “edge AI” solutions – even GPU-based “edge” computers – that deliver tremendous compute at an equally tremendous power draw. These systems are impressive, but they miss the original point: bringing intelligence closer to the source without ballooning energy costs or form factors.

At Innatera, we remain focused on redefining the edge – not by chasing bigger chips, but by taking intelligence to places it simply couldn’t go before. We're talking about AI that fits in the smallest corners of your world: always-on, ultra-efficient, intuitive, and ambient. Think AI that’s woven into the environment – not sitting on a desk with a fan – and adapts to application needs rapidly and with minimal user input requirements. It can anticipate which lights to turn on to suit your activity, enable headphones that instantly adapt to changing acoustic environments, and power smart bikes that provide real-time feedback on your posture and detect maintenance needs from subtle vibration patterns. It can even enable insect-scale autonomous drones that navigate and inspect infrastructure – no operator required.

The "move from cloud to edge" narrative has gained real traction. But what we heard on the floors of these events went deeper, revealing more specific needs and challenges at the device level.

The Edge AI realms

Talking with engineers and product leads across these events, a pattern emerged. The bottlenecks differ from one use case to the next, but one thing remains consistent: traditional hardware isn't enough.

The ARM Upgraders

"We're already on the edge, but we're compute-starved"
These teams are already using microcontrollers or small SoCs. They’ve done the work to build on low-power ARM systems – but they’ve hit a wall. The problem isn’t moving to the edge; it’s what you can do once you’re there.
Their ask: more intelligence, faster reaction times, and richer features – all while staying within tight power and area budgets.

The Pi Downsizers

"We have the compute, but can't scale down."
These are the Raspberry Pi and Jetson Nano users. They’re used to development boards with plenty of headroom. But now, they want to productize. And suddenly, the size, cost, and power footprint of their go-to platforms just don’t fit.

Their challenge: replicating the intelligence they’ve built on development kits – but in a much smaller, more efficient, and deployable form factor.

The Always-On Seekers

"We need something that never sleeps."
These teams are often running on chips like Qualcomm Snapdragon or other heavy-duty SoCs. But they only need those systems to wake up when it’s necessary – for a voice command, a gesture, or an unusual event.

Right now, they’re duty cycling or using inefficient wake-up triggers. What they need is true always-on intelligence that doesn’t cost them battery life or generate heat.
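To put rough numbers on that duty-cycling point (my aside, not part of Kasia's post; all figures are invented for illustration):

```python
# Toy duty-cycle budget, illustrating why periodic wake-ups still cost
# real power. All numbers are made-up examples, not vendor specs.
P_ACTIVE_MW = 300.0   # SoC awake, running the wake-up check
P_SLEEP_MW = 1.0      # SoC in deep sleep
DUTY = 0.05           # awake 5% of the time

p_duty = DUTY * P_ACTIVE_MW + (1 - DUTY) * P_SLEEP_MW
print(f"Duty-cycled average: {p_duty:.2f} mW")  # 15.95 mW

# A dedicated always-on trigger in the single-mW class replaces that
# entire active term - the big SoC only wakes on a genuine event.
P_TRIGGER_MW = 1.5
print(f"Trigger + sleeping SoC: {P_TRIGGER_MW + P_SLEEP_MW:.2f} mW")
```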

The Sensor Hub Pioneers

"We want to connect everything, but there's no good solution."
This group is one of the most exciting – and underserved. They’re building products that rely on multiple sensor modalities: audio, motion, vision, touch, and others. But there’s no great, power-efficient chip that can act as a unified sensor hub and provide smart preprocessing or inference.

Every time we explain how neuromorphic technology can support multiple sensors, the conversation shifts. Eyes light up.

Why Neuromorphic Makes Sense – Finally

For years, neuromorphic computing was viewed as researchy, exotic, niche. That perception is shifting. These four user types are showing us exactly why neuromorphic is becoming practically relevant.
Here’s how we map:

  • ARM Upgraders: We offer more compute, but within the same tight constraints.
  • Pi Downsizers: We deliver significant improvements in power and area, with an acceptable trade-off in raw throughput.
  • Always-On Seekers: We give you true always-on AI – not duty cycling hacks.
  • Sensor Hub Pioneers: We provide the missing piece they didn’t know existed – smart sensor fusion on a single, ultra-efficient chip.

This is AI that starts with sensing. That reacts only when it needs to. That lives and breathes in the real world, not in data center specs.

The Ecosystem Reality Check

We also noticed a stark gap between the market hype and the actual challenges teams are facing:

  • Many embedded teams lack deep AI experience. They know sensors and embedded systems, but they need approachable, adaptable AI.
  • Data is hard to come by. Collecting and labeling real-world data for training takes time and resources most teams don’t have.
  • Benchmarking novel hardware is non-trivial. Neuromorphic, Spiking Neural Networks (SNNs), non-von Neumann architectures – people are interested, but they don’t know how to compare them fairly (or how to integrate them into their current pipeline).

It is clear that innovation at the edge doesn’t happen in isolation. It requires collaboration across the entire stack. From sensor vendors, through solution providers, to infrastructure players – dataset providers, PCB makers, module builders, and distributors – each plays a vital role in bringing smarter edge systems to life.

At Innatera, we sit at the intersection of this network, enabling our partners to build intelligence where it previously wasn’t possible. Whether that means integrating neuromorphic compute into sensor hubs, powering always-on inference, or guiding teams through new AI workflows, we help close the gap between what’s needed and what’s technically achievable today. Together, we’re shaping the next wave of embedded intelligence – one practical step at a time.

Looking Ahead

The future of AI is ambient – systems that blend into their environment: efficient yet responsive, autonomous yet intuitive when engaged. There is so much potential when tapping into the data already around us. I’m excited to work with partners around the world to bring these kinds of solutions to market.

If you recognize your team in one of the categories above – if you’re compute-starved, power-burdened, or drowning in sensor complexity, or if you're building the missing ecosystem pieces that could help solve these challenges – we should talk and explore what we can create together.
 