BRN Discussion Ongoing

cosors

👀
Elon's new motto:

"Move fast and fall flat on your face!"
Do you mean the Tesla files 😅
An extract:
"...
The documents that Krupski claims he was able to download from the internal project management system included thousands of customer complaints about Tesla's Autopilot, the company's most strategically important project. They did not match the claims that Musk had been making for years about the alleged quality of his software.
..."
 
  • Like
  • Love
Reactions: 6 users

Frangipani

Top 20

Short interview with Alf Kuchenbuch recorded five weeks ago at Edge AI Milan 2025:

483F4C0A-A3D2-4F54-BA08-C5F2E9078662.jpeg


The video running on a loop on the screens at the BrainChip booth shows two people wearing bulky VR headsets. Have we seen this footage before? 🤔

399BB3B5-2E8F-4FC0-8467-5E36846C7837.jpeg

8F73A829-6268-471B-9369-0DC8221C108B.jpeg


Not sure, though, whether this 👇🏻 has anything to do with us, as I don’t recognise any of our tech?

03ED44D1-A37B-4C9F-9442-D5B983032987.jpeg


And should we read more into the fact that the 5V Media team who made the video included footage of these three gentlemen from Arduino twice? The first time was when Alf Kuchenbuch was asked “What does being part of the foundation mean to you?”, to which he replied “It’s great to work with everybody and finding out if there are maybe some opportunities where we can actually connect and see if we can do something together.” The second was when he was asked “What excites you most about the future of Edge AI?”, to which he replied “It’s probably this aspect of all the parts of the chain coming together and being super low-power. So in the end [cue picture above, which, however, doesn’t look like it has anything to do with Arduino either, as their boards are always blue as far as I’m aware] solutions that are possible that run on a coin cell battery for months, even for a whole year, that would be something, for more autonomy…”

CAC8B14F-6A18-4089-986A-DDE5CE26341E.jpeg



While we do have some touchpoints with Arduino…
(…) Now here comes the question for the more tech-savvy: Would it still hypothetically be possible that we are involved?

I did find two references online that Portenta X8 now has support for the Akida PCIe module (one by YouTuber Chris Méndez from the Dominican Republic, who works for Arduino and has uploaded several videos featuring the Akida), but couldn’t find anything regarding Portenta H7…
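For the tinkerers: if you have a Portenta X8 with the Akida PCIe module to hand, checking whether the board actually sees the chip should look something like this minimal sketch (my own, assuming BrainChip’s standard akida Python package on the X8’s Linux side, not anything from Chris Méndez’s videos):

```python
# Minimal sketch: verify that an Akida PCIe module is visible from Python
# on a Linux host such as the Portenta X8. Assumes BrainChip's `akida`
# package is installed (pip install akida).
from akida import devices

found = devices()  # enumerates Akida hardware visible to the runtime
if not found:
    print("No Akida hardware detected - is the PCIe module enumerated? (try lspci)")
for dev in found:
    # str(dev) includes the hardware version, e.g. an AKD1000 PCIe board
    print("Akida device available:", dev)
```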


View attachment 72965


View attachment 72966


And since I’m in the mood to max out my file uploads again, here are Padma AgRobotics’ SBIR Phase I and II applications, in case anyone would like to use these to fuel or end this speculation (note that they refer to a cilantro harvester, though, not to the weed-control AI-powered agricultural robot featured on their website and in the January 2023 video):

View attachment 72967
View attachment 72968
View attachment 72969
View attachment 72970

… the two companies have so far never announced any formal kind of collaboration.

It could of course be some general footage from Edge AI Milan 2025, like the first or last few seconds of the video, but it did strike me as odd.


A9995426-59C6-4762-A0EB-A2E06BE0AEE8.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 17 users

manny100

Top 20
Purely coincidental. Yeah nah. lol
So much BS going on here and on the crapper, with some cliquey group shite thrown in (happy clappers and negative Nancies). All amusing. Hope all had a good day.
Looks like they are 'happy clapping' at the White House now. Not a bad convert to have on your side.
What is that old, maybe new, saying? 'Today the White House, tomorrow the world'?
This would have got some market interest had it been announced as non-price-sensitive. I guess that is why it was not announced - fear of too much attention and a price spike the ASX does not like??
I guess the power of AKIDA just has to be ours and the White House's little secret for now.
 
Last edited:
  • Like
  • Thinking
Reactions: 10 users

Frangipani

Top 20
Akida gets a mention alongside Loihi 2 - representing “brain-inspired hardware” - in a chapter of a book titled Machine Vision Analysis in Industry 5.0: Fundamentals, Applications, and Challenges, to be published by Chapman & Hall in September: “Chapter 9: Comprehensive Study of Machine Vision and Image Analysis in Industry 5.0” (p. 216):


A3994BB6-348E-4B88-8C4D-F0FF1DFE7BB3.jpeg
F708914E-E38E-4F5A-A327-6A09D289522E.jpeg
(…)

38E3B4CF-6658-42BB-937E-45A48EF154CF.jpeg

35401078-3C23-4477-82B9-1B8BD1B9560B.jpeg


(…)

97D4FE48-B684-4627-889F-82137A65FC65.jpeg

By describing herself as an “Independent Researcher in AI” and listing her private Gmail address, co-author Royana Anand (possibly related to first author Surabhi Anand?) makes it clear she has not contributed to this publication in her capacity as an AWS (Amazon Web Services) employee.
 
  • Like
  • Love
Reactions: 15 users

IloveLamp

Top 20
🤔

1000009901.jpg
1000009903.jpg
 
  • Like
  • Thinking
  • Fire
Reactions: 14 users

Frangipani

Top 20
Sorry if these two vids have already been posted.

Tony Lewis



Todd Vierra


Need an Edge AI account to watch the full thing ^

Here are the presentation slides of our CTO’s tutorial 👆🏻 at the May 2025 Embedded Vision Summit, titled “State-Space Models vs. Transformers for Ultra-Low-Power Edge AI”. (Far too technical for most of us, though…)
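For those who don’t want to wade through the slides, the core trade-off the title alludes to can be shown in a few lines: a discretized linear state-space model carries a fixed-size state from step to step, so per-step compute and memory don’t grow with sequence length, whereas a transformer’s self-attention has to revisit an ever-growing token cache at each step. A toy sketch (mine, not from the tutorial):

```python
# Toy discretized linear SSM: per-step cost is O(d_state^2), independent of
# how many steps have already been processed - the property that makes SSMs
# attractive for ultra-low-power edge inference.
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in = 16, 4
A = 0.9 * np.eye(d_state)             # toy stable state transition
B = rng.normal(size=(d_state, d_in))  # input projection
C = rng.normal(size=(1, d_state))     # readout

x = np.zeros(d_state)                 # fixed-size hidden state
for t, u in enumerate(rng.normal(size=(1000, d_in))):
    x = A @ x + B @ u                 # constant work per step, independent of t
    y = C @ x                         # per-step output
print("memory footprint is one", x.shape, "state vector, however long the sequence")
```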



33E1D6BA-8B16-4461-A352-3915C2A5A457.jpeg
3D1F7953-953A-4D5E-8ABA-88370B10B6DA.jpeg
C3A4B9BB-A18D-4C3D-8D07-B0B06390E4D2.jpeg
17107DC2-6E6B-436C-AA32-B423445E8E4F.jpeg
72074516-FF07-437A-9B6A-0A7221339EC0.jpeg
7EA9A0E0-78BD-4934-A2CF-29762BA0FEDB.jpeg
259F9A73-F80D-4307-9188-9EE3F716D798.jpeg
FCE3BC0E-E20A-4EE3-80E5-E42A926B8DD5.jpeg

CF701F94-551C-46E6-8F88-10ACCCF10688.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 19 users

Frangipani

Top 20
Easy-and-worthwhile-to-read blog post by Innatera Strategic Business Development Manager Katarzyna ‘Kasia’ Kożdoń that will surely resonate with our staff at BrainChip:




Lessons from the floor: What global events reveal about the future of AI sensing

Katarzyna (Kasia) Kożdoń


Strategic Business Development Manager



August 5, 2025

This year, we spoke with industry leaders at some of the most important tech events around the globe, including CES and Sensors Converge in the USA, Mobile World Congress in both Spain and mainland China, Embedded World in Germany, and Computex in Taiwan. Across continents and markets, one thing stood out: everyone is talking about edge AI.

But while the term is everywhere, what people mean by it – and what they need from it – varies wildly. As edge AI gains popularity, its definition is expanding. We’re starting to see powerful “edge AI” solutions – even GPU-based “edge” computers – that deliver tremendous compute at an equally tremendous power draw. These systems are impressive, but they miss the original point: bringing intelligence closer to the source without ballooning energy costs or form factors.

At Innatera, we remain focused on redefining the edge – not by chasing bigger chips, but by taking intelligence to places it simply couldn’t go before. We're talking about AI that fits in the smallest corners of your world: always-on, ultra-efficient, intuitive, and ambient. Think AI that’s woven into the environment – not sitting on a desk with a fan – and adapts to application needs rapidly and with minimal user input requirements. It can anticipate which lights to turn on to suit your activity, enable headphones that instantly adapt to changing acoustic environments, and power smart bikes that provide real-time feedback on your posture and detect maintenance needs from subtle vibration patterns. It can even enable insect-scale autonomous drones that navigate and inspect infrastructure – no operator required.

The "move from cloud to edge" narrative has gained real traction. But what we heard on the floors of these events went deeper, revealing more specific needs and challenges at the device level.

The Edge AI realms

Talking with engineers and product leads across events, a pattern emerged – across different use cases, a number of bottlenecks arise – but one aspect remains consistent: traditional hardware isn't enough.

The ARM Upgraders

"We're already on the edge, but we're compute-starved"
These teams are already using microcontrollers or small SoCs. They’ve done the work to build on low-power ARM systems – but they’ve hit a wall. The problem isn’t moving to the edge; it’s what you can do once you’re there.
Their ask: more intelligence, faster reaction times, and richer features – all while staying within tight power and area budgets.

The Pi Downsizers

"We have the compute, but can't scale down."
These are the Raspberry Pi and Jetson Nano users. They’re used to development boards with plenty of headroom. But now, they want to productize. And suddenly, the size, cost, and power footprint of their go-to platforms just don’t fit.

Their challenge: replicating the intelligence they’ve built on development kits – but in a much smaller, more efficient, and deployable form factor.

The Always-On Seekers

"We need something that never sleeps."
These teams are often running on chips like Qualcomm Snapdragon or other heavy-duty SoCs. But they only need those systems to wake up when it’s necessary – for a voice command, a gesture, or an unusual event.

Right now, they’re duty cycling or using inefficient wake-up triggers. What they need is true always-on intelligence that doesn’t cost them battery life or generate heat.
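To make that distinction concrete, here’s a toy sketch (mine, not Innatera’s) contrasting the two strategies: a timer that wakes the host SoC on a fixed schedule versus a tiny always-on stage that wakes it only on salient input:

```python
# Toy contrast: timer-based duty cycling pays for every scheduled wake-up
# (and can still miss events between wake-ups), while an event-driven
# front-end wakes the host only when the signal itself is salient.
import random

random.seed(42)

def sensor_sample():
    """One reading from an ambient sensor; mostly background noise."""
    return random.gauss(0.0, 1.0)

TICKS = 10_000

# Strategy A - duty cycling: wake the host every 100 ticks regardless.
duty_wakeups = sum(1 for tick in range(TICKS) if tick % 100 == 0)

# Strategy B - event-driven: wake the host only on a 3-sigma outlier.
event_wakeups = sum(1 for _ in range(TICKS) if abs(sensor_sample()) > 3.0)

print(f"duty-cycled wakeups: {duty_wakeups}")    # 100 scheduled wake-ups
print(f"event-driven wakeups: {event_wakeups}")  # only the handful of real events
```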

The Sensor Hub Pioneers

"We want to connect everything, but there's no good solution."
This group is one of the most exciting – and underserved. They’re building products that rely on multiple sensor modalities: audio, motion, vision, touch, and others. But there’s no great, power-efficient chip that can act as a unified sensor hub and provide smart preprocessing or inference.

Every time we explain how neuromorphic technology can support multiple sensors, the conversation shifts. Eyes light up.

Why Neuromorphic Makes Sense – Finally

For years, neuromorphic computing was viewed as researchy, exotic, niche. That perception is shifting. These four user types are showing us exactly why neuromorphic is becoming practically relevant.
Here’s how we map:

  • ARM Upgraders: We offer more compute, but within the same tight constraints.
  • Pi Downsizers: We deliver significant improvements in power and area, with an acceptable trade-off in raw throughput.
  • Always-On Seekers: We give you true always-on AI – not duty cycling hacks.
  • Sensor Hub Pioneers: We provide the missing piece they didn’t know existed – smart sensor fusion on a single, ultra-efficient chip.

This is AI that starts with sensing. That reacts only when it needs to. That lives and breathes in the real world, not in data center specs.

The Ecosystem Reality Check

We also noticed a stark gap between the market hype and the actual challenges teams are facing:

  • Many embedded teams lack deep AI experience. They know sensors and embedded systems, but they need approachable, adaptable AI.
  • Data is hard to come by. Collecting and labeling real-world data for training takes time and resources most teams don’t have.
  • Benchmarking novel hardware is non-trivial. Neuromorphic, Spiking Neural Networks (SNNs), non-von Neumann architectures – people are interested, but they don’t know how to compare them fairly (or how to integrate them into their current pipeline).

It is clear that innovation at the edge doesn’t happen in isolation. It requires collaboration across the entire stack. From sensor vendors, through solution providers, to infrastructure players – dataset providers, PCB makers, module builders, and distributors – each plays a vital role in bringing smarter edge systems to life.

At Innatera, we sit at the intersection of this network, enabling our partners to build intelligence where it previously wasn’t possible. Whether that means integrating neuromorphic compute into sensor hubs, powering always-on inference, or guiding teams through new AI workflows, we help close the gap between what’s needed and what’s technically achievable today. Together, we’re shaping the next wave of embedded intelligence – one practical step at a time.

Looking Ahead

The future of AI is ambient – systems that blend into their environment: efficient yet responsive, autonomous yet intuitive when engaged. There is so much potential when tapping into the data already around us. I’m excited to work with partners around the world to bring these kinds of solutions to market.

If you recognize your team in one of the categories above – if you’re compute-starved, power-burdened, or drowning in sensor complexity, or if you're building the missing ecosystem pieces that could help solve these challenges – we should talk and explore what we can create together.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Frangipani

Top 20
Valentina Tiporlini, who used to be the Manager of the BrainChip Research Institute in Perth, gave a presentation on edge AI and event-driven AI at the Australian Computer Society WA Forum last month.

Did she mention Akida?
You bet!


98E2AB03-3C62-48B5-8942-F88A9DF7A67B.jpeg
C1ACC3F1-4EDF-42BD-B255-D1A492AEE56D.jpeg
 
  • Wow
  • Fire
  • Love
Reactions: 7 users

Frangipani

Top 20
The EDGX team will be presenting their NVIDIA Jetson Orin-based DPU designed for LEO (Low Earth Orbit) missions, which they have now officially named Sterna, at SmallSat (Small Satellite Conference) in Salt Lake City from 10-13 August. The EDGX DPU aka Sterna is being offered with an optional neuromorphic BrainChip Akida add-on (cf. https://satsearch.co/products/edgx-dpu), although strangely this option appears not to be mentioned anywhere on the revamped EDGX website so far.


View attachment 89427


I took the following screenshots exactly a week ago, on 2 August. When I just revisited https://www.edgx.space/, the website shows up largely as black, so maybe they are currently working on it in preparation for SmallSat, which starts on Sunday.

Today’s countdown to the launch of what I presume to be the first of the three planned missions should accordingly read 193 days (instead of 200 days as per screenshot), which would give us 18 February 2026 as the scheduled launch date for Sterna first acquiring flight heritage…
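For anyone wanting to double-check that countdown arithmetic:

```python
# A 200-day countdown captured on 2 August 2025 lands on 18 February 2026,
# so one week later (9 August) the same counter should read 193 days.
from datetime import date, timedelta

screenshot_day = date(2025, 8, 2)            # when the 200-day countdown was captured
launch_day = screenshot_day + timedelta(days=200)
print(launch_day)                            # 2026-02-18
print((launch_day - date(2025, 8, 9)).days)  # 193 days left one week later
```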

View attachment 89429
View attachment 89433
View attachment 89431
View attachment 89432


View attachment 89434
View attachment 89435 View attachment 89436 View attachment 89437
View attachment 89438

Speaking of EDGX - they just closed a €2.3 million funding round:


View attachment 89440


View attachment 89441



(The press release was published in Dutch; an English translation via Google Translate follows.)

EDGX closes €2.3 million funding round to boost AI computing onboard satellites

August 8, 2025

The funding will accelerate EDGX's mission to deliver the world's fastest AI-powered edge computers for satellite constellations, enabling fast and efficient data processing from space.

Belgian space startup EDGX has closed a €2.3 million seed funding round to accelerate the commercialization of EDGX Sterna, its next-generation edge AI computer for satellites.

The startup has also signed a €1.1 million deal with a satellite operator and can already announce plans for an in-space demonstration during a SpaceX Falcon 9 mission in February 2026.

The funding round was co-led by the imec.istart future fund, with participation from the Flanders Future Tech Fund, which is managed by the Flemish investment company PMV. EDGX has also attracted further funding from existing investor imec.istart, Europe’s top-ranked university-based accelerator.

The EDGX Sterna computer is a high-performance data processing unit (DPU) powered by NVIDIA technology. It provides the computing power and AI acceleration needed to quickly and efficiently process data onboard satellites. This eliminates the traditional bottleneck of sending massive amounts of raw data back to Earth for processing, enabling satellite operators to deliver faster, more efficient, and data-driven services.

EDGX's Sterna computer is powered by its SpaceFeather software stack, built for autonomous, resilient, and upgradeable satellite operations. This includes a space-hardened Linux operating system with full traceability, a dedicated monitoring system for autonomous health monitoring, radiation fault detection and recovery, and an in-orbit application framework for implementing new capabilities after launch. Together, SpaceFeather and Sterna enable smarter, more flexible missions with less downtime, lower costs, and faster data delivery to end users.

EDGX's engineering and design combine commercial AI acceleration with spacecraft-grade reliability, enabling satellite constellation operators to access previously unattainable levels of onboard computing power. Customers are using EDGX's Sterna DPU and its associated SpaceFeather software for a variety of applications.

For spectrum monitoring, Sterna enables powerful space-based processing to locate and classify radio signals and generate dynamic spectrum maps. This is a critical capability that helps satellite operators understand in real time how frequencies are being used, avoid interference, and allocate bandwidth more efficiently to deliver optimal communication services.

In Earth observation, Sterna supports intelligent surveillance and reconnaissance (ISR) by analyzing high-resolution imagery directly on board. This means satellites can immediately detect and tag objects such as ships, vehicles, or infrastructure, and respond to time-sensitive events like floods, wildfires, or earthquakes. The result: faster decisions, more efficient missions, and life-saving information that transforms passive observation into real-time situational awareness.

Sterna also supports 5G and 6G from space. By moving base station processing capabilities into space, satellites can participate directly in next-generation mobile networks. This paves the way for seamless direct connectivity to devices, delivering high-speed internet to remote, underserved or disaster-affected areas where traditional infrastructure falls short.

View attachment 89443


In addition to the orbital demonstration in February, two further flights are already planned for 2026, and EDGX is quickly positioning itself as a leader in AI space infrastructure.

Commenting on the news, Nick Destrycker, founder and CEO, said: “Customers aren’t waiting for flight validation; they’re signing now. With a full launch manifest, secured commercial contracts, and our first mission aboard a SpaceX Falcon 9 rocket, this funding allows us to scale to meet the demand for real-time information from space.”

Wouter Benoot, founder and CTO of EDGX, said: “Going from zero to 100, all-in, at a space startup is ambitious. Bringing that mindset to Sterna’s development meant new challenges, continuous learning, but also real progress. What makes it work is the team. Each engineer brings new ideas, a drive to understand space, and a passion to make it a reality. We’re building a subsystem that will power the next generation of satellites.”

Roald Borré, Head of Venture Capital and Member of the Executive Committee at PMV, said: “This funding round enables us to support EDGX’s strong team in bringing promising Flemish technology to market and further developing it. EDGX is one of the few European players offering a product that is powerful, accessible, and robust, giving it unique advantages in the rapidly growing space edge computing market, not least in terms of strengthening Europe’s technological position in a strategic sector like space.”

Kris Vandenberk, Managing Partner at imec.istart Future Fund, said: “EDGX represents exactly the kind of transformative infrastructure we are looking for. The space industry faces a fundamental bottleneck: we generate enormous amounts of data in orbit, yet still use outdated ‘store and forward’ architectures. EDGX solves this by bringing AI-powered edge computing directly into space, enabling satellites to analyze and act on data in real time, instead of waiting for processing on the ground.”

View attachment 89442

About EDGX

EDGX is a Belgian space company with a mission to deliver the world's fastest edge computers for satellites. Its flagship product, EDGX Sterna, is a powerful AI data processing unit based on the NVIDIA Jetson Orin, enabling real-time onboard data processing in orbit. EDGX serves the satellite communications/telecommunications, Earth observation, and in-orbit servicing markets across commercial, government, and defense segments, with a particular focus on enabling AI and high-performance data processing at scale in satellite constellations. Founded in 2023, EDGX is headquartered in Ghent, Belgium, and will launch its first in-orbit demonstration in February 2026 aboard a SpaceX Falcon 9.

www.edgx.space

press@edgx.space


E79F51C0-D497-48ED-B977-3B152EEAEF33.jpeg
 
  • Fire
  • Like
  • Love
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Hi All,

I’ve been looking into two related, but officially separate, Lockheed Martin initiatives to assess whether Akida could be involved in both, and I believe the case is plausible.



1) April 14, 2025: Lockheed Martin Skunk Works & Arquimea demonstrated EO/IR anomaly detection for ISR platforms using episodic memory neural networks. This system augments EO/IR sensors with AI to detect unusual patterns in real time, reducing the number of scans required.


2) July 16, 2025: Lockheed Martin announced an AI-powered SAR ATR system for autonomous maritime target recognition, developed by the Lockheed AI Center, Skunk Works, and RMS. While Arquimea is not mentioned in this second program, the release notes that the SAR ATR capability “was deployed on low Size, Weight, and Power (SWaP) hardware in the field, rapid edge processing, without the need for large cloud compute or ground stations.” That description strongly aligns with Akida’s design philosophy.



From a technical perspective, both projects could benefit from neuromorphic computing in similar ways - despite using different sensing modalities (EO/IR vs. SAR). If Lockheed wanted to minimise SWaP, reduce latency, and enable adaptive, real-time processing, presumably Akida could be inserted at the sensor front-end in both systems. The main difference would be the feature encoding (optical/IR imagery vs. radar return patterns), but the underlying event-driven, on-edge processing approach would be almost identical.

While there’s no public confirmation of Akida’s role, the combination of technical fit and partnership history makes the possibility hard to dismiss IMO, especially considering the following known connections:
  • BrainChip + Arquimea – BrainChip and Arquimea jointly developed a real-world search-and-rescue drone system using Akida with Prophesee’s event-based camera to detect distressed swimmers.
  • Lockheed + BrainChip – Through the CyberNeuro-RT (CNRT) project with Quantum Ventura, Lockheed has already evaluated Akida’s edge-AI capabilities in cybersecurity applications.
  • Lockheed (Skunk Works) + Arquimea – Previous collaborations include anomalous target detection (e.g., drone monitoring), where the technical approach closely mirrors the known BrainChip + Arquimea solution.

When you line up the SWaP requirements, the real-time edge inference needs, and these overlapping partnerships, the likelihood of Akida being part of at least one of these efforts seems worth considering, in my opinion.



14 April 2025 - Lockheed Martin Skunk Works + Arquimea (AI-powered ISR platform)


16 July 2025 - Lockheed Martin + LAIC + Skunk Works (AI-powered SAR)
Screenshot 2025-08-11 at 2.05.08 pm.png




Of course, I had to run this hypothesis past ChatGPT and here's the response I received.


My Question
Do you think that both of these projects could benefit from BrainChip's neuromorphic technology in similar ways, and if so, how?


ChatGPT's Response
Yes — both projects could realistically benefit from neuromorphic computing in very similar ways, even though they deal with different sensing modalities (EO/IR vs. SAR).

Here’s how they align:

1. Common Processing Challenge

Both systems require real-time, on-device (edge) inference on potentially huge data streams — EO/IR cameras in the first case, and high-resolution Synthetic Aperture Radar in the second.
  • EO/IR anomaly detection → high frame rates, continuous monitoring, looking for small deviations in patterns.
  • SAR ATR → extremely data-heavy radar sweeps, with target classification needing to happen instantly to re-task the sensor.
Traditional CPUs/GPUs are power-hungry, latency-prone, and depend on cloud or ground station links in many designs. Neuromorphic hardware like BrainChip’s Akida is designed to process event-based or sparse data locally, reducing bandwidth and power draw.

2. How Neuromorphic Computing Could Be Used

In both scenarios, neuromorphic processors could:
  1. Event-based pre-filtering
    • Only process relevant changes in the scene (e.g., sudden thermal hotspot in EO/IR; anomalous radar return in SAR).
    • Cuts down the amount of data that needs full-frame or full-sweep analysis (a toy sketch of this pre-filtering follows this list).
  2. Low-power, always-on monitoring
    • Neuromorphic systems can idle at micro-watt levels until triggered by an event, ideal for long-endurance ISR or maritime patrol.
  3. On-sensor anomaly detection
    • Embedding the classifier directly into the sensor payload means faster reaction times (e.g., automatically re-tasking SAR beam, zooming EO/IR camera, or alerting operators without waiting for remote processing).
  4. Adaptive learning at the edge
    • Akida-like architectures can perform on-device incremental learning, allowing the system to adapt to new patterns (e.g., changing sea clutter profiles or new vessel signatures) without a complete retrain in the cloud.
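Here is the toy sketch promised above for item 1 (my illustration of generic frame differencing, not anything confirmed about Lockheed’s or BrainChip’s pipelines): only pixels that change enough between frames become “events”, so downstream full-frame analysis runs on a sparse fraction of the data.

```python
# Event-style pre-filter: forward only the pixels whose intensity changed
# by more than a threshold between consecutive frames.
import numpy as np

def events_from_frames(prev, curr, threshold=25):
    """Return (rows, cols, polarity) for pixels whose intensity changed
    by more than `threshold` between two frames."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    rows, cols = np.nonzero(np.abs(diff) > threshold)
    return rows, cols, np.sign(diff[rows, cols])

rng = np.random.default_rng(1)
prev = rng.integers(0, 180, size=(240, 320), dtype=np.uint8)  # baseline frame
curr = prev.copy()
curr[100:110, 150:160] += 60          # a small "hotspot" appears in the next frame
rows, cols, pol = events_from_frames(prev, curr)
print(f"{rows.size} events out of {prev.size} pixels "
      f"({100 * rows.size / prev.size:.2f}% of the frame forwarded downstream)")
```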

3. Why Both Could Use It “in similar ways”

Even though one is optical/thermal (EO/IR) and the other is radar-based:
  • Core problem = detecting anomalies or targets in a noisy environment fast and without sending raw data to a remote processor.
  • Neuromorphic hardware would play the same functional role:
    • Act as a low-power edge AI accelerator doing first-pass filtering/classification.
    • Free up heavier processing hardware for more detailed secondary analysis only when needed.

💡 Bottom line:
Yes — if Lockheed wanted to minimise SWaP, reduce latency, and add adaptive capabilities in both projects, neuromorphic processors could be inserted at the sensor front-end in a similar architectural position. The main difference would be the feature encoding (EO/IR images vs. SAR radar returns), but the processing philosophy would be nearly identical.

If you want, I can map out exactly where Akida could sit in both system architectures side-by-side so the similarities become obvious. That would make your “dot connecting” argument much stronger.

You said:

Yes please.

ChatGPT said:

Alright — here’s a side-by-side architecture map showing how something like BrainChip’s Akida neuromorphic processor could slot into both systems and serve almost the same functional role.

1) Skunk Works & Arquimea – EO/IR Anomaly Detection

(April 14, 2025)
Mission: Detect unusual activity or patterns in electro-optical and infrared imagery for ISR (Intelligence, Surveillance, Reconnaissance) platforms using episodic memory neural networks.
Possible system architecture with Akida:
EO/IR Sensor →
Pre-processing (sensor-level calibration, noise reduction) →
**Akida Processor**
- Event-based feature extraction (only pass on pixel changes or anomalies)
- Pattern recognition via embedded episodic memory
- Trigger alerts or direct gimbal/zoom commands

Main Mission Computer (GPU/CPU)
- Secondary deep analysis
- Multi-sensor fusion
- Data link transmission (if needed)
🔹 Akida's role here:
  • Filters frames in real time, detecting only significant deviations from baseline patterns.
  • Operates at ultra-low power to allow always-on sensing without draining ISR platform resources.
  • Handles incremental learning at the edge for adapting to changing environments (e.g., seasonal lighting, thermal variations).

2) Lockheed – AI-Powered SAR ATR for Maritime Target Recognition

(July 16, 2025)
Mission: Use Synthetic Aperture Radar (SAR) to automatically recognize and classify maritime targets in near real time, with autonomous sensor control.
Possible system architecture with Akida:
SAR Sensor Sweep →
Signal Conditioning (FFT, Doppler processing, image formation) →
**Akida Processor**
- Event-like radar return change detection
- Low-power ATR pre-classification
- Trigger re-tasking commands to radar in milliseconds

Main Mission Computer (GPU/CPU)
- Advanced ATR algorithms
- Fusion with other sensors (EO/IR, AIS data)
- Operator UI or autonomous action
🔹 Akida's role here:
  • Rapidly flags unusual radar signatures or deviations from known vessel profiles.
  • Reduces data throughput by only sending “interesting” frames to main processors.
  • Enables real-time sensor control without human latency (e.g., beam steering or waveform change immediately after anomaly detection).

Shared neuromorphic advantages in both projects

Capability                | EO/IR Anomaly Detection | SAR ATR Maritime Recognition
--------------------------|-------------------------|------------------------------
Event-based processing    | Yes – pixel change      | Yes – radar return change
Low-power, always-on      | Yes                     | Yes
On-device classification  | Yes – anomalies         | Yes – targets
Real-time sensor control  | Yes – gimbal/zoom       | Yes – radar retasking
Adaptive edge learning    | Yes – scene adaptation  | Yes – clutter adaptation


💡 Why your hypothesis is technically sound:
Both systems need:
  • Real-time inference directly on the ISR platform.
  • Minimal SWaP (size, weight, power) for airborne deployment.
  • Ability to adapt locally without full retraining.
  • Pre-filtering to cut down on the amount of raw sensor data sent to main processors or ground stations.
These requirements align very closely with Akida’s edge neuromorphic processing model, which is why — if Lockheed and Arquimea were considering a common hardware accelerator strategy — they could realistically deploy it in both projects.
 
  • Love
  • Like
Reactions: 4 users

Diogenese

Top 20
Do you mean the Tesla files 😅
An extract:
"...
The documents that Krupski claims he was able to download from the internal project management system included thousands of customer complaints about Tesla's Autopilot, the company's most strategically important project. They did not match the claims that Musk had been making for years about the alleged quality of his software.
..."

He's doged dojo:

Tesla scraps custom Dojo wafer-level processor initiative, dismantles team — Musk to lean on Nvidia and AMD more


https://www.tomshardware.com/tech-i...tles-team-musk-to-lean-on-nvidia-and-amd-more
 