BRN Discussion Ongoing

FF

https://www.nokia.com/newsroom/noki...-of-the-2019-nokia-open-innovation-challenge/



Charlotte Savage, Founder of HaiLa, said,
“Such an honor to be chosen as the winner, thank you to Nokia for this incredible opportunity. However, this was never about the competition. The goal was to establish a real partnership with one of the leaders of the networking industry to enable sustainable wireless communication. We look forward to being at the forefront of carbon emission reduction with Nokia.”

While Murata alone would be enough, Nokia is also standing in the background.

My opinion only DYOR

Fact Finder
 
  • Like
  • Love
Reactions: 8 users
As per the original discovery of QANA on GitHub...


And @Frangipani additional discovery of a subsequent paper on it...


It was updated just last night: what was initially based on the Akida 1000 (as per the posts above) now appears to include further work using the Akida Gen 2 NSoC and emulation.


4.2.2 Neuromorphic Deployment Hardware: BrainChip Akida

Neuromorphic inference is performed on the BrainChip Akida neuromorphic processor, which supports direct execution of models compiled with CNN2SNN.

Supported execution targets:

Akida Gen 2 Evaluation Board (NSoC)

  • Executes .fbz neuromorphic binary generated by CNN2SNN
  • Provides:
    • Real-time event-driven inference
    • Temporal spike propagation
    • On-chip quantized operators
    • Accurate latency and energy profiles
In practice, the board enables:

  • Measurement of true inference latency,
  • Evaluation of energy-per-image,
  • Profiling of spike density and distribution layer-by-layer.

Akida Software Backend (CPU Emulation)

When dedicated hardware is unavailable, the Akida SDK automatically falls back to
a CPU-based neuromorphic emulator.

It faithfully reproduces:

  • Spike generation
  • Temporal accumulation
  • Layer-by-layer SNN behavior
  • SNN-compatible quantization dynamics
This ensures reproducibility even without neuromorphic hardware.
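As a rough illustration of the hardware-or-emulation fallback described above, the selection logic might look like the following. This is a hypothetical sketch: `detect_devices` and `Backend` are illustrative stand-ins for the Akida SDK's actual device-discovery API (e.g. `akida.devices()`), not real SDK calls.

```python
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    hardware: bool  # True for the Akida eval board, False for CPU emulation


def detect_devices() -> list:
    """Stand-in for SDK device discovery; returns attached Akida boards."""
    return []  # no board attached in this example


def select_backend() -> Backend:
    """Prefer real neuromorphic hardware; otherwise fall back to emulation."""
    devices = detect_devices()
    if devices:
        # Board present: real-time event-driven inference with true
        # latency and energy profiles.
        return Backend(name=devices[0], hardware=True)
    # No board: fall back to the CPU emulator, which reproduces spike
    # generation and layer-by-layer SNN behaviour, but not real
    # latency/energy figures.
    return Backend(name="cpu-emulation", hardware=False)


backend = select_backend()
print(backend.name)  # -> cpu-emulation (no board attached in this sketch)
```

The point is the paper's claim about reproducibility: the same compiled model runs on either target, with only the profiling fidelity changing.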
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Tothemoon24

Top 20

IMG_1835.jpeg




Five years ago, “AI” largely meant giant models running in faraway data centers. Today the story is different: intelligence is migrating to the device itself, in phones, drones, health wearables, factory sensors. This shift is not merely cosmetic; it forces hardware designers to ask: how do you give a tiny, thermally constrained device meaningful perception and decision-making power? As Qualcomm’s leadership puts it, the industry is “in a catbird seat for the edge AI shift,” and the battle is now about bringing capable, power-efficient AI onto the device.


Why edge matters: practical constraints, human consequences

There are three blunt facts that drive this migration: latency (milliseconds matter for robots and vehicles), bandwidth (you can’t stream everything from billions of sensors), and privacy (health or industrial data often can’t be shipped to the cloud). The combination changes priorities: instead of raw throughput for training, the trophy is energy per inference and predictable real-time behavior.
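To make “energy per inference” concrete, here is a back-of-the-envelope comparison; the power and throughput figures are invented for illustration, not measurements of any real device.

```python
def energy_per_inference_mj(power_w: float, inferences_per_s: float) -> float:
    """Average energy per inference in millijoules (power / rate, in mJ)."""
    return power_w / inferences_per_s * 1000.0


# Hypothetical numbers: a 2 W accelerator at 100 inferences/s versus a
# 200 mW always-on edge chip at 20 inferences/s.
fast_but_hot = energy_per_inference_mj(2.0, 100.0)   # ~20 mJ per inference
slow_but_cool = energy_per_inference_mj(0.2, 20.0)   # ~10 mJ per inference

# The "slower" chip wins on the metric that matters for battery life.
print(fast_but_hot, slow_but_cool)
```

This is why raw throughput is the wrong trophy at the edge: a lower-throughput part can still halve the energy bill per result.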

How the hardware world is responding

Hardware paths diverge into pragmatic, proven accelerators and more speculative, brain-inspired designs.

  1. Pragmatic accelerators: TPUs, NPUs, heterogeneous SoCs.
    Google’s Edge TPU family and Coral modules demonstrate the pragmatic approach: small, task-tuned silicon that runs quantized CNNs and vision models with tiny power budgets. At the cloud level Google’s new TPU generations (and an emerging Ironwood lineup) show the company’s ongoing bet on custom AI silicon spanning cloud to edge.
  2. Mobile/SoC players double down: Qualcomm and others are reworking mobile chips for on-device AI, shifting CPU microarchitectures and embedding NPUs to deliver generative and perception workloads in phones and embedded devices. Qualcomm’s public positioning and product roadmaps are explicit: the company expects edge AI to reshape how devices are designed and monetized.
  3. In-memory and analog compute: designed to beat the von Neumann cost of moving data. Emerging modules and research prototypes put compute inside memory arrays (ReRAM/PCM) to slash energy per operation, an attractive direction for always-on sensing.
The wild card: neuromorphic computing

If conventional accelerators are an evolutionary path, neuromorphic chips are a more radical reimagination. Instead of dense matrix math and clocked pipelines, neuromorphic hardware uses event-driven spikes, co-located memory and compute, and parallel sparse operations — the same tricks biology uses to run a brain on ~20 W.
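The event-driven idea can be seen in a minimal leaky integrate-and-fire (LIF) neuron: work happens only when an input spike arrives, membrane potential leaks away between events, and output spikes are sparse. The parameters below are arbitrary illustrations, not any particular chip's neuron model.

```python
import math


def lif_spikes(event_times, weight=0.6, tau=10.0, threshold=1.0):
    """Return the times at which a LIF neuron fires, given input spike times.

    The membrane potential decays exponentially between input events and
    is updated only when an event arrives -- no dense clocked loop.
    """
    v, last_t, out = 0.0, 0.0, []
    for t in event_times:
        v *= math.exp(-(t - last_t) / tau)  # leak since the previous event
        v += weight                          # integrate the incoming spike
        last_t = t
        if v >= threshold:
            out.append(t)  # fire only when threshold is crossed
            v = 0.0        # reset after firing
    return out


# Two closely spaced spikes push the neuron over threshold; an isolated
# late spike does not, because the potential has leaked away.
print(lif_spikes([1.0, 2.0, 3.0, 30.0]))
```

Only the second input (arriving before the first has leaked away) triggers an output, which is exactly the sparsity that keeps neuromorphic power budgets low.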

Intel, one of the earliest movers, says the approach scales: Loihi research chips and larger systems (e.g., the Hala Point neuromorphic system) show how neuromorphic designs can reach hundreds of millions or billions of neurons while keeping power orders of magnitude lower than conventional accelerators for certain tasks. Those investments signal serious industrial interest, not just academic curiosity.

Voices from the field: what leaders are actually saying

  • “We’re positioning for on-device intelligence not just as a marketing line, but as an architecture shift,” a paraphrase of Qualcomm leadership describing the company’s edge AI strategy and roadmap.
  • “Neuromorphic systems let us explore ultra-low power, event-driven processing that’s ideal for sensors and adaptive control,” Intel’s Loihi programme commentary on the promise of on-chip learning and energy efficiency.
  • A recent industry angle: big platform moves (e.g., companies making development boards and tighter dev ecosystems available) reflect a desire to lower barriers. The Qualcomm–Arduino alignment and new low-cost boards aim to democratize edge AI prototyping for millions of developers.
Where hybrid architecture wins: pragmatic use cases

Rather than “neuromorphic replaces everything,” the likely near-term scenario is hybrid systems:

  • Dense pretrained CNNs (object detection, segmentation) run on NPUs/TPUs.
  • Spiking neuromorphic co-processors handle always-on tasks: anomaly detection, low-latency sensor fusion, prosthetic feedback loops.
  • Emerging in-memory modules reduce the energy cost of massive matrix multiplies where appropriate.
Practical example: an autonomous drone might use a CNN accelerator for scene understanding while a neuromorphic path handles collision avoidance from event cameras with microsecond reaction time.

Barriers: the messy middle between lab and product

  • Algorithmic mismatch: mainstream ML is dominated by backpropagation and dense tensors; mapping these workloads efficiently to spikes or in-memory analog is still an active research problem.
  • Tooling and developer experience: frameworks like PyTorch/TensorFlow are not native to SNNs; toolchains such as Intel’s Lava and domain projects exist but must mature for broad adoption.
  • Manufacturing & integration: moving prototypes into volume production and integrating neuromorphic blocks into SoCs poses yield and ecosystem challenges.
Market dynamics & the investment climate

There’s heavy capital flowing into edge AI and neuromorphic startups, and forecasts project notable growth in neuromorphic market value over the coming decade. That influx is tempered by a broader market caution — public leaders have noted hype cycles in AI investing but history shows that even bubble phases can accelerate technological foundations that persist.

Practical advice for engineering and product teams

  1. Experiment now: prototype with Edge TPUs/NPUs and cheap dev boards (Arduino + Snapdragon/Dragonwing examples are democratizing access) to validate latency and privacy requirements.
  2. Start hybrid design thinking: split workloads into dense inference (accelerator) vs event-driven (neuromorphic) buckets and architect the data pipeline accordingly.
  3. Invest in tooling and skill transfer: train teams on spiking networks, event cameras, and in-memory accelerators, and contribute to open frameworks to lower porting costs.
  4. Follow system co-design: unify hardware, firmware, and model teams early; the edge is unforgiving of mismatches between model assumptions and hardware constraints.
Conclusion: what will actually happen

Expect incremental but practical wins first: more powerful, efficient NPUs and smarter SoCs bringing generative and perception models to phones and industrial gateways. Parallel to that, neuromorphic systems will move from research novelties into niche, high-value roles (always-on sensing, adaptive prosthetics, extreme low-power autonomy).

The real competitive winners will be organizations that build the whole stack: silicon, software toolchains, developer ecosystems, and use-case partnerships. In short: intelligence will increasingly live at the edge, and the fastest adopters will design for hybrid, energy-aware systems where neuromorphic and conventional accelerators complement, not replace, each other.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

7für7

Top 20
FF

https://www.nokia.com/newsroom/noki...-of-the-2019-nokia-open-innovation-challenge/



Charlotte Savage, Founder of HaiLa, said,
“Such an honor to be chosen as the winner, thank you to Nokia for this incredible opportunity. However, this was never about the competition. The goal was to establish a real partnership with one of the leaders of the networking industry to enable sustainable wireless communication. We look forward to being at the forefront of carbon emission reduction with Nokia.”

While Murata alone would be enough, Nokia is also standing in the background.

My opinion only DYOR

Fact Finder


“Helping enable edge-located intelligence to be benefited from is another key prospect that is now starting to be explored. Cooperative work with Brain Chip is already underway. Together, the companies have shown how applications like anomaly detection and condition monitoring can be addressed by bringing together Brain Chip’s Akida event-based AI processor with HaiLa’s BSC2000 RF chip. “What we're starting to see is some of the sensing companies out there want to do intelligent manipulation of acquired data before sending it across the network, so as to keep duty cycles down, but they need to do it in battery-powered devices that run off very small energy storage reserves. By working together with Brain Chip, we can provide them with a combination of the radio and neural processing capabilities they are looking for, while keeping the system level power consumption in the hundreds of µW,” Kuhn stated.”
 
  • Like
  • Love
Reactions: 6 users

Diogenese

Top 20

Hi TTM,

We keep seeing this furphy about spiking neuromorphic being used for the less compute-intensive tasks (always on watchdog), but not for heavy lifting like classification.

I think this arises from conflating the limitations of analog hardware with SNNs in general. Analog struggles with multiple bits, which is a problem when a neuron needs to process hundreds of synaptic spikes.

However, Akida is moving to FP32, which I think may give it CPU/GPU-like mathematical precision beyond its inference/classification capabilities. Already, GenAI is in the pipeline. Who knows where that could lead when integrated with RISC-V?
 
  • Like
  • Fire
  • Wow
Reactions: 21 users
I hear you Food. Yes, it's frustrating, and any 5-, 10- or more-year holders would be lying if they didn't admit to some level of frustration; some may even use different words or expressions. But keep focusing on the end game: we have progressed a tremendous amount over the last 3.5 years.

To deny that we are structured better, or that we are engaged with some super big players, who are the real gatekeepers of how their business models will unfold, their timelines, answering to their clients and stakeholders, would be wrong. To me it's like a big pack of dominoes (not the pizza variety) :ROFLMAO: and we are all interconnected at different points in time; it takes many tech companies coming together to deliver the ultimate product, well, in most cases, from my understanding.

Can you imagine the level of excitement that will be exploding from within the walls of our US headquarters when we really hit solid paydirt: all our scientists, PhDs, all our top-class engineers and office admin staff etc. will be feeling the overwhelming joy of real success, and I think they will have earnt it. Try for a minute to appreciate what they have achieved to date; it's a lot more than many give our staff credit for.

Yes, we want real revenue streams, and we want them yesterday. Don't you think our staff want the same thing? Of course they do.

Hang tight is my suggestion, our tech is brilliant.

💘 AKD
$10 a share brilliant?
 
  • Like
  • Haha
Reactions: 5 users

IloveLamp

Top 20
1000014361.jpg
1000014358.jpg
 
  • Like
  • Love
Reactions: 9 users

TECH

Regular
We’ve known since March 2024 that Sony AI (Switzerland) partially funded research done at Uni Tübingen that involved event-cameras and neuromorphic processors. Remember the video with the table tennis-playing robot arm?

The associated paper titled “Detection of Fast-Moving Objects with Neuromorphic Hardware” - first published in March 2024 and revised in September 2024 - described experiments with an event-based camera and three neuromorphic processors: Akida, DynapSNN (SynSense) and Loihi.

The paper’s first author was Andreas Ziegler, a PhD Candidate in Robotics and Computer Vision in collaboration with the Cognitive Systems Group at the University of Tübingen and Sony AI. From November 2023 to March 2024 he was a research intern at Sony AI’s Europe office in Schlieren near Zurich.

Paper and video can be found here:


Akida came out top in the benchmarking at the time - however, the Sony AI researchers emphasised that the neuromorphic processors’ different form factors had also influenced the results. I commented on this last October:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-437994

View attachment 93465



So does the fact that Akida fared best in the above benchmarking mean that Sony now only has eyes for BrainChip with regard to neuromorphic computing and is no longer interested in what our competitors have to offer?

Not at all.

Sony researchers based in Switzerland and Japan have just published two papers, one describing a prototype and the other a proof-of-concept, utilising neuromorphic hardware developed by SynSense (“Realizing Fully-Integrated, Low-Power, Event-Based Pupil Tracking with Neuromorphic Hardware”, based on the commercially available Speck SoC) and by Intel (“Privacy-preserving fall detection at the edge using Sony IMX636 event-based vision sensor and Intel Loihi 2 neuromorphic processor”), respectively:



View attachment 93459



View attachment 93456

View attachment 93457


View attachment 93458
View attachment 93466




View attachment 93460 View attachment 93461

Even Alf Kuchenbuch gave Lyes Khacef’s post a 👍🏻 (who has in turn liked a number of BrainChip posts over the years).

Also note the comment by Mike Davies.



View attachment 93462 View attachment 93467
Beautiful research. May I point out that no one at Brainchip, nor on any forum I have observed over the last decade, has ever insinuated that Brainchip's Akida event-based processor, or any future iterations, would command the entire Edge AI market (10%, maybe?). So of course there are other players who may move into the commercial space, and that's fantastic: more exposure and more competition is healthy.

A share of the pie would be welcomed by all Brainchip shareholders and staff, but at the end of the day, it's not us who decides who signs with who, is it ?

I do enjoy your balanced posts and keeping everyone grounded, thanks.

Tech (Perth)
 
  • Like
Reactions: 10 users

Diogenese

Top 20
Beautiful research. May I point out that no one at Brainchip, nor on any forum I have observed over the last decade, has ever insinuated that Brainchip's Akida event-based processor, or any future iterations, would command the entire Edge AI market (10%, maybe?). So of course there are other players who may move into the commercial space, and that's fantastic: more exposure and more competition is healthy.

A share of the pie would be welcomed by all Brainchip shareholders and staff, but at the end of the day, it's not us who decides who signs with who, is it ?

I do enjoy your balanced posts and keeping everyone grounded, thanks.

Tech (Perth)
Hi Tech,

I think TENNs will be Akida's pie-sticking/plum-pulling thumb.
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

TECH

Regular
Hi Tech,

I think TENNs will be Akida's pie-sticking/plum-pulling thumb.
Good on you Dio, by the way, I did try to get a more detailed answer following the initial reply, but all quiet on the western front.

Regards Colonel Hogan 🤣
 
  • Like
Reactions: 4 users

7für7

Top 20
Mixed feelings somehow... I wanted to go to sleep, but had a look at the German market first...

Anyone who wants to enjoy this view?

IMG_8763.jpeg
 
  • Like
  • Thinking
Reactions: 8 users

Frangipani

Top 20
Last edited:
  • Like
  • Love
Reactions: 15 users

Frangipani

Top 20
ANavS GmbH (Advanced Navigation Solutions), a Munich-based company “providing accurate positioning and mapping systems based on sensor fusion and AI approaches”, has been playing with Akida, as evidenced by an AKD1000 PCIe Card on the poster they presented at the symposium ‘Research and Technology for Autonomous Driving’ in Berlin.

The poster is titled “HybridNeuroPerception - Energieeffiziente hybride KI-Perzeption mit neuromorpher und klassischer Elektronik für mobile Plattformen” (“HybridNeuroPerception - Energy-efficient hybrid AI perception with neuromorphic and classical electronics for mobile platforms”).

ANavS was founded in 2011 as a TUM (Technical University of Munich) spin-off.



41440109-9F4C-4C24-ABAD-93F1AA8A4D92.jpeg



9B322961-6EA7-482E-A6B6-5F85D42226AE.jpeg


https://www.linkedin.com/company/anavs-–-advanced-navigation-solutions/about/

A1644874-7FF5-4F02-B98B-1FCE3E4A8E54.jpeg

84B45124-2E81-4568-BDDD-E28F621D2F94.jpeg



C11EBB30-BEB7-43A0-B647-E45F65F90E75.jpeg





According to NorthData, ANavS received 400,404 € in funding for the HybridNeuroPerception project in August.

While I cannot open the link to find out more (you need to be a premium service customer to do that), the ANavS LinkedIn post tells us that the funding came from the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung), which was actually officially renamed Federal Ministry of Research, Technology and Space (Bundesministerium für Forschung, Technologie und Raumfahrt) earlier this year.

I can also spot the FZI logo (Forschungszentrum Informatik in Karlsruhe / FZI Research Center for Information Technology) as well as that of a company from Ulm called InMach (Intelligente Maschinen GmbH) - I assume they would be the project’s consortium partners, and ANavS the project lead.


D2E86EA2-8381-46C2-93D1-E7C836C4DDE1.jpeg




AF525904-E17A-4FBD-92C7-F5234D7942CD.jpeg





AF5E082C-469A-4BDF-AA7C-D0077ED94F46.jpeg
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 24 users

Frangipani

Top 20
Mercedes-Benz CTO Markus Schäfer (60), who has had a successful 30-year career with the Stuttgart-headquartered German carmaker, will leave the company at the conclusion of his current contract, which ends 30 November 2025:



Changes to the Board of Management of Mercedes-Benz Group AG.

September 24, 2025 – The Supervisory Board of Mercedes-Benz Group AG has decided to appoint Michael Schiebe – currently CEO of Mercedes-AMG GmbH and Head of the Top End Vehicle (TEV) Group – to succeed Jörg Burzer as Member of the Board of Management responsible for Production, Quality & Supply Chain Management. As of 1 December 2025, Jörg Burzer will then take over responsibility for the Board Division of Development & Procurement and the role of Chief Technology Officer from Markus Schäfer, who will retire from the company at the conclusion of his contract after more than 30 highly successful years.

With these personnel decisions, the Supervisory Board of Mercedes-Benz Group AG is consistently pursuing its strategy of maintaining experience and continuity in top management while providing fresh impetus and a deliberate rejuvenation on the Board of Management. The objective is to further accelerate the company’s transformation with a clear focus on customer benefit, technological excellence and operational efficiency. The upcoming change in the CTO position stands for renewal in times of transformation and dynamic market challenges. Building on the recent successes of its product initiatives, Mercedes-Benz thereby strengthens an agile, efficient and innovative vehicle development. The company thus creates room for new ideas and creative approaches, while sustainably consolidating the innovative strength of both organization and products.

Martin Brudermüller, Chairman of the Supervisory Board of Mercedes-Benz Group AG​

“Two outstanding managers from our own ranks, Jörg Burzer and Michael Schiebe, are taking over key divisions that are of decisive importance for the future success of Mercedes-Benz Group. In recent years, Jörg Burzer has taken vehicle production at Mercedes-Benz to a new level with vision, consistency and a clear focus on innovation, flexibility and efficiency. His impressive strength in execution and his ability to combine technological developments with industrial realization make him the ideal choice to lead and further develop Development & Procurement. Michael Schiebe brings more than 20 years of highly varied experience at Mercedes-Benz, which he will apply to leading and continuously enhancing the division of Production, Quality & Supply Chain Management. Most recently at Mercedes-AMG, he has demonstrated how he combines strategic thinking and decisive action with operational excellence to great effect. We expect further improvements in our competitiveness through even closer cooperation between Production and Development.

With Markus Schäfer we say farewell to a highly esteemed colleague, who commands the highest respect both internally and externally. As the architect of our technology strategy, he has been instrumental in driving Mercedes-Benz’s transformation from a traditional car manufacturer to the electrification of our portfolio and the integration of digital systems. The current product offensive clearly bears his signature. His commitment to technological excellence and his deep connection to the Mercedes-Benz brand are reflected not only in his long-standing service to the company but also in his role as Chairman of the Supervisory Board of Mercedes-Benz Grand Prix Ltd. On behalf of the entire Supervisory Board, I thank Markus Schäfer for his outstanding achievements and wish him every possible success for the future.”

Jörg Burzer
has been a Member of the Board of Management of Mercedes-Benz AG since 2019 and of Mercedes-Benz Group AG since December 2021. He is currently responsible for Production, Quality & Supply Chain Management. In this role, he oversees the global production network with more than 30 sites for vehicles, powertrains and batteries, as well as worldwide logistics processes. Previously, he held various international leadership positions within the company. He began his career at the former DaimlerChrysler AG in 1999 after completing his degree in engineering (Diplom-Ingenieur) at the University of Erlangen-Nuremberg and earning a doctorate (Dr.-Ing.).

View attachment 91498
Jörg Burzer.

Michael Schiebe has been CEO of Mercedes-AMG GmbH and Head of the Top End Vehicle (TEV) Group since March 2023. He has been with the company since 2004, starting his career in the area of Strategic Product Projects at the former Daimler AG, later moving into Controlling and Marketing & Sales. Among other roles, he served as President & CEO of Mercedes-Benz Luxembourg S.A. and subsequently headed Mercedes-Benz Passenger Car Sales in Germany. From 2020 to 2023, he reported directly to CEO Ola Källenius as Chief of Staff and Head of Corporate Office Mercedes-Benz Group AG.

View attachment 91499
Michael Schiebe.

Markus Schäfer has been a Member of the Board of Management since May 2019. He has been responsible for Development & Procurement and has served as CTO since December 2021. In this capacity, he has been responsible for the holistic development process of Mercedes-Benz Cars as well as Procurement. Schäfer began his career in 1990 through the international management associate programme. Over the following decades, he held numerous leadership roles in Germany and abroad, including Plant Manager in Egypt, President & CEO of the Mercedes-Benz plant in Tuscaloosa (USA), Head of Production Planning, Divisional Board Member for Mercedes-Benz Cars Production & Supply Chain, and Chief Operating Officer.

View attachment 91501
Markus Schäfer.

From Road to R.A.I.L.

Markus Schäfer, until last week CTO of Mercedes-Benz and Member of their Board of Management, is embarking on a new adventure as co-founder of Russell AI Labs, a startup in Silicon Valley!

Russell AI Labs (R.A.I.L.) will be “Building and Backing Next-Generation Intelligent Technologies” - they call themselves an AI Foundry:
“We build and back transformative AI and frontier technology companies, helping them grow from breakthrough ideas into enduring global leaders.”

Markus Schäfer’s co-founders are Austin Russell, Founder of Luminar Technologies, and Murtaza Ahmed, Founder and Managing Partner of Chiltern Street Capital.



 

Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 13 users

yogi

Regular
Investors in AI — FYI: NEUROMORPHIC!

Your brain runs full human intelligence on 20 watts.
The world’s biggest AI supercomputer needs a small power plant to do far less.

That gap isn’t just big — it’s the spark for the biggest computing revolution since the transistor.

Neuromorphic chips = biology’s cheat code, now in silicon:

 
  • Like
  • Fire
  • Love
Reactions: 27 users

Frangipani

Top 20
ANavS GmbH (Advanced Navigation Solutions), a Munich-based company “providing accurate positioning and mapping systems based on sensor fusion and AI approaches”, has been playing with Akida, as evidenced by an AKD1000 PCIe Card on the poster they presented at the symposium ‘Research and Technology for Autonomous Driving’ in Berlin.

The poster is titled “HybridNeuroPerception - Energieeffiziente hybride KI-Perzeption mit neuromorpher und klassischer Elektronik für mobile Plattformen” (HybridNeuroPerception - Energy-efficient hybrid AI perception with neuromorphic and classical electronics for mobile platforms”).

ANavS was founded in 2011 as a TUM (Technical University of Munich) spin-off.



View attachment 93487


View attachment 93488

https://www.linkedin.com/company/anavs-–-advanced-navigation-solutions/about/

View attachment 93489
View attachment 93490


View attachment 93495




According to NorthData, ANavS received 400,404 € in funding for the HybridNeuroPerception project in August.

While I cannot open the link to find out more (you need to be a premium service customer to do that), the ANavS LinkedIn post tells us that the funding came from the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung), which was actually officially renamed Federal Ministry of Research, Technology and Space (Bundesministerium für Forschung, Technologie und Raumfahrt) earlier this year.

I can also spot the FZI logo (Forschungszentrum Informatik in Karlsruhe / FZI Research Center for Information Technology) as well as that of a company from Ulm called InMach (Intelligente Maschinen GmbH) - I assume they would be the project’s consortium partners, and ANavS the project lead.


View attachment 93491



View attachment 93492




View attachment 93493

Further to my earlier post about the Munich-headquartered company ANavS (Advanced Navigation Solutions), which has been doing research with Akida for their HybridNeuroPerception project:

The two gentlemen in the photo posted on LinkedIn earlier today are ANavS Founder and General Manager Patrick Henkel and Head of Computer Vision Robert Bensch, respectively.





ANavS’s list of customers is quite impressive:


7C5B3CEE-1879-4F8D-ABBF-B5753770CD8F.jpeg
 

  • Like
  • Love
  • Fire
Reactions: 17 users

AARONASX

Holding onto what I've got
Alright who got greedy? :D
1764801307704.png
 
  • Like
  • Love
  • Fire
Reactions: 18 users
Hi Tech,

I think TENNs will be Akida's pie-sticking/plum-pulling thumb.
Pie.PNG

I think we will have our hands full :D
 
  • Like
  • Haha
  • Love
Reactions: 7 users
Top Bottom