BRN Discussion Ongoing

cosors

👀

"BrainChip Boasts of a Strong Ecosystem for Its Akida Edge AI Box​

A post-pre-order price hike to $1,495, though, puts a damper on its claims of cost efficiency for direct deployment.​


Neuromorphic edge artificial intelligence (AI) specialist BrainChip is celebrating a year since it unveiled its Akida Edge AI Box development platform at the Consumer Electronics Show (CES) in Las Vegas — and it's back in Sin City again this year to announce "an all-star lineup" of partners on the project.

"The Akida Edge Box is a great platform for running AI in standalone edge environments where footprint, cost and efficiency is critical, while not compromising performance," claims BrainChip chief executive officer Sean Hehir. "We look forward to announcing more partners developing edge AI for their customers' specific use cases and more importantly, we look forward to the ideas these companies will bring to life with the Akida Edge AI Box."

BrainChip opened orders for the Akida Edge Box, developed in partnership with VVDN, back in February last year, after showcasing the device at CES 2024. The idea: delivering a low-cost single-unit development platform for those looking to experiment with the company's Akida neuromorphic processor, with two AKD1000 chips installed in the compact device alongside an NXP Semiconductors i.MX 8M Plus system-on-chip.

While there's no new Akida Edge Box for CES 2025, BrainChip still has plenty to announce in the form of partnership on the project. The Akida Edge AI Box ecosystem now includes, the company says: support in Edge Impulse for rapid AI model development, training, and deployment; gesture recognition support from BeEmotion; climate forecasting developed by AI Labs; model evaluation from DeGirum; cybersecurity projects from Quantum Ventura; and computer vision analysis from Vedya Labs.

BrainChip is also repositioning the Akida Edge AI Box, which is priced below its previous development kits: "the Akida Edge AI Box is so cost-effective," the company claims, "it can be utilized in production applications: in every patient's room to monitor their health and safety; in every store aisle to gauge shopping experience; in every car, truck, boat, or plane in the fleet to manage logistics."
The device isn't quite as affordable as it used to be, though: pre-orders for the Akida Edge Box launched in February 2024 at just $799, but the company is currently asking for $1,495 on its official web store — with a 10-12 week shipping estimate."


Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin.
 
  • Like
  • Fire
  • Wow
Reactions: 18 users

cosors

👀
Hey Gang!

I took some time out from my undercover ops (hiding behind pot plants at CES 2025) to do some reading and stumbled upon this NASA article published yesterday.

From the areas I've highlighted in orange, you can see that NASA's recently updated inventory consists of a few AI use cases describing autonomous navigation for the Perseverance Rover on Mars. I hadn't heard the term "Mars2020 Rover" referenced before and so I searched for it on TSEx and sure enough nothing came up.

What immediately came to mind was the 2020 SBIR, which I have posted below for your convenience, describing how AKIDA could potentially be utilised to make autonomous rovers travel faster. So it occurred to me that this 2020 SBIR featuring AKIDA might be part of the whole "Mars2020 Rover" thingamajig.

I did a quick Google search for "Mars2020 Rover" and found this NASA Fact Sheet from 2019. The second page states "A new autonomous navigation system will allow the rover to drive faster in challenging terrain", which 100% ties into the goals described in the 2020 SBIR!

Oh, and I might as well add that the whole NASA High Performance Spaceflight Computer (HPSC) that I've been so obsessed with, and which I'm convinced our tech will be incorporated into at some point in time, well... the HPSC runs the software that controls the spacecraft's various subsystems, such as navigation, communication, power management, etc.


The HPSC processor is being built by Microchip and will utilise SiFive's 'Intelligence' X280 core. NASA previously stated that initial availability would be sometime in 2024 (which obviously didn't occur, so maybe it will be ready this year), and the chip won't just be for space missions but is also expected to be utilised in applications on Earth such as defense, commercial aviation, robotics and medical equipment.



NASA’s AI Use Cases: Advancing Space Exploration with Responsibility​


Kate Halloran​

Jan 07, 2025
Article


NASA’s 2024 AI Use Case inventory highlights the agency’s commitment to integrating artificial intelligence in its space missions and operations. The agency’s updated inventory consists of active AI use cases, ranging from AI-driven autonomous space operations, such as navigation for the Perseverance Rover on Mars, to advanced data analysis for scientific discovery.

AI Across NASA​

NASA’s use of AI is diverse and spans several key areas of its missions:

Autonomous Exploration and Navigation

  • AEGIS (Autonomous Exploration for Gathering Increased Science): AI-powered system designed to autonomously collect scientific data during planetary exploration.
  • Enhanced AutoNav for Perseverance Rover: Utilizes advanced autonomous navigation for Mars exploration, enabling real-time decision-making.
  • MLNav (Machine Learning Navigation): AI-driven navigation tools to enhance movement across challenging terrains.
  • Perseverance Rover on Mars – Terrain Relative Navigation: AI technology supporting the rover’s navigation across Mars, improving accuracy in unfamiliar terrain.

Mission Planning and Management

  • ASPEN Mission Planner: AI-assisted tool that helps streamline space mission planning and scheduling, optimizing mission efficiency.
  • AWARE (Autonomous Waiting Room Evaluation): AI system that manages operational delays, improving mission scheduling and resource allocation.
  • CLASP (Coverage Planning & Scheduling): AI tools for resource allocation and scheduling, ensuring mission activities are executed seamlessly.
  • Onboard Planner for Mars2020 Rover: AI system that helps the Perseverance Rover autonomously plan and schedule its tasks during its mission.

Environmental Monitoring and Analysis

  • SensorWeb for Environmental Monitoring: AI-powered system used to monitor environmental factors such as volcanoes, floods, and wildfires on Earth and beyond.
  • Volcano SensorWeb: Similar to SensorWeb, but specifically focused on volcanic activity, leveraging AI to enhance monitoring efforts.
  • Global, Seasonal Mars Frost Maps: AI-generated maps to study seasonal variations in Mars’ atmosphere and surface conditions.

Data Management and Automation

  • NASA OCIO STI Concept Tagging Service: AI tools that organize and tag NASA’s scientific data, making it easier to access and analyze.
  • Purchase Card Management System (PCMS): AI-assisted system for streamlining NASA’s procurement processes and improving financial operations.

Aerospace and Air Traffic Control

  • NextGen Methods for Air Traffic Control: AI tools to optimize air traffic control systems, enhancing efficiency and reducing operational costs.
  • NextGen Data Analytics: Letters of Agreement: AI-driven analysis of agreements within air traffic control systems, improving management and operational decision-making.

Space Exploration

  • Mars2020 Rover (Perseverance): AI systems embedded within the Perseverance Rover to support its mission to explore Mars.
  • SPOC (Soil Property and Object Classification): AI-based classification system used to analyze soil and environmental features, particularly for Mars exploration.

Ethical AI: NASA’s Responsible Approach​

NASA ensures that all AI applications adhere to Responsible AI (RAI) principles outlined by the White House in its Executive Order 13960. This includes ensuring AI systems are transparent, accountable, and ethical. The agency integrates these principles into every phase of development and deployment, ensuring AI technologies used in space exploration are both safe and effective.

Looking Forward: AI’s Expanding Role​

As AI technologies evolve, NASA’s portfolio of AI use cases will continue to grow. With cutting-edge tools currently in development, the agency is poised to further integrate AI into more aspects of space exploration, from deep space missions to sustainable solutions for planetary exploration.
By maintaining a strong commitment to both technological innovation and ethical responsibility, NASA is not only advancing space exploration but also setting an industry standard for the responsible use of artificial intelligence in scientific and space-related endeavors.



View attachment 75442



Mars2020 Fact Sheet


View attachment 75443
View attachment 75444









That would be 555 times as fast with Akida on board.
[screenshot attached]

________
Radiation tolerance is certainly not a problem.

"...Frontgrade Gaisler, a leading provider of radiation-hardened microprocessors for space applications, has licensed its Akida™ IP for incorporation into space-grade, fault-tolerant system-on-chip solutions for hardware AI acceleration. ..."
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 21 users

Frangipani

Top 20

BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor

January 08, 2025 12:00 PM Eastern Standard Time
LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced the availability of its Akida™ advanced neural networking processor on the M.2 form factor, enabling a low-cost, high-speed and low-power consumption option for those looking to build their own edge AI boxes.

“BrainChip’s AKD1000 chips and boards are available today for industry evaluation, development, proof of concept and demonstration platforms with the IP available to license for integration into SoCs. Releasing the AKD1000 on the M.2 form factor continues our commitment to aid developers in creating AI solutions with our Akida IP”

BrainChip’s neural processor AI IP is an event-based technology that is inherently lower power when compared to conventional neural network accelerators. BrainChip IP supports incremental learning and high-speed inference in a wide variety of use cases, such as convolutional neural networks with high throughput and unsurpassed performance in low power budgets. The AKD1000-powered boards can be plugged into the M.2 slot – around the size of a stick of gum, with a power budget of about 1 watt – to unlock capabilities for a wide array of edge AI applications where space and power is limited and speed is critical, including industrial, factory service centers, network access devices and more.

BrainChip’s AKD1000 product is available in both B+M Key and E Key configurations of the M.2 2260 form factor. It can be purchased integrated into stand-alone Raspberry PI or Edge AI box enclosures, or for integration into custom designed products. Pricing starts at $249. Visit shop.brainchipinc.com or the Buy Now button at www.brainchip.com/.
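For anyone planning to tinker once a board arrives, here is a minimal sketch of what driving an AKD1000 card looks like from Python with BrainChip's MetaTF `akida` package. The package and calls are real as I understand the docs, but the model file name and input shape below are placeholder assumptions:

```python
# Minimal sketch: find an Akida PCIe/M.2 device and run inference on it.
# Assumes BrainChip's MetaTF `akida` package (pip install akida) and a
# model already converted to Akida format; "my_model.fbz" and the
# 224x224x3 input shape are illustrative placeholders, not a real file.
import numpy as np
import akida

# Enumerate Akida devices visible to the driver (PCIe / M.2 cards).
devices = akida.devices()
print("Available devices:", [d.desc for d in devices])

# Load a pre-converted Akida model and map it onto the first device,
# so inference runs on the AKD1000 rather than in software simulation.
model = akida.Model("my_model.fbz")
if devices:
    model.map(devices[0])

# Akida expects uint8 tensors shaped (batch, height, width, channels).
frame = np.zeros((1, 224, 224, 3), dtype=np.uint8)
outputs = model.predict(frame)
print("Output shape:", outputs.shape)
```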

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables Edge learning locally to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be designed into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like Tensorflow/Keras. In enabling effective Edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.
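On the "standard AI workflows like Tensorflow/Keras" point: the usual route is to quantize a Keras model and convert it with BrainChip's CNN2SNN tool. A rough sketch follows; the package names are real, but the exact quantization arguments follow older CNN2SNN releases and are an assumption (newer MetaTF versions move quantization into the quantizeml package), so check the current docs:

```python
# Sketch of the Keras -> Akida flow: quantize a CNN, convert it, save it
# for deployment on AKD1000 hardware. The quantize() argument names
# follow older CNN2SNN releases and are an assumption on my part.
import tensorflow as tf
from cnn2snn import quantize, convert

# A deliberately tiny Keras CNN standing in for a real model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Quantize weights/activations to Akida-friendly bit widths, then
# convert the quantized network into a spiking Akida model.
model_q = quantize(model,
                   input_weight_quantization=8,
                   weight_quantization=4,
                   activ_quantization=4)
akida_model = convert(model_q)
akida_model.save("my_model.fbz")  # loadable by the akida runtime
```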
Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006

Contacts​

Media Contact:
Mark Smith
JPR Communications
818-398-1424
Investor Relations:
Tony Dawe
Director, Global Investor Relations
tdawe@brainchip.com
 
  • Like
  • Fire
  • Love
Reactions: 45 users

"BrainChip Boasts of a Strong Ecosystem for Its Akida Edge AI Box​

A post-pre-order price hike to $1,495, though, puts a damper on its claims of cost efficiency for direct deployment.​


Neuromorphic edge artificial intelligence (AI) specialist BrainChip is celebrating a year since it unveiled its Akida Edge AI Box development platform at the Consumer Electronics Show (CES) in Las Vegas — and it's back in Sin City again this year to announce "an all-star lineup" of partners on the project.

"The Akida Edge Box is a great platform for running AI in standalone edge environments where footprint, cost and efficiency is critical, while not compromising performance," claims BrainChip chief executive officer Sean Hehir. "We look forward to announcing more partners developing edge AI for their customers' specific use cases and more importantly, we look forward to the ideas these companies will bring to life with the Akida Edge AI Box."

BrainChip opened orders for the Akida Edge Box, developed in partnership with VVDN, back in February last year, after showcasing the device at CES 2024. The idea: delivering a low-cost single-unit development platform for those looking to experiment with the company's Akida neuromorphic processor, with two AKD1000 chips installed in the compact device alongside an NXP Semiconductors i.MX 8M Plus system-on-chip.

While there's no new Akida Edge Box for CES 2025, BrainChip still has plenty to announce in the form of partnership on the project. The Akida Edge AI Box ecosystem now includes, the company says: support in Edge Impulse for rapid AI model development, training, and deployment; gesture recognition support from BeEmotion; climate forecasting developed by AI Labs; model evaluation from DeGirum; cybersecurity projects from Quantum Ventura; and computer vision analysis from Vedya Labs.

BrainChip is also repositioning the Akida Edge AI Box, which is priced below its previous development kits: "the Akida Edge AI Box is so cost-effective," the company claims, "it can be utilized in production applications: in every patient's room to monitor their health and safety; in every store aisle to gauge shopping experience; in every car, truck, boat, or plane in the fleet to manage logistics."
The device isn't quite as affordable as it used to be, though: pre-orders for the Akida Edge Box launched in February 2024 at just $799, but the company is currently asking for $1,495 on its official web store — with a 10-12 week shipping estimate."


Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin.
"with a 10-12 week shipping estimate."

Wow, that's a lot, if that's the current wait time..

Up to 3 months..

Is that due to pent up demand?
Lack of chips?
Lengthy time of production and testing?

If it's not due to pent-up demand, then having to wait that long will hurt it, and if it is, it will hurt demand going forward.

Maybe get VVDN to stop making that Nvidia junk and concentrate on our units, if production is too slow..
 
  • Like
  • Fire
  • Haha
Reactions: 12 users

Frangipani

Top 20

BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor

[BrainChip's M.2 press release, quoted in full above]



[image attachment]



 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 18 users

charles2

Regular
Unfortunately, so far, the only market-moving news has been the credit raise. Needed, but it predictably sparked a selloff.

Perhaps they are saving the best for last.

Clairvoyants need apply.
 
  • Like
  • Haha
  • Thinking
Reactions: 6 users

cosors

👀
"with a 10-12 week shipping estimate."

Wow, that's a lot, if that's the current wait time..

Up to 3 months..

Is that due to pent up demand?
Lack of chips?
Lengthy time of production and testing?

If it's not due to pent up demand, then having to wait that long, will hurt it and if it is, it will hurt demand going forward.

Maybe get VVDN, to stop making that Nvidia junk and concentrate on our units, if production is too slow..
I saw a smartwatch advertised at the current CES. It's already sold out on the manufacturer's website, which doesn't say anything about delivery times.
Weren't there similar supply bottlenecks with Nvidia's graphics cards, where reseller prices increased drastically?
 
Last edited:
  • Wow
  • Like
  • Fire
Reactions: 5 users

BrainChip Brings Neuromorphic Capabilities to M.2 Form Factor
[press release quoted in full above]
"..today announced the availability of its Akida™ advanced neural networking processor on the M.2 form factor, enabling a low-cost, high-speed and low-power consumption option for those looking to build their own edge AI boxes."

"BrainChip’s AKD1000 product is available in both B+M Key and E Key configurations of the M.2 2260 form factor. It can be purchased integrated into stand-alone Raspberry PI or Edge AI box enclosures, or for integration into custom designed products. Pricing starts at $249. Visit shop.brainchipinc.com or the Buy Now button at www.brainchip.com/."


The website hasn't been updated yet?
Because the lowest-priced item is the PCIe board, at $499...
 
  • Like
  • Thinking
Reactions: 10 users

cosors

👀
"with a 10-12 week shipping estimate."

Wow, that's a lot, if that's the current wait time..

Up to 3 months..

Is that due to pent up demand?
Lack of chips?
Lengthy time of production and testing?

If it's not due to pent up demand, then having to wait that long, will hurt it and if it is, it will hurt demand going forward.

Maybe get VVDN, to stop making that Nvidia junk and concentrate on our units, if production is too slow..
Does the higher price perhaps correlate with the delivery time?
So: high demand, higher price?
It would be nice if the higher price reflected strong demand rather than suppressing it.

Unfortunately, I can't think of an analogous example where high demand increased the OEM's own price. Maybe you can?
 
  • Thinking
  • Like
Reactions: 4 users

Quiltman

Regular
Looks like Rudy is a valuable employee, adding to the value of our investment!

[three screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 27 users

yogi

Regular
Just a thought: looks like we won't see any wow effect from CES25 this time.
 
  • Like
  • Wow
  • Sad
Reactions: 9 users

Frangipani

Top 20

View attachment 75504



I bet Nimble AI’s project coordinator Xabier Iturbe, Senior Research Engineer at IKERLAN (Basque Country, Spain), will be very pleased to hear about this new offering by BrainChip and will keep his fingers crossed that the same form factor option will be made available for the AKD1500 soon.

Today’s announcement of AKD1000 now being offered on the M.2 form factor reminded me of a post (whose author sadly made up his mind to leave the forum months ago) I had meant to reply to for ages…

@Frangipani and I have posted about Nimble AI before. I've noticed that their recent content no longer mentions Brainchip and the AKIDA 1500. It appears we've been overshadowed by IMEC, a multi-billion-dollar company and research partner on the Nimble project. IMEC is heavily involved in nearly every EU-sponsored neuromorphic project and has been developing their own SNN for several years. What is news is that in Q1 2025, IMEC plans to do a foundry run of their SNN-based neuromorphic processor called SENeCA (Scalable Energy-efficient Neuromorphic Computer Architecture).

View attachment 66880

View attachment 66881

Some details on SENeCA are in the below paper (few years old now).


Are they developing the hardware/processor themselves, even though the IP may not be in-house? Hard to tell from the info online around SENeCA. Other aspects that make me wonder about the use of Akida as the IP include references to digital IP, a RISC-V based architecture, and a design targeting GF 22nm.

I thought this was worth mentioning as IMEC could be a customer or a potential rival. If they're doing a foundry run in Q1 2025 and we're involved, I would expect some kind of IP license or arrangement beforehand. That would line up with Sean's comments around deals before end of 2024.

[screenshot attached]


I reached out directly to the project director for Nimble AI and asked has SENeCA replaced the use of Akida 1500, reply below:

View attachment 66909
Reading between the lines, it seems they have been forced to sub out Akida for IMEC’s SENeCA (which does not include our IP) due to their partnership. This means there is another confirmed competitor for SNN processors, with a chip planned for tape-out in January 2025. We need to pick up the pace. What happened to the patent fortress?

[screenshot attached]



Hi @AI_Inquirer,

what a shame you decided to leave TSE last August - miss your contributions!
Maybe you still happen to hang around, though, reading in stealth - that’s why I am addressing you anyway.

Thanks for reaching out to Xabier Iturbe, whose reply you seem to have misunderstood at the time: the way I see it, we haven't been overshadowed or replaced by imec's SENeCA chip, which was always going to be used alongside either us or the Hailo Edge AI accelerator.

Have a look at the slightly updated illustration and project description of the Nimble AI neuromorphic 3D vision prototype I had posted in May 2024:

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-424893



[image attachment]


The Nimble AI researchers were always planning to produce two different neuromorphic 3D vision prototypes based on the Prophesee IMX636 sensor manufactured by Sony, and both of them were going to use imec’s neuromorphic SENeCA chip for early perception: One will additionally have the AKD1500 as a neuromorphic processor to perform 3D perception inference. This will be benchmarked against another prototype utilising a non-neuromorphic Edge AI processor by Hailo (on an M.2 form factor).

This latter prototype has apparently been progressing well (not sure, however, whether Prophesee’s financial difficulties will now delay the 3 year EU-funded project which started in November 2022) as can be seen on their website (https://www.nimbleai.eu/technology/)…

[screenshot attached]


…as well as in this October 7, 2024 video:





As for the second prototype slated to utilise our technology, the Nimble AI researchers are hoping that BrainChip will ideally be offering the AKD1500 on an M.2 form factor - just like Hailo does and just like BrainChip does now (as of today) for the AKD1000.
I believe that’s what Xabier Iturbe was trying to tell you:

[screenshot attached]


Regards,
Frangipani
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Frangipani

Top 20
Not sure if posted here today at all but did anyone see what Nimble AI are up to with our 1500 and Hailo8 courtesy of @Rayz on the other site.

Full credit to Rayz who is a great poster over there for finding info like many others over here. If u still frequent over there, worth giving a like and a follow (y)



View attachment 74968

Perceiving a 3D world from a 3D silicon architecture
100x energy-efficiency improvement · 50x latency reduction · ≈10s of mW energy budget

Expected outcomes
  • World's first light-field dynamic vision sensor and SDK for monocular-image-based depth perception.
  • Silicon-proven implementations for use in next-generation commercial neuromorphic chips.
  • EDA tools to advance 3D silicon integration and exceed the pace of Moore's Law.
  • World's first event-driven full perception stack that runs industry standard convolutional neural networks.
  • Prototypic platform and programming tools to test new AI and computer vision algorithms.
  • Applications that showcase the competitive advantage of NimbleAI technology.

World's first Light-field Dynamic Vision Sensor Prototype

In NimbleAI, we are designing a 3D integrated sensing-processing neuromorphic chip that mimics the efficient way our eyes and brains capture and process visual information. NimbleAI also advances towards new vision modalities not present in humans, such as insect-inspired light-field vision, for instantaneous 3D perception.

The top layer in the architecture senses light and delivers meaningful visual information to processing and inference engines in the interior layers to achieve efficient end-to-end perception. NimbleAI adopts the biological data economy principle systematically across the chip layers, starting in the light-electrical sensing interface.

Key features of our chip are:
  • 3D integrated silicon: Sensing, memory, and processing components are physically fused in a 3D silicon volume to boost the communication bandwidth.
  • Sense light and depth: ONLY changing light is sensed, inspired by the retina. Depth perception is inspired by the insect compound eye.
  • Ignore or recognise?: Our chip ONLY processes feature-rich and/or critical sensor regions.
  • Process efficiently: ONLY significant neuron state changes are propagated and processed by other neurons.
  • Adaptive visual pathways: Sensing and processing are adjusted at runtime to operate jointly at the optimal temporal and data resolution.

How it works
Sensing
Sensor pixels generate visual events ONLY if/when significant light changes are detected. Pixels can be dynamically grouped and ungrouped to allocate different resolution levels across sensor regions. This mimics the foveation mechanism in eyes, which allows foveated regions to be seen in greater detail than peripheral regions.
The NimbleAI sensing layer enables depth perception in the sub-ms range by capturing directional information of incoming light by means of light-field micro-lenses by Raytrix. This is the world's first light-field DVS sensor, which estimates the origin of light rays by triangulating disparities from neighbour views formed by the micro-lenses. 3D visual scenes are thus encoded in the form of sparse visual event flows.
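To make the triangulation idea concrete: in classic stereo / light-field geometry, a feature seen with pixel disparity d between neighbouring views at baseline b and focal length f sits at depth Z = f·b/d. A toy sketch with made-up numbers (this is generic triangulation, not NimbleAI's actual pipeline):

```python
# Toy depth-from-disparity triangulation: Z = f * b / d.
# f and b below are invented numbers for illustration only; a real
# light-field DVS would calibrate these per micro-lens pair.
f_px = 800.0        # focal length in pixels (assumed)
baseline_m = 0.001  # 1 mm spacing between neighbour micro-lens views (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Depth of a feature given its disparity between neighbour views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

for d in (1.0, 2.0, 4.0):
    print(f"disparity {d:4.1f} px -> depth {depth_from_disparity(d):.3f} m")
```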
Early Perception:
Our always-on early perception engine continuously analyzes the sensed visual events in a spatio-temporal mode to extract the optical flow and identify and select ONLY salient regions of interest (ROIs) for further processing in high resolution (foveated regions). This engine is powered by Spiking Neural Networks (SNNs), which process incoming visual events and adjust foveation settings in the DVS sensor with ultra-low latency and minimal energy consumption.
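A crude stand-in for that ROI-selection step, just to illustrate the idea of saliency-by-event-density (a toy, not NimbleAI's SNN engine):

```python
# Toy ROI selection from sparse DVS events: histogram events on a coarse
# grid and foveate the busiest tile. Illustrates the concept only.
import numpy as np

H, W, TILE = 480, 640, 80
rng = np.random.default_rng(0)

# Fake DVS events as (x, y) pixel coordinates, clustered around one spot.
events = rng.normal(loc=(320, 240), scale=40, size=(5000, 2)).astype(int)
events = events[(events[:, 0] >= 0) & (events[:, 0] < W)
                & (events[:, 1] >= 0) & (events[:, 1] < H)]

# Count events per coarse tile.
grid = np.zeros((H // TILE, W // TILE), dtype=int)
np.add.at(grid, (events[:, 1] // TILE, events[:, 0] // TILE), 1)

# The busiest tile becomes the foveated region of interest.
ty, tx = np.unravel_index(grid.argmax(), grid.shape)
print(f"ROI: x={tx * TILE}..{(tx + 1) * TILE}, y={ty * TILE}..{(ty + 1) * TILE}")
```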
Processing:
Format and properties of visual event flows from salient regions are adapted in the processing engine to match data structures of user AI models (e.g., Convolutional Neural Networks - CNNs) and to best exploit optimization mechanisms implemented in the inference engine (e.g., sparsity). Processing kernels are tailored to each salient region properties, including size, shape and movement patterns of objects in those regions. The processing engine uses in-memory computing blocks by CEA and a Menta eFPGA fabric, both tightly coupled to a Codasip RISC-V CPU.
Inference with user AI models:
We are exploring the use of event-driven dataflow architectures that exploit sparsity properties of incoming visual data. For practical use in real-world applications, size-limited CNNs can be run on-chip using the NimbleAI processing engine above, while industry standard AI models can be run in mainstream commercial architectures, including GPUs and NPUs.
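As a back-of-envelope illustration of why exploiting sparsity pays off (my own toy arithmetic, not NimbleAI's figures): if hardware can skip zero activations, the executed multiply-accumulates shrink roughly in proportion to activation sparsity.

```python
# Back-of-envelope: if hardware skips zero activations, executed MACs
# shrink roughly in proportion to activation sparsity. Toy numbers only.
def effective_macs(dense_macs: int, activation_sparsity: float) -> float:
    """MACs actually executed when zero activations are skipped."""
    return dense_macs * (1.0 - activation_sparsity)

dense = 1_000_000  # dense MACs for one layer (made-up figure)
for s in (0.5, 0.8, 0.95):
    saved = 1.0 / (1.0 - s)
    print(f"sparsity {s:.0%}: ~{effective_macs(dense, s):,.0f} MACs "
          f"({saved:.0f}x fewer)")
```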

[Prototype diagram labels: Light-field DVS using Prophesee IMX 636 · Foveated DVS testchip · Prototyping MPSoC XCZU15EG · HAILO-8 / Akida 1500 (ROI inference) · SNN testchip (ROI selection) · Digital foveation settings]
Harness the biological advantage in your vision pipelines
NimbleAI will deliver a functional prototype of the 3D integrated sensing-processing neuromorphic chip along with the corresponding programming tools and OS drivers (i.e., Linux/ROS) to enable users to run their AI models on it. The prototype will be flexible to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI's groundbreaking vision modality - 3D perception DVS - and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee's Metavision DVS SDK.
[Block diagram labels: Sensing · Early perception · Processing · Inference · User RTL IP · NimbleAI RTL IP · User CNN models · SNN models · PCIe M.2 modules]
Reach out to test combined use of your vision pipelines and NimbleAI technology.

Use cases
  • Hand-held medical imaging device by ULMA
  • Smart monitors with 3D perception for highly automated and autonomous cars by AVL
  • Human attention for worm-inspired neural networks by TU Wien
  • Eye-tracking sensors for smart glasses by Viewpointsystem

Follow our journey! @NimbleAI_EU | nimbleai.eu
Partners / NimbleAI coordinator: Xabier Iturbe (xiturbe@ikerlan.es)

The prototype will be flexible to accommodate user RTL IP in a Xilinx MPSoC and combines commercial neuromorphic and AI chips (e.g., HAILO, BrainChip, Prophesee) and NimbleAI 2D testchips (e.g., foveated DVS sensor and SNN engine).
Raytrix is advancing its light-field SDK to support event-based inputs, making it easy for researchers and early adopters to seamlessly integrate NimbleAI's groundbreaking vision modality - 3D perception DVS - and evolve this technology with their projects, prior to deployment on the NimbleAI functional prototype. The NimbleAI light-field SDK by Raytrix will be compatible with Prophesee's Metavision DVS SDK.


View attachment 74969

Wait a minute, @Fullmoonfever! 🤣
Don’t you remember? 👇🏻

Or was it..... :unsure:

I gotta protect my billable (wish) DD IP hours... I'll happily take any effective SP rise as payment though :ROFLMAO::LOL::ROFLMAO:

Thankfully through our collective DD efforts info is generally found on this site first most of the time.

It shouldn’t come as a surprise to you, then, that this also holds true for info on Nimble AI and our connection to them, which both @AI_Inquirer and I had posted about several times in the past…
We’ve actually known about the Nimble AI researchers’ intention to use AKD1500 for almost a year here on TSE! 🥳

Happy to receive some free shares in lieu of credit, though, in case you don’t have the heart to ask Rayz to return some of the “full credit” you so generously gave him… 🤣
 
  • Like
Reactions: 5 users

manny100

Top 20
Thanks Frangipani, looks like there are already some sales for the M.2.
Your reference to Prophesee and its financial woes in connection with Nimble certainly makes our LDA financing decision look very smart.
Could this be the product that really starts to move? Cheap, and it allows others to do their own thing - and that is its beauty.
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
We can 100% exclude that the 2020 NASA SBIR proposal which featured Akida has anything to do with NASA’s Mars 2020 mission and the Perseverance Mars Rover, given the fact that it embarked on its voyage to the Red Planet on July 30, 2020 (hence the mission name!) and landed on the Martian surface on February 18, 2021…

View attachment 75445


Apart from the fact that the timelines just don’t match - Perseverance left Planet Earth 4.5 years ago, the same year the SBIR proposal was published, while BrainChip celebrated Akida being first launched into space on March 4, 2024 (in ANT61’s Brain) - the 2020 SBIR proposal itself clearly indicates it is out of the question that it could have anything to do with the Perseverance Mars Rover’s autonomous navigation system: the research project relates to TRL (Technology Readiness Level) 1-2, which is considered very basic and speculative research. I’ll leave it up to you to figure out what TRL would be required for any mission-critical technology destined for Mars…

View attachment 75448




View attachment 75446
View attachment 75447


Well, something must be ongoing in relation to "Mars2020 Rover" because it is listed on NASA's 2024 updated inventory, which is the reason why I posted it.

The article includes a link to the full inventory which describes the Stage of System as "in production".

If you feel this information is incorrect, you can contact NASA's Chief Artificial Intelligence Officer, David Salvagnini, who put the list together.



[screenshots attached]
 
  • Like
  • Fire
  • Love
Reactions: 30 users

Xray1

Regular
[Quoting Frangipani's post above on Nimble AI and the new M.2 form factor offering]

Frangipani ............ I totally agree with you that it was a real loss for TSE to lose a very technically minded and informative poster like AI_Inquirer, who was more than happy to share his ongoing research and discussions with various other organisations that BrainChip had some kind of connection / affiliation with. But he was eventually pushed out from posting here, due mainly IMO to the poor form and lack of respect shown by other questionable posters.
 
  • Like
  • Love
Reactions: 12 users

TECH

Regular
Question... the 10 to 12 week time lag in shipping out the edge box suggests to me that the wait is linked to the wafer process; I strongly suspect we have no AKD1000 SoCs in stock. If you listen carefully to what Sean said in the recent podcast when talking about VVDN, he said we supply AKD1000 SoCs to them to fulfil any orders they receive (in large volumes). Yes, it's a guess, but 10-12 weeks isn't good enough in my opinion. We are obviously not holding any stock whatsoever, or VVDN have us way down the food chain as far as production of said boxes goes.

We all know the AI Edge Box is just a vehicle to get people into discovering what the AKIDA suite of products can currently offer, and it's not an earner as such, but promoting something and then in the same breath saying "wait 10-12 weeks" doesn't sound very practical to my business brain... purely my opinion, but neither company appears to be holding any stock??

Tech.
 
  • Like
  • Thinking
  • Fire
Reactions: 22 users
It isn't any secret that I am expecting something from Tata. Not sure where this is heading, and I haven't had time to follow up as I have to go to work. I'll put up what I have found. Not sure what it means for BRN. It was announced at CES.

https://www.tatatechnologies.com/en...-for-next-gen-software-defined-vehicles-sdvs/

This is a link to Telechips' current product. They do mention neural processing, so more research to do when I have the time.

https://www.telechips.com/view/technology/prod03

SC
 
  • Like
  • Fire
  • Love
Reactions: 13 users
Top Bottom