BRN Discussion Ongoing

Diogenese

Top 20
The plan may be to use the Hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

Yes. FPGAs may be useful for low volume niche applications where speed/power efficiency do not need to be maximally optimized, but they are primarily a development enabler and proof-of-concept tool.

The flexibility, in-field upgradability, and interoperability of Field Programmable Gate Arrays (FPGAs), combined with low power, low latency, and parallel processing capabilities make them an essential tool for developers looking to overcome these challenges and optimize their Contextual Edge AI applications.

[Yes - everybody claims low latency/low power, but they are usually an order of magnitude or more worse than Akida's SNN, and worse again compared to TENNs]

Gate arrays have been around for donkey's years. Then came programmable gate arrays, which were set in stone once the configuration was programmed. Then came FPGAs, which allow the configuration to be changed via a programmable interconnection network. Now we have AI-optimized FPGAs in which selectable NPUs are part of the FPGA. Most NPUs use MACs (a sort of electronic mathematical "abacus"). The Akida NPUs are spike/event based, which utilizes sparsity more effectively than MACs.
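The MAC-vs-event point can be illustrated with a toy example (hypothetical matrix size and sparsity level, NumPy only): a dense MAC array performs a multiply-accumulate for every weight, while an event-based scheme only touches weights whose input activation actually "spiked" (is non-zero).

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((256, 256))

# Sparse activations: most entries are zero, as in event-based coding.
acts = rng.standard_normal(256)
acts[rng.random(256) > 0.1] = 0.0  # roughly 90% of activations zeroed

# Dense MAC approach: every weight participates (256 * 256 MACs).
dense_out = weights @ acts
dense_macs = weights.size

# Event-based approach: only columns with a non-zero "spike" are touched.
events = np.nonzero(acts)[0]
event_out = weights[:, events] @ acts[events]
event_macs = weights.shape[0] * len(events)

assert np.allclose(dense_out, event_out)  # identical result
print(f"dense MACs: {dense_macs}, event MACs: {event_macs}")
```

With ~90% input sparsity the event-based path does roughly a tenth of the multiply-accumulates for a mathematically identical result, which is the mechanism behind the power/latency claims.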

But commercializing FPGAs is moving away from our stated primary business model of IP licensing. Don't get me wrong - I've always been an advocate for COTS Akida in silicon, but I don't see FPGA as the mass market commercial solution. Obviously I don't have Tony Lewis' expertise in the field, but was he talking about a COTS FPGA or a demonstrator?
 
  • Like
  • Fire
  • Love
Reactions: 14 users
  • Like
Reactions: 2 users

7für7

Top 20
It’s interesting that Germany closed up 5% yesterday, while Australia was down around 2%

🧐
 
  • Like
Reactions: 1 users

Diogenese

Top 20
The plan may be to use the Hybrid AI that was talked about prior to CES 25?
See my earlier post.
That combines our Edge with minimal cloud.
Tony L and his team may have found a way to efficiently utilise FPGAs with TENNs using hybrid AI?
There is a lot of work on FPGAs at the Edge going on ATM.
Hi Manny,

RAG (retrieval augmented generation) would be one area where cloud hybrid [as distinct from analog/digital hybrid] could be implemented. A cloud based LLM could be sub-divided into specific sub-topic models which could be downloaded as required.

For a cloud-free implementation, the main LLM would be stored on a separate memory in the same device as the NN. A variation on this would be where the main LLM could be updated via the cloud.
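A minimal sketch of the retrieval/routing step in such a RAG scheme (the sub-topic names, keyword lists, and bag-of-words scoring are all hypothetical, stdlib only): the device matches a query against a small on-device index and only then fetches the winning sub-topic model or context.

```python
from collections import Counter
import math

# Hypothetical on-device index of cloud-hosted sub-topic models.
SUBTOPICS = {
    "automotive": "lidar radar adas lane vehicle braking",
    "medical":    "patient vitals ecg anomaly diagnosis",
    "industrial": "vibration motor bearing predictive maintenance",
}

def bow(text):
    """Bag-of-words term counts (stand-in for a real embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route_query(query):
    """Retrieval step of RAG: pick the sub-topic model to download."""
    q = bow(query)
    return max(SUBTOPICS, key=lambda t: cosine(q, bow(SUBTOPICS[t])))

print(route_query("detect bearing vibration anomaly in the motor"))
# The selected sub-topic's model/context would then be fetched and
# used to augment the prompt for the local LLM.
```

A real implementation would use a learned embedding model rather than word counts, but the download-on-demand structure is the same.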
 
  • Like
  • Fire
  • Love
Reactions: 6 users

Iseki

Regular
Maybe my above question was not formulated clearly enough:

I would like to know which company's FPGA chips (e.g. Xilinx, Altera, Lattice Semiconductor, Microchip, …) were used for the "software/algorithm/IP" demos (the ones that didn't run on an Akida 1000 or similar).
I asked Tony but didn't get a response.
In the meantime I see that Frontgrade produces a couple (Certus).
I'm not sure how many synapses we need to run Akida 2.
 
  • Like
Reactions: 1 users

Doz

Regular
My guess for the supply of the FPGAs is QuickLogic, and maybe Edward got us a special deal.


 
  • Like
  • Fire
  • Thinking
Reactions: 14 users

Diogenese

Top 20
Regarding inefficiency: you're absolutely right, and even more so if we only consider energy-restricted devices or applications.

But the more I read about current trends in the semiconductor space (especially chiplets, packaging memory on top of logic, etc.), and the growing number of industry interviews about the breakneck speed of AI/ML development in general and how fast new approaches/algorithms get established, the more I wonder whether we might see a transition period (on the inference side), at least in areas where energy consumption is not the primary concern but flexibility (future-proofing) is, as long as we're not talking GPU levels. Let's say devices that are plugged in but might need updates for 10 years or so (in an area of rapid change like AI/ML): mobile network base stations, industrial wireless communication systems, etc.

I might be totally off, but I could imagine that for custom-silicon customers there might even be an FPGA chiplet option available in the future - e.g. integrated into an AMD/Intel CPU (maybe just as a safety net alternative to, or accompanying, additional tensor cores or whatever highly specialized accelerator flavor might be integrated in the future). So basically a trade-off of energy consumption and cost in favor of flexibility.

Edit - examples added:

FPGA as a chiplet:

Ultra low power FPGA (starting at 25 µW):
Achronix

Chiplets Built with Speedcore eFPGA IP


Think about that ... you get a licence for the IP to make an eFPGA so you can make an inferior version of an SNN at great expense.

... and the Lettuce one ...

iCE40 LP/HX FPGAs can be used in countless ways to add differentiation to mobile products. Shown below are four of the most common iCE40 LP/HX design categories along with specific application examples.

Enhance Application Processor Connectivity

Increase Battery Life by Offloading Timing Critical Functions

Increase System Performance through Hardware Acceleration

  • Reduce processor workload by pre-processing sensor data to generate nine-axis output
  • Rotate, combine and scale image data with efficient FPGA-based implementations
  • Use logic-based multipliers to implement high-performance digital signal filtering
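As a toy illustration of the last bullet (the coefficients and signal are made up, not from any Lattice material): a FIR filter is just a sliding dot product, i.e. the multiply-accumulate pattern that an FPGA's hardware multipliers can execute in parallel instead of the host processor.

```python
import numpy as np

def fir_filter(x, taps):
    """Direct-form FIR filter: each output sample is a dot product of
    the most recent len(taps) inputs with the coefficients - exactly
    the MAC workload that gets offloaded to FPGA multipliers."""
    return np.convolve(x, taps, mode="valid")

# 5-tap moving average (illustrative coefficients only).
taps = np.ones(5) / 5
signal = np.array([0, 0, 5, 5, 5, 5, 5, 0, 0, 0], dtype=float)
print(fir_filter(signal, taps))  # smoothed edges of the pulse
```

In hardware, each tap maps to one multiplier and the sum to an adder tree, so all five MACs happen in a single clock cycle.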



Not sure you could use any of those to make an Akida SNN.
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 6 users

Diogenese

Top 20
  • Haha
  • Like
Reactions: 8 users

Guzzi62

Regular
Page 40:

The Role of Neuromorphic Chips

Neuromorphic chips represent an emerging technology that is designed to mimic the human brain's neural architecture. These chips are inherently efficient at processing sensory data in real time due to their event-driven nature. Therefore, they hold promise to advance edge AI based on a new wave of low-power solutions that will be handling complex tasks like pattern recognition or anomaly detection. In the next few years, neuromorphic chips will become embedded in smartphones, enabling real-time AI capabilities without relying on the cloud. This will allow tasks like speech recognition, image processing, and adaptive learning to be performed locally on these devices with minimal power consumption. Companies like Intel and IBM are advancing neuromorphic chip designs (e.g., Loihi 2 [71] and TrueNorth [72], respectively) that consume 15–300 times less energy than traditional chips while at the same time delivering exceptional performance.

WTF!!!


Page 64:

Neuromorphic Chips + 6G + Quantum Computing

The long-term trajectory of neuromorphic computing extends beyond existing edge AI systems. The integration of neuromorphic AI with 6G networks and quantum computing is expected to enable ultra-low-latency, massively parallel processing at the edge. BrainChip's Akida processor and state-space models, such as Temporal Event-Based Neural Networks (TENNs), are early indicators of this direction, demonstrating the feasibility of lightweight, event-driven AI architectures for high-speed, real-time applications [120].

As scaling challenges are addressed, neuromorphic chips will move from niche applications to mainstream adoption, powering the next generation of autonomous machines, decentralized AI, and real-time adaptive systems. The future of edge AI will depend on how efficiently intelligence is deployed. Neuromorphic computing is positioned to make that shift possible.

Whole page 65 is about Brainchip:

EDGE AI INSIGHTS: Brainchip

New Approaches for GenAI Innovation at the Edge

Recent advances in LLM training algorithms, such as Deepseek's release of V3, have rattled the traditional AI marketplace and challenged the belief that large language models require high computational investments to achieve performant results. This demonstrates that trying a new approach can have a big impact on a key challenge the AI market faces: high computational power and resultant costs to train and execute LLM models. New approaches must be considered to address similar challenges at the edge, such as the large compute and memory bandwidth requirements of transformer-based GenAI models that result in costly and power-hungry edge AI devices.


State Space Models: More Efficient than Transformers

State Space Models (SSMs), with high-performance models like Mamba, have emerged in the last two years in cloud-based LLM applications to address the high computational complexity and power required to execute transformers in data centers. Now, there is growing interest in using SSMs to implement LLMs at the edge and replace transformers, as they can achieve comparable performance with fewer parameters and less overall complexity.

Like transformers, SSMs can process long sequences (context windows). However, their complexity is on the order of the sequence length, O(L), compared to the order of the square of the sequence length, O(L²), for transformers, and with 1/3 as many parameters. Not to mention that SSM implementations are less costly and require less energy. These models leverage efficient algorithms that deliver comparable or even superior performance. BrainChip's unique approach is to constrain an SSM model to better fit physical time series or streaming data and achieve higher model accuracy and efficiency. BrainChip's innovation of SSM models constrained to streaming data is called Temporal Enabled Neural Networks, or TENNs. Combined with optimized LLM training, they pave the way for a new category of price and performance LLM and VLM solutions at the edge.
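The O(L) vs O(L²) contrast can be seen in a minimal linear state-space recurrence (a two-state toy system with made-up parameters, not TENNs weights): each new input costs one fixed-size state update, independent of how long the sequence already is.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space recurrence: h[t] = A h[t-1] + B x[t], y[t] = C h[t].
    One constant-cost update per step, so a length-L sequence costs O(L) -
    versus O(L^2) pairwise attention in a transformer."""
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:            # exactly L iterations, fixed work each
        h = A @ h + B * xt  # state update does not grow with context length
        ys.append(C @ h)
    return np.array(ys)

# Tiny illustrative system (parameters are invented for the demo).
A = np.array([[0.9, 0.0], [0.1, 0.8]])
B = np.array([1.0, 0.0])
C = np.array([0.0, 1.0])
y = ssm_scan(np.ones(6), A, B, C)
print(y.shape)  # one output per input step
```

Because the entire context is compressed into the fixed-size state `h`, memory stays constant as well, which is what makes this attractive for edge devices.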

Deploying LLMs at the Edge: Efficiency and Scalability

BrainChip addresses the challenge of deploying LLMs at the edge by using SSMs that minimize computations, model size, and memory bandwidth while producing state-of-the-art (SOTA) accuracy and performance results to support applications like real-time translation, contextual voice commands, and complete LLM models with RAG extensions. BrainChip can condense the software model and the implementation into a tiny hardware design. A specialized LLM design can execute edge LLM inference in under a watt and for a few dollars, using a dedicated IP core that can be integrated into the customer's SoCs. This enables a whole new class of consumer products that do not require costly cloud connectivity and services.

This ultra-low-power execution makes edge LLMs viable for always-on devices like smart assistants and wearables. Cloud LLM services are neither private nor personalized. A completely local edge AI design enables real-time GenAI capabilities without compromising privacy, ensuring users have greater control over their data and enabling a new class of personalization you can bring wherever you go. Emerging designs like BrainChip's Akida core offer a scalable and efficient solution for engineers and product developers who want to integrate advanced AI capabilities into private, personalized consumer products, including home, mobile, and wearable products.


I might have missed something, it was a quick skim!

BrainChip is one of the many sponsors, hence the mention at the end.
 
  • Like
  • Fire
  • Love
Reactions: 23 users

manny100

Top 20
Hi Manny,

RAG (retrieval augmented generation) would be one area where cloud hybrid [as distinct from analog/digital hybrid] could be implemented. A cloud based LLM could be sub-divided into specific sub-topic models which could be downloaded as required.

For a cloud-free implementation, the main LLM would be stored on a separate memory in the same device as the NN. A variation on this would be where the main LLM could be updated via the cloud.
Tony has something up his sleeve. Looks like we will have to wait and see. May well be using RAG.
Hi Manny,

Yes. FPGAs may be useful for low volume niche applications where speed/power efficiency do not need to be maximally optimized, but they are primarily a development enabler and proof-of-concept tool.

The flexibility, in-field upgradability, and interoperability of Field Programmable Gate Arrays (FPGAs), combined with low power, low latency, and parallel processing capabilities make them an essential tool for developers looking to overcome these challenges and optimize their Contextual Edge AI applications.

[Yes - everybody claims low latency/low power, but they are usually an order of magnitude or more worse than Akida's SNN, and worse again compared to TENNs]

Gate arrays have been around for donkey's years. Then came programmable gate arrays, which were set in stone once the configuration was programmed. Then came FPGAs, which allow the configuration to be changed via a programmable interconnection network. Now we have AI-optimized FPGAs in which selectable NPUs are part of the FPGA. Most NPUs use MACs (a sort of electronic mathematical "abacus"). The Akida NPUs are spike/event based, which utilizes sparsity more effectively than MACs.

But commercializing FPGAs is moving away from our stated primary business model of IP licensing. Don't get me wrong - I've always been an advocate for COTS Akida in silicon, but I don't see FPGA as the mass market commercial solution. Obviously I don't have Tony Lewis' expertise in the field, but was he talking about a COTS FPGA or a demonstrator?
Ta, hopefully the expected tech advance ann will be a beauty. We should get the ann before the AGM.
Given all the client and tech news since September '24, I also expect we will see news from BRN concerning value prior to the AGM. This will also assist in getting support for the possible US move.
I also expect we will likely get some positive client news before the AGM.
Looking forward to April onwards.
 
  • Like
Reactions: 12 users
Comment from crapper

Coby at Weebit said today in a webinar that they are talking with BrainChip on a few different projects…
Thought that was interesting.
 
  • Like
  • Fire
  • Love
Reactions: 45 users
Tony has something up his sleeve. Looks like we will have to wait and see. May well be using RAG.

Ta, hopefully the expected tech advance ann will be a beauty. We should get the ann before the AGM.
Given all the client and tech news since September '24, I also expect we will see news from BRN concerning value prior to the AGM. This will also assist in getting support for the possible US move.
I also expect we will likely get some positive client news before the AGM.
Looking forward to April onwards.
 
  • Like
  • Wow
Reactions: 6 users

TopCat

Regular
These positions are supervised by Chang Gao and NXP Semiconductors.






I am excited to announce two fully-funded PhD openings in AI algorithm-hardware co-design in my group (https://www.tudemi.com) at TU Delft, Netherlands:

🔍 Position 1: Full-Custom Digital ASIC Design for Edge AI Design energy-efficient chips for real-time AI in IoT (https://lnkd.in/ehZx3Gcj)

🔍 Position 2: Algorithm-Hardware Co-Design for AI-Enhanced Transceivers (with NXP Semiconductors, https://lnkd.in/evh9hUvP)

We offer a collaborative environment bridging academia and industry, a competitive salary for a 4-year PhD position (~€2.9k–3.7k/month), and visa sponsorship.

If you are interested, please apply by April 25, 2025.
 
  • Like
  • Fire
Reactions: 6 users

Slade

Top 20
Comment from crapper

Coby at Weebit said today in a webinar that they are talking with BrainChip on a few different projects…
Thought that was interesting.
Nice. If it was said then I guess it was said at the investor briefing. Perhaps in the Q & A at the end. I haven’t had a chance to listen.

 
  • Like
Reactions: 13 users

FJ-215

Regular
Did somebody say FPGA?????

From the Investor Presentation from last years surprise CR...

Investor Presentation 25/07/2024

"Use of Funds

Proceeds from the Placement and SPP will be used primarily to support the ongoing commercialisation of Akida 2.0 technology and the development, productization and commercialization of the new TENNs product. This represents the next expansion in BrainChip's product portfolio and builds on its existing leadership position in the field of neuromorphic technology.

Non-deal related use of funds includes:
• Accelerate development of TENNs technology and derivative products & models for sales opportunities and product portfolio expansion.
• Development of an Akida 2.0 derivative to support LLM (Large Language Models) on edge devices.
• Development of a cloud-based FPGA system to run Akida 2.0 for customer evaluation purposes.
• Ongoing investment in research & development to analyze new and emerging technology opportunities for novel edge AI applications and product roadmap expansion.

Investor Presentation 2024"



It's a marketing tool, not a commercial product!!!
 
  • Like
  • Love
  • Thinking
Reactions: 20 users

FJ-215

Regular
Did somebody say FPGA?????

From the Investor Presentation from last years surprise CR...

Investor Presentation 25/07/2024

"Use of Funds

Proceeds from the Placement and SPP will be used primarily to support the ongoing commercialisation of Akida 2.0 technology and the development, productization and commercialization of the new TENNs product. This represents the next expansion in BrainChip's product portfolio and builds on its existing leadership position in the field of neuromorphic technology.

Non-deal related use of funds includes:
• Accelerate development of TENNs technology and derivative products & models for sales opportunities and product portfolio expansion.
• Development of an Akida 2.0 derivative to support LLM (Large Language Models) on edge devices.
• Development of a cloud-based FPGA system to run Akida 2.0 for customer evaluation purposes.
• Ongoing investment in research & development to analyze new and emerging technology opportunities for novel edge AI applications and product roadmap expansion.

Investor Presentation 2024"



It's a marketing tool, not a commercial product!!!
Yes,
Boo, hiss!!
 
  • Like
Reactions: 1 users

Getupthere

Regular
Nice. If it was said then I guess it was said at the investor briefing. Perhaps in the Q & A at the end. I haven’t had a chance to listen.

 

Getupthere

Regular
Weebit CEO is incredible, and we’re left with Sean.
 
  • Like
  • Fire
Reactions: 12 users
Did somebody say FPGA?????

From the Investor Presentation from last years surprise CR...

Investor Presentation 25/07/2024

"Use of Funds

Proceeds from the Placement and SPP will be used primarily to support the ongoing commercialisation of Akida 2.0 technology and the development, productization and commercialization of the new TENNs product. This represents the next expansion in BrainChip's product portfolio and builds on its existing leadership position in the field of neuromorphic technology.

Non-deal related use of funds includes:
• Accelerate development of TENNs technology and derivative products & models for sales opportunities and product portfolio expansion.
• Development of an Akida 2.0 derivative to support LLM (Large Language Models) on edge devices.
• Development of a cloud-based FPGA system to run Akida 2.0 for customer evaluation purposes.
• Ongoing investment in research & development to analyze new and emerging technology opportunities for novel edge AI applications and product roadmap expansion.

Investor Presentation 2024"



It's a marketing tool, not a commercial product!!!
I have said many times that my tech knowledge would fit on a pinhead, but would that be linked to DeGirum?
SC
 
  • Like
Reactions: 1 users
Top Bottom