BRN Discussion Ongoing

Round 1 between Hailo & BRN has been won by BRN. Renesas, which is partnered with both, chose BRN's IP for its new RA-series MCU based on the ARM Cortex-M85.

Round 1 between SynSense & BRN appears to be going in favour of BRN. Prophesee is partnered with both & appears likely to choose the BRN IP. BRN's is a superior product, more suited to Prophesee's Metavision sensor, & SynSense released Speck using IniVation's (a Prophesee competitor) sensor instead of Prophesee's sensor.
Absolutely correct.

Just think about what certain posters would now be saying if the SynSense shoe was on the Brainchip foot.

What if Brainchip, like SynSense, had hooked up with Prophesee first?

Then six months later Prophesee’s CEO came out and said until we hooked up with SynSense we were only half the solution and could not realise our technology’s potential.

These posters would be running around like Chicken Little claiming the sky was falling.

Instead, Prophesee by logical extension pours cold water all over SynSense and lauds Brainchip's AKIDA as its perfect match, the Yin to its Yang, the Juliet to its Romeo, its winged keel to its 12-metre America's Cup challenger, and they ignore the significance.

Can this truly be a genuine reaction, or one motivated by other interests?

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 43 users
There's also Hailo & GrAI Matter Labs. They sell chips, so it will be more difficult for them to scale, as you have mentioned.



Akida appears to be unique with its learning capabilities, & the IP business model can be scaled rapidly.

Akida could become the dominant processor, similar to ARM, due to the lack of competitors & a superior product.

Intel could advance their Loihi chips & IBM could advance their TrueNorth chips in a few years, providing more competition, along with a few new start-ups, should their technology prove to be as good as or superior to Akida. However, Akida should have a commanding market share by the time the others catch up with their tech.

In the next few years we could probably allow 50% market share for BRN due to limited competition. It will also depend on whether or not the majority of AI applications require Akida's learning feature. There may be many basic AI applications that don't require the learning feature.

I am being conservative with 10-20% long term market share & will not be surprised if it's much higher.
I think GrAI Matter Labs really have something going for them. They also sell IP through ARM and Synopsys. They have a configuration that supports up to 18 million neurons. This is surprising considering that a couple of years ago they only supported around 200,000 neurons. I know the neuron count is not a way to measure performance, but it is an indication of the size of network they can handle.

It looks like it's going to be a fight between GrAI Matter Labs and Brainchip.

Being a public company probably helps spread the word:

 
Last edited:
  • Like
  • Fire
Reactions: 10 users

TechGirl

Founding Member
Just received BrainChip March newsletter & this has piqued my interest.

On the Road Again​

The BrainChip team traveled to Nuremberg, Germany for EmbeddedWorld 2023, where we showcased our 2nd generation Akida™ platform. Akida processors power next-generation edge AI in a range of industrial, home, automotive, and scientific environments. BrainChip was a member at the TinyML.org pavilion along with our partner, Edge Impulse, demonstrating Akida in action. Coverage from the event will be picking up in the coming days and weeks regarding Akida’s fully digital, customizable, event-based neural processor and IP, which is ideal for advanced AI/ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS/autonomous systems. This trip kickstarted a pretty active spring of events.
 
  • Like
  • Fire
  • Love
Reactions: 50 users

DK6161

Regular
Just received BrainChip March newsletter & this has piqued my interest.

On the Road Again​

The BrainChip team traveled to Nuremberg, Germany for EmbeddedWorld 2023, where we showcased our 2nd generation Akida™ platform. Akida processors power next-generation edge AI in a range of industrial, home, automotive, and scientific environments. BrainChip was a member at the TinyML.org pavilion along with our partner, Edge Impulse, demonstrating Akida in action. Coverage from the event will be picking up in the coming days and weeks regarding Akida’s fully digital, customizable, event-based neural processor and IP, which is ideal for advanced AI/ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS/autonomous systems. This trip kickstarted a pretty active spring of events.
My interest peaked based on what was being sampled in this photo. No offence to Akida, but my tastebuds can go further and identify which one is the best 🤣.
Note however that, unlike Akida, I consume a lot of energy to be this good 🤣
 
  • Like
Reactions: 4 users

Diogenese

Top 20
I think GrAI Matter Labs really have something going for them. They also sell IP through ARM and Synopsys. They have a configuration that supports up to 18 million neurons. This is surprising considering that a couple of years ago they only supported around 200,000 neurons. I know the neuron count is not a way to measure performance, but it is an indication of the size of network they can handle.

It looks like it's going to be a fight between GrAI Matter Labs and Brainchip.

Being a public company probably helps spread the word:

Yes. I have long thought that Grai Matter and Hailo would be likely competitors.

from a 2020 article:

Spiking Neural Networks: Research Projects Or Commercial Products?
Opinions differ widely, but in this space that isn’t unusual.
MAY 18TH, 2020 - BY: BRYON MOYER

...
All of these coding approaches aside, GrAI Matter uses a more direct approach. “We encode values directly as numbers – 8- or 16-bit integers in GrAI One or Bfloat16 in our upcoming chip. This is a key departure from other neuromorphic architectures, which have to use rate or population or time or ensemble codes. We can use those, too, but they are not efficient,” said Tapson (GrAI Matter CSO).
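Tapson's distinction can be sketched in a few lines: a rate code spends a whole window of timesteps to convey one value, while direct coding carries the value inside a single event message. This toy comparison is my own illustration, not GrAI Matter code; the window length and 8-bit range are arbitrary.

```python
import random

def rate_encode(value, max_value=255, window=255):
    """Rate coding: convey a value as the number of spikes
    emitted across a fixed window of timesteps."""
    p = value / max_value
    return [1 if random.random() < p else 0 for _ in range(window)]

def rate_decode(spikes, max_value=255):
    return round(sum(spikes) / len(spikes) * max_value)

def direct_encode(value):
    """GrAI-style direct coding: the value itself rides in a
    single event message as an 8-bit integer."""
    return [("event", value & 0xFF)]

value = 200
rate_events = rate_encode(value)
direct_events = direct_encode(value)

# Rate coding needs a whole window of potential spike slots;
# direct coding needs exactly one event.
print(len(rate_events), len(direct_events))  # 255 1
```

The efficiency argument falls out of the event counts: one message versus up to 255 spike slots per transmitted value.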

GrAI Matter uses a time-multiplexed SNN arrangement, where a shared processor plays an integral part in the processing of each event.

GrAI Matter:
WO2020025680A1 DATA PROCESSING MODULE, DATA PROCESSING SYSTEM AND DATA PROCESSING METHOD



A neuromorphic processing module (1) for time-multiplexed execution of a spiking neural network is provided that comprises a plurality of neural units. Each neural unit is capable of assuming a neural state, and has a respective addressable memory entry in a neuron state memory unit (11) for storing state information specifying its neural state. The state information for each neural unit is computed and updated in a time-multiplexed manner by a processing facility (10, neural controller) in the processing module, depending on event messages destined for said neural unit. When the processing facility computes that an updated neural unit assumes a firing state, it resets the updated neural unit to an initial state, accesses a respective entry for the updated neural unit in an output synapse slice memory unit, and retrieves from said respective entry an indication for a respective range of synapse indices, wherein the processing facility for each synapse index in the respective range accesses a respective entry in a synapse memory unit, retrieves from the synapse memory unit synapse property data and transmits a firing event message to each neural unit associated with said synapse property data.

...
One approach is to mimic such a complex system with a time-multiplexed design wherein a plurality of neural units share a processing facility. Since digital hardware can run orders of magnitudes faster than the speed at which biological neurons work the shared processing facility can realistically emulate neuron behavior, while this approach saves space to implement a higher density of virtual neurons and their synapses.

...

1. A neuromorphic processing module (1) for time-multiplexed execution of a spiking neural network comprising a plurality of neural units, each neural unit being capable of assuming a neural state selected from a plurality of states comprising an initial state, one or more transitional states and a firing state, each neural unit having a respective addressable memory entry in a neuron state memory unit (11) for storing state information specifying its neural state, the state information for each neural unit being computed and updated in a time-multiplexed manner by a processing facility (10) incorporated in said processing module, depending on event messages destined for said neural unit, wherein

the processing facility

upon computing that an updated neural unit assumes the firing state,

resets the updated neural unit to the initial state,

accesses a respective entry for the updated neural unit in an output synapse slice memory unit,

and retrieves from said respective entry an indication for a respective range of output synapse indices,

wherein the processing facility for each output synapse index in the respective range:

accesses a respective entry in an output synapse memory unit (13),

retrieves output synapse property data from said respective entry,

the output synapse property data specifying a transmission delay and a respective input synapse index corresponding to a respective entry in an input synapse memory unit (14), the respective entry in the input synapse memory unit comprising a reference to an associated neural unit; and

transmits a firing event message to the associated neural unit with the specified delay.
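Stripped of the patentese, the claim describes a single processing loop that updates per-neuron state entries from queued event messages, resets a neuron that crosses threshold, and fans out delayed events via a synapse table. A minimal sketch of that scheme (all sizes, weights, and delays invented for illustration):

```python
from collections import deque

# Minimal sketch of the time-multiplexed scheme in the claim: one shared
# processing loop updates per-neuron state entries from event messages,
# and a firing neuron is reset and fans out delayed events via a synapse
# table. All values here are illustrative, not from the patent.

THRESHOLD = 1.0

# "Neuron state memory unit": one addressable entry per neural unit.
neuron_state = {0: 0.0, 1: 0.0, 2: 0.0}

# "Synapse memory": for each neuron, its output synapses as
# (target neuron, weight, transmission delay in ticks).
synapses = {0: [(1, 1.0, 1), (2, 1.0, 2)],
            1: [(2, 1.0, 1)],
            2: []}

# Event queue of (deliver_at_tick, target_neuron, weight).
events = deque([(0, 0, 1.2)])  # external stimulus to neuron 0
fired = []

for tick in range(10):
    pending = [e for e in events if e[0] == tick]
    for e in pending:
        events.remove(e)
        _, target, weight = e
        neuron_state[target] += weight          # time-multiplexed update
        if neuron_state[target] >= THRESHOLD:   # neuron assumes firing state
            fired.append((tick, target))
            neuron_state[target] = 0.0          # reset to initial state
            for dst, w, delay in synapses[target]:
                events.append((tick + 1 + delay, dst, w))

print(fired)  # [(0, 0), (2, 1), (3, 2), (4, 2)]
```

The single `for tick` loop plays the role of the shared processing facility: many virtual neurons, one physical updater.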
 
  • Like
  • Fire
Reactions: 22 users

Diogenese

Top 20
Just received BrainChip March newsletter & this has piqued my interest.

On the Road Again​

The BrainChip team traveled to Nuremberg, Germany for EmbeddedWorld 2023, where we showcased our 2nd generation Akida™ platform. Akida processors power next-generation edge AI in a range of industrial, home, automotive, and scientific environments. BrainChip was a member at the TinyML.org pavilion along with our partner, Edge Impulse, demonstrating Akida in action. Coverage from the event will be picking up in the coming days and weeks regarding Akida’s fully digital, customizable, event-based neural processor and IP, which is ideal for advanced AI/ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS/autonomous systems. This trip kickstarted a pretty active spring of events.
There's that "Built on ARM Official Partners" gizmo again ... and astronaut Spike.
 
  • Like
  • Love
Reactions: 16 users

KMuzza

Mad Scientist
There's that "Built on ARM Official Partners" gizmo again ... and astronaut Spike.
YES - A BRAINCHIP PCIe - check out the library - it is all there.

Wait until the Akida 1500 has been released and watch the customers sign up.

AKIDA BALLISTA UBQTS
 
  • Like
Reactions: 14 users

KMuzza

Mad Scientist
Oh yes - and it's built on ARM - Official Partner. (y)

AKIDA BALLISTA UBQTS
 
  • Like
Reactions: 6 users

mrgds

Regular
There's that "Built on ARM Official Partners" gizmo again ... and astronaut Spike.
I've always seen the astronaut as "Ken".
Thought Akida was "Spike" ............................:unsure:
And I see they have learnt their lesson with the tablecloth ............................😂

AKIDA BALLISTA
 
  • Like
  • Haha
Reactions: 12 users

Violin1

Regular
I will visit the shop next week! I will be there for a business trip! Unfortunately, the cherry blossom season was very early this year! 🙋🏻‍♂️
There is still snow on the beach in Aomori, so the sakura are not blooming yet. Even next week may be too early.
 
  • Like
Reactions: 3 users

Fox151

Regular
Just received BrainChip March newsletter & this has piqued my interest.

On the Road Again​

The BrainChip team traveled to Nuremberg, Germany for EmbeddedWorld 2023, where we showcased our 2nd generation Akida™ platform. Akida processors power next-generation edge AI in a range of industrial, home, automotive, and scientific environments. BrainChip was a member at the TinyML.org pavilion along with our partner, Edge Impulse, demonstrating Akida in action. Coverage from the event will be picking up in the coming days and weeks regarding Akida’s fully digital, customizable, event-based neural processor and IP, which is ideal for advanced AI/ML devices such as intelligent sensors, medical devices, high-end video-object detection, and ADAS/autonomous systems. This trip kickstarted a pretty active spring of events.
I think I know why the share price is falling - someone left the table cloth behind! Someone should contact BRN and ask where the ASX announcement about the missing table cloth is.
 
  • Haha
  • Like
Reactions: 13 users

Makeme 2020

Regular



Edge ML Series San Jose​


DATE & TIME (PT)​

March 27, 2023 9:00 AM

LOCATION​

San Jose

The Edge ML Series is coming to San Jose on March 27th! Mark your calendars and request your invite now.
This exclusive, in-person, one-day event will explore the benefits of edge machine learning, ways to differentiate your products with embedded intelligence, and how to deliver value in less time while lowering operational cost using AI tools like Edge Impulse. Featuring:
• Keynotes from industry leaders
• Hands-on workshops
• Customer stories
• Insights on deploying ML solutions at scale
• Demos
• Networking opportunities

Agenda (PT)

8:30–9:00 Registration and coffee
9:00–10:40 Keynotes

  • Welcome and housekeeping (10 min)
  • “Demystifying Edge ML” — Edge Impulse keynote (30 min)
    The fast-evolving ecosystem of edge ML includes silicon, sensors, connectivity, and more. We'll guide you through the world of edge ML and uncover its potential for your business.
  • "Advancing Intelligence at the Edge with Texas Instruments" — Texas Instruments keynote (30 min)
  • “Edge AI: Ready When You Are!” — BrainChip keynote (30 min)
    AI promises to provide new capabilities, efficiencies, and economic growth, all with the potential for improving the human condition. There are numerous challenges to delivering on this, in particular the need for performant, secure edge AI. We'll describe the readiness of the industry to deliver edge AI, and the path to the imminent transition.
10:40–11:00 Coffee break (demo area)
11:00–12:00 Seminars

  • Edge Impulse — use case focus (20 min)
  • “Why AI acceleration matters for Edge Devices” — Alif (20 min)
    Three key parameters to consider when selecting a hardware platform for AI-enabled edge ML are performance, power consumption, and price. This talk will help you maximize your project's chance of success by showing you how to maximize the performance parameter while keeping the others in check.
  • “The advantages of Nordic’s ultra-low-power wireless solutions and machine learning” — Nordic Semiconductors (20 min)
    Nordic Semiconductor creates low-power wireless devices that utilize popular protocols including Bluetooth Low Energy, Wi-Fi 6, and cellular IoT. Although optimizing the radio transmitter can only save power up to a point, implementing ML on the processor to detect and send only relevant data can significantly further reduce power requirements on SoCs and SiPs. Discover how Nordic's wireless technology can leverage ML to achieve ultra-low power - without needing a dedicated machine learning core.
12:00–13:00 Lunch (demo area)
  • Demos from TI, BrainChip, Alif, Nordic, Sony, MemryX, and NovTech
13:00–15:30 Workshops
  • "Hands-On With the TDA4VM" — Texas Instruments workshop
    (75 min)
  • Break (15 min)
  • “Enabling the Age of Intelligent Machines” — Alif workshop
    (60 min)
    Alif Semiconductor and Edge Impulse will demonstrate how anyone can create, train, tune, and deploy advanced machine learning models on the next generation Ensemble family of AI accelerated microcontrollers.
15:30–16:00 Wrap and close
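Nordic's power argument from the seminar above can be made concrete with a toy calculation: gate the radio behind a tiny on-device model so only flagged windows are transmitted. The "classifier" here is a crude amplitude threshold and the energy cost per byte is a made-up placeholder, not a Nordic figure.

```python
# Illustrative sketch of the ML-gated-transmission idea: send only windows
# a tiny on-device model flags as relevant, instead of streaming everything.
# The threshold "classifier" and energy numbers are invented placeholders.

RADIO_UJ_PER_BYTE = 1.0   # hypothetical radio energy cost
SAMPLE_BYTES = 2

def interesting(window):
    """Stand-in for a tiny on-device ML model: flag windows whose
    peak-to-peak amplitude suggests an event worth reporting."""
    return max(window) - min(window) > 10

sensor = [[0, 1, 2, 1], [0, 30, 5, 1], [2, 2, 1, 2], [0, 25, 40, 3]]

stream_all = sum(len(w) for w in sensor) * SAMPLE_BYTES * RADIO_UJ_PER_BYTE
ml_gated = sum(len(w) for w in sensor if interesting(w)) * SAMPLE_BYTES * RADIO_UJ_PER_BYTE

print(stream_all, ml_gated)  # 32.0 16.0
```

The saving scales with how rarely "interesting" events occur; for sparse events, the radio stays quiet almost all the time.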



More events​



APR 25, 2023: Edge ML Series Chicago​

Chicago
Invite-only, in-person-only, one-day event series to explore how to harness real-world data to deploy advanced edge machine learning (ML) solutions at scale.


MAR 22, 2023: Unlock the Power of AI with Edge Impulse Studio and Renesas’ RZ/V2L​

Virtual
Join Renesas and Edge Impulse to learn about cutting-edge innovation with Edge Impulse Studio and Renesas' RZ/V2L, and much more!


MAY 17, 2023: Edge ML Series London​

London
Invite-only, in-person-only, one-day event series to explore how to harness real-world data to deploy advanced edge machine learning (ML) solutions at scale.


 
  • Like
  • Fire
Reactions: 18 users

Foxdog

Regular
Absolutely correct.

Just think about what certain posters would now be saying if the SynSense shoe was on the Brainchip foot.

What if Brainchip, like SynSense, had hooked up with Prophesee first?

Then six months later Prophesee’s CEO came out and said until we hooked up with SynSense we were only half the solution and could not realise our technology’s potential.

These posters would be running around like Chicken Little claiming the sky was falling.

Instead, Prophesee by logical extension pours cold water all over SynSense and lauds Brainchip's AKIDA as its perfect match, the Yin to its Yang, the Juliet to its Romeo, its winged keel to its 12-metre America's Cup challenger, and they ignore the significance.

Can this truly be a genuine reaction, or one motivated by other interests?

My opinion only DYOR
FF

AKIDA BALLISTA
They'd be freaking out, FF. But we do need to see this validation of AKIDA by our partners converted into revenue streams. It doesn't even need to be that much to start with - just sufficient to cover annual operating expenses. It would certainly take the pressure off and de-risk the company in the eyes of the general investment community. We might then be on the way to the magic figure of $2.75 🤞😉
 
  • Like
  • Fire
Reactions: 12 users

cosors

👀
A compromise proposal by the EU Commission regarding e-fuels states that the vehicle must recognise whether e-fuels or conventional fuels are being refuelled. I know of a sensor system that can taste and smell. A new field for Akida, when it comes? Brand new, submitted today.
 
  • Like
  • Fire
  • Love
Reactions: 19 users

cosors

👀
And an old, beautiful realization that is probably unknown to many here. Unfortunately it is impossible for me to take a better photo of this magazine myself. Speed records from 1898 onward, which began with EVs. You know how the story went.

Elektrowagen means EV.
It began with EVs ;)
 
Last edited:
  • Like
  • Love
Reactions: 10 users

mrgds

Regular
I think I know why the share price is falling - someone left the table cloth behind! Someone should contact BRN and ask where the ASX announcement about the missing table cloth is.
No, no, no, @Fox151 ......................... you're reading it wrong.
This is a "huge DOT join" with TESLA ......................... you see, Tesla's mantra is "No part is the best part".
So the subliminal message to all shareholders is why we've ditched the tablecloth !!! 😂😂😂

An interesting use of words too in the recent podcast from our CEO.
Quote ............... "Maniacal Focus" ....................... really?? :eek:
Let's hope it's with reference to #2 😲

 
  • Like
  • Haha
Reactions: 10 users

Gemmax

Regular
And an old, beautiful realization that is probably unknown to many here. Unfortunately it is impossible for me to take a better photo of this magazine myself. Speed records from 1898 onward, which began with EVs. You know how the story went.

Elektrowagen means EV.
It began with EVs ;)
FWIW. A young Ferdinand Porsche exhibited a hybrid Austro Daimler car at the 1900 World Exposition in Paris. It had electric motors on the front wheels and an internal combustion engine on the rear. ( Truly ahead of his time.)
 
  • Like
  • Love
  • Fire
Reactions: 13 users

Tothemoon24

Top 20
This reduced-in-size article is from Sally Ward-Foxton at EE Times.


Embedded World 2023




Also on the STMicro booth were another couple of fun demos, including a washing machine that could tell how much laundry was in the machine in order to optimize the amount of water added. This system is sensorless; it is based on AI analysis of the current required to drive the motor, and predicted the weight of the 800g laundry load to within 30g. A robot vacuum cleaner equipped with a time-of-flight sensor also used AI to tell what type of floor surface it was cleaning, to allow it to select the appropriate cleaning method.
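The washing-machine demo's "sensorless" trick, estimating load weight purely from motor drive current, can be illustrated with a toy calibration fit. The current/weight pairs and the linear model below are invented for illustration; the actual demo runs an AI model on the current waveform.

```python
# Hedged sketch of the "sensorless" idea: estimate laundry weight from
# motor drive current alone by fitting a simple model to labelled runs.
# Data and the linear model are invented; the real demo analyses the
# current waveform with an AI model.

# (mean drive current in A, known load in g) from hypothetical calibration runs
runs = [(0.50, 200), (0.65, 400), (0.80, 600), (0.95, 800), (1.10, 1000)]

n = len(runs)
sx = sum(i for i, _ in runs)
sy = sum(g for _, g in runs)
sxx = sum(i * i for i, _ in runs)
sxy = sum(i * g for i, g in runs)

# Ordinary least squares for grams = a * current + b
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def estimate_load_g(current_a):
    return a * current_a + b

print(round(estimate_load_g(0.95)))  # 800
```

A real system would fit far richer features of the current signal, which is how the demo gets within 30 g of an 800 g load.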

Renesas

Next stop was the Renesas booth, to see the Arm Cortex-M85 up and running in a not-yet-announced product (due to launch in June). This is the first time EE Times has seen AI running on a Cortex-M85 core, which was announced by Arm a year ago.


The M85 is a larger core than the Cortex-M55, but both are equipped with Helium—Arm’s vector extensions for the Cortex-M series—ideal for accelerating ML applications. Renesas’ figures had the M85 running inference 5.3× faster than a Renesas M7-based design, though the M85 was also running faster (480 MHz compared with 280).
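It's worth separating how much of that 5.3× comes from the clock bump versus per-cycle (Helium/architecture) gains; a quick back-of-the-envelope from the quoted figures:

```python
# Sanity check on the Renesas figures above: how much of the reported
# 5.3x inference speedup over the M7-based design comes from clock rate
# versus per-cycle (Helium/architecture) gains.

speedup_total = 5.3
f_m85, f_m7 = 480e6, 280e6

clock_factor = f_m85 / f_m7                     # gain from frequency alone
per_cycle_factor = speedup_total / clock_factor # clock-normalized gain

print(round(clock_factor, 2), round(per_cycle_factor, 2))  # 1.71 3.09
```

So roughly a 3× per-clock improvement remains after normalizing for frequency, consistent with the vector extensions doing most of the work.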

Renesas’ demo had Plumerai’s person-detection model up and running in 77 ms per inference.

Renesas' not-yet-announced Cortex-M85 device is the first we've seen running AI on the M85, shown here running Plumerai's people-detection model. (Source: EE Times/Sally Ward-Foxton)
Renesas field application engineer Stefan Ungerechts also gave EE Times an overview of the DRP-AI (dynamically reconfigurable processor for AI), Renesas’ IP for AI acceleration. A demo of the RZ/V2L device, equipped with a 0.5 TOPS @ FP16 (576 MACs) DRP-AI engine, was running tinyYOLOv2 in 27 ms at 500 mW (1 TOPS/W). This level of power efficiency means no heat sink is required, Ungerechts said.

The DRP-AI is, in fact, a two-part accelerator; the dynamically reconfigurable processor handles acceleration of non-linear functions, then there is a MAC array alongside it. Non-linear functions in this case might be image preprocessing functions or model pooling layers of a neural network. While the DRP is reconfigurable hardware, it is not an FPGA, Ungerechts said. The combination is optimized for feed-forward networks like convolutional neural networks commonly found in computer vision, and Renesas’ software stack allows either the whole AI workload to be passed to the DRP-AI or use of a combination of the DRP-AI and the CPU.

Also available with a DRP-AI engine are the RZ/V2MA and RZ/V2M, which offer 0.7 TOPS @ FP16 (they run faster than the -V2L at 630 MHz compared to 400, and have higher memory bandwidth).
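The quoted TOPS figures are consistent with 576 MACs at the stated clocks (counting a MAC as two ops), and the 1 TOPS/W claim follows from 0.5 TOPS at 500 mW; a quick cross-check:

```python
# Cross-checking the DRP-AI numbers quoted above: 576 MACs at the stated
# clocks should roughly reproduce the 0.5 and 0.7 TOPS figures
# (1 MAC = 2 ops), and 0.5 TOPS at 500 mW gives the quoted 1 TOPS/W.

MACS = 576
OPS_PER_MAC = 2  # one multiply + one accumulate

def tops(clock_hz):
    return MACS * OPS_PER_MAC * clock_hz / 1e12

rz_v2l = tops(400e6)       # marketed as 0.5 TOPS
rz_v2m = tops(630e6)       # marketed as 0.7 TOPS
tops_per_watt = 0.5 / 0.5  # 0.5 TOPS at 500 mW

print(round(rz_v2l, 2), round(rz_v2m, 2), tops_per_watt)  # 0.46 0.73 1.0
```

The marketing numbers are lightly rounded, but both devices line up with the same 576-MAC array simply run at different clocks.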

A next-generation version of the DRP-AI that supports INT8 for greater throughput, and is scaled up to 4K MACs, will be available next year, Ungerechts said.

Squint

Squint, an AI company launched earlier this year, is taking on the challenge of explainable AI.

Squint CEO Kenneth Wenger told EE Times that the company wants to increase trust in AI decision making for applications like autonomous vehicles (AVs), healthcare and fintech. The company takes pre-production models and tests them for weaknesses—identifying in what situations they are more likely to make a mistake.

This information can be used to set up mitigating factors, which might include a human-in-the-loop - perhaps flagging a medical image to a doctor - or triggering a second, more specialized model that has been specifically trained for that situation. Squint's techniques can also be used to tackle "data drift" - maintaining models over longer periods of time.

Embedl

Swedish AI company Embedl is working on retraining models to optimize them for specific hardware targets. The company has a Python SDK that fits into the training pipeline. Techniques include replacing operators with alternatives that may run more efficiently on the particular target hardware, as well as quantization-aware retraining. The company’s customers so far have included automotive OEMs and tier 1s, but they are expanding to Internet of Things (IoT) applications.

Embedl has also been a part of the VEDL-IoT project, an EU-funded project in collaboration with Bielefeld University that aims to develop an IoT platform, which distributes AI across a heterogeneous cluster.

Their demo showed managing AI workloads across different hardware: an Nvidia AGX Xavier GPU in a 5G basestation and an NXP i.MX8 application processor in a car. With sufficient 5G bandwidth available, “difficult” layers of the neural network could be computed remotely in the basestation, and the rest in the car, for optimum latency. Reduce the 5G bandwidth available, and more or all of the workload goes to the i.MX8. Embedl had optimized the same model for both hardware types.

The VEDL-IoT project demo shows splitting AI workloads across 5G infrastructure and embedded hardware. (Source: EE Times/Sally Ward-Foxton)

Silicon Labs

Silicon Labs had several xG24 dev kits running AI applications. One had a simple Sparkfun camera with the xG24 running people counting, and calculating the direction and speed of movement.

A separate wake word demo ran in 50 ms on the xG24’s accelerator, and a third board was running a gesture recognition algorithm.

BrainChip

BrainChip had demos running on a number of partner booths, including Arm and Edge Impulse. Edge Impulse’s demo showed the company’s FOMO (faster objects, more objects) object detection network running on a BrainChip Akida AKD1000 in under 1 mW.
 
  • Like
  • Love
  • Fire
Reactions: 23 users
So what is people's take on how Socionext does not already have an IP licence? They have literally designed products that rely on Akida.

I ask because, if they have somehow avoided the need, what sort of deal has been struck?

Why would they not sign it now rather than later, as they are not hiding the relationship or product?

Because if they can get away with not having one, is this the same situation that will apply to Prophesee?
At a guess, I would say that they may have received permission to use Akida for their help in taping out the AKD1000 and would only pay a royalty on chips produced. Who the hell knows what deals are being done.
 
  • Like
Reactions: 3 users