BRN Discussion Ongoing

mcm

Regular
Our friends Nviso will be spruiking us at CES 2023 😉

Product Development
Following on from our announcement of successful neuromorphic interoperability last summer, we will now
be announcing support for two new high-performance AI Apps from our Human Behavior AI App catalogue,
Gaze and Action Unit Detection. We are making a full SDK release available for manufacturers looking to
deliver mission-critical human-interaction features meeting the most demanding performance, cost, and power
budgets. NVISO’s Neuro SDK running on the BrainChip platform will be demonstrated on the Socionext
stand at CES 2023.
Excellent! Can only think that those selling BRN are not privy to the extraordinary research being carried out on this forum ... because just reading today's posts ... you'd be mad to part with a single share. I'm delighted to buy more at sub 70c. What a gift!
 
  • Like
  • Fire
  • Love
Reactions: 43 users

jtardif999

Regular
I guess SOCIONEXT were telling the truth in 2019 despite the complaints of the scumbag manipulators:

“We are excited to join BrainChip in the design, development and introduction of Akida,” said Noriaki Kubo, Corporate Executive Vice President of Socionext. “Bringing artificial intelligence to edge applications is a major industry development and a strategic application segment for Socionext. Socionext provides suppliers such as BrainChip with a large engineering solutions platform, ranging from integrated circuit design through final test and assembly, to bring high quality products to market efficiently.”

“We are extremely pleased to work with Socionext, one of the world’s leading SoC development and manufacturing teams,” said Louis DiNardo, CEO of BrainChip. “With the firm backing by such a preeminent partner, we are confident we will be supplying the world’s first complete AI Edge network solution.”

If you panic about the next 4C should it not deliver income, you do not deserve to be a Brainchip shareholder.

My opinion only DYOR
FF

AKIDA BALLISTA
I wonder if BrainChip and Socionext came to some arrangement regarding Socionext not having to pay a licence fee for the Akida IP and then quietly went about their business embedding the IP in these products. It would make sense to me that they might come to this kind of arrangement since at that time BrainChip needed help to produce the Akida hardware and may have added in a sweetener to confirm the partnership with Socionext to conceive Akida and tape out for silicon.

Socionext may also have said: ‘We’ll make the tape-out process cheaper for you if you give us access to embedding your IP in our products without having to pay a licence fee.’ Who knows?
 
  • Like
  • Thinking
  • Fire
Reactions: 20 users

SERA2g

Founding Member
Out of the NVISO company update, @Diogenese I'd be keen to know if you've looked at the neuromorphic technology being developed by Axelera AI and GrAI Matter Labs?
 
  • Like
Reactions: 4 users

Dhm

Regular
So, in a couple of days we have discovered about Intel, and have had updates on both Socionext and NVISO. It doesn’t come much sexier than that.

sexy homer simpson GIF
 
  • Like
  • Haha
  • Love
Reactions: 38 users

Deleted member 118

Guest
So, in a couple of days we have discovered about Intel, and have had updates on both Socionext and NVISO. It doesn’t come much sexier than that.

sexy homer simpson GIF


 
  • Haha
  • Like
Reactions: 11 users

wilzy123

Founding Member


1671158433078.png
 
  • Like
  • Fire
  • Love
Reactions: 103 users

Townyj

Ermahgerd
  • Haha
  • Like
  • Love
Reactions: 37 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Well, I for one am getting a bit worried because my friends and I like to pull faces at each other in the car. It's just a "thing" we've developed over time which has turned into a bit of a competition I'm afraid. So my question is, what happens if NVISO's technology detects a human emotion it doesn't know how to classify? Do you think it brings the car safely to a stop and calls for emergency services to attend? Because that would be a bit embarrassing...



uCHmygB.gif
 
  • Haha
  • Like
  • Thinking
Reactions: 31 users

Damo4

Regular
My thoughts on the NVISO release:

We know we are working with Mercedes and this could extend to a link via NVISO for the monitoring and smart cabin features on the new head unit (1st German OEM).
My guess is the latest info we have now seen surrounding BMW with autonomous driving, monitoring etc is the 2nd German OEM.


Edit: Also, they don't "name" the Japanese OEM, yet the Brainchip segment is nested between two paragraphs mentioning Panasonic... smells like a duck, no?
 
  • Like
  • Fire
  • Love
Reactions: 35 users
ARM
EDGE IMPULSE
INTEL
ISL
MEGACHIPS
MERCEDES BENZ
MOSCHIPS
NASA
NUMEN
NVISO
PROPHESEE
RENESAS
SiFIVE
SOCIONEXT
VALEO

POLICE HAVE ADVISED THAT THE ABOVE NAMED ENTITIES ARE MEMBERS OF THE AKIDA CULT AND ARE LIKELY TO ATTRACT GENUINE RETAIL INVESTOR INTEREST.

CARE SHOULD BE TAKEN WHEN ESTABLISHING SHORT AND OR TRADING POSITIONS.
Seems like the Police were right: Socionext is a cult member.
 
  • Like
  • Haha
  • Love
Reactions: 33 users

HopalongPetrovski

I'm Spartacus!
Well, I for one am getting a bit worried because my friends and I like to pull faces at each other in the car. It's just a "thing" we've developed over time which has turned into a bit of a competition I'm afraid. So my question is, what happens if NVISO's technology detects a human emotion it doesn't know how to classify? Do you think it brings the car safely to a stop and calls for emergency services to attend? Because that would be a bit embarrassing...



View attachment 24508



Classify this Nviso, you bitch! 🤣

1671162390606.jpeg
 
  • Haha
  • Like
Reactions: 16 users

overpup

Regular
My thoughts on the NVISO release:

We know we are working with Mercedes and this could extend to a link via NVISO for the monitoring and smart cabin features on the new head unit (1st German OEM).
My guess is the latest info we have now seen surrounding BMW with autonomous driving, monitoring etc is the 2nd German OEM.


Edit: Also, they don't "name" the Japanese OEM, yet the Brainchip segment is nested between two paragraphs mentioning Panasonic... smells like a duck, no?
Socionext is part-owned by Panasonic (and Fujitsu, I think?)
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Diogenese

Top 20
Out of the NVISO company update, @Diogenese I'd be keen to know if you've looked at the neuromorphic technology being developed by Axelera AI and GrAI Matter Labs?
Hi Sera,

I flagged graiMatter a couple of years ago as one to watch.

This article discusses the different tech approaches of graiMatter and Akida:


Spiking Neural Networks: Research Projects Or Commercial Products?

Opinions differ widely, but in this space that isn’t unusual.

MAY 18TH, 2020 - BY: BRYON MOYER

https://semiengineering.com/spiking-neural-networks-research-projects-or-commercial-products/


Temporal coding is said by some to be closer to what happens in the brain, although there are differing opinions on that, with some saying that that’s the case only for a small set of examples: “It’s actually not that common in the brain,” said Jonathan Tapson, GrAI Matter’s chief scientific officer. An example where it is used is in an owl’s ears. “They use their hearing to hunt at night, so their directional sensitivity has to be very high.” Instead of representing a value by a frequency of spikes [ # rate coding # ], the value is encoded as the delay between spikes. Spikes then represent events, and the goal is to identify meaningful patterns in a stream of spikes.



Temporally coded SNNs can be most effective when driven by sensors that generate temporal-coded data – that is, event-based sensors. Dynamic vision sensors (DVS) are examples. They don’t generate full frames of data on a frames-per-second basis. Instead, each pixel reports when its illumination changes by more than some threshold amount. This generates a “change” event, which then propagates through the network. Valentian said these also can be particularly useful in AR/VR applications for “visual odometry,” where inertial measurement units are too slow.
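The per-pixel "change event" idea above can be sketched in a few lines. This is a toy illustration, not any vendor's implementation: the function name, threshold value, and (x, y, polarity) event format are all my own assumptions.

```python
# Toy sketch of how a dynamic vision sensor (DVS) pixel reports events:
# no full frames, just per-pixel change events when intensity moves by
# more than a threshold. Threshold and event format are illustrative.

def dvs_events(prev_frame, new_frame, threshold=0.2):
    """Compare two intensity snapshots and emit (x, y, polarity) events
    for pixels whose change exceeds the threshold."""
    events = []
    for y, (prev_row, new_row) in enumerate(zip(prev_frame, new_frame)):
        for x, (old, new) in enumerate(zip(prev_row, new_row)):
            delta = new - old
            if abs(delta) > threshold:
                # polarity: +1 for brightening, -1 for dimming
                events.append((x, y, +1 if delta > 0 else -1))
    return events

prev = [[0.5, 0.5], [0.5, 0.5]]
new = [[0.9, 0.5], [0.1, 0.55]]
print(dvs_events(prev, new))  # only the two changed pixels produce events
```

Note the sparsity: the nearly static pixel (change of 0.05) produces no output at all, which is where the bandwidth and power savings over frame-based cameras come from.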



Meanwhile, BrainChip started with rate coding, but decided that wasn’t commercially viable. Instead, it uses rank coding (or rank-order coding), which uses the order of arrival of spikes at a neuron (as opposed to their literal timing) as a code. This is a pattern-oriented approach, with arrivals in the prescribed order (along with synaptic weighting) stimulating the greatest response and arrivals in other orders providing less stimulation.



All of these coding approaches aside, GrAI Matter uses a more direct approach. “We encode values directly as numbers – 8- or 16-bit integers in GrAI One or Bfloat16 in our upcoming chip. This is a key departure from other neuromorphic architectures, which have to use rate or population or time or ensemble codes. We can use those, too, but they are not efficient,” said Tapson.



The [ # BrainChip #] neural fabric is fully configurable for different applications. Each node in the array contains four neural processing units (NPUs), and each NPU can be configured for event-based convolution (supporting standard or depthwise convolution) or for other configurations, including fully connected. Events are carried as packets on the network.

While NPU details or images are not available, [ # WO2020092691 published 20202507 # ] BrainChip did further explain that each NPU has digital logic and SRAM, providing something of a processing-in-memory capability, but not using an analog-memory approach. An NPU contains eight neural processing engines that implement the neurons and synapses. Each event is multiplied by a synaptic weight upon entering a neuron.

According to this article, GrAI Matter are not using SNNs. From their choice of 8-bit or 16-bit integers/FP, I assume they need a MAC matrix circuit to process weights and activations, as in a CNN. This is not a sparse process, as every bit must be processed; hence GrAI Matter would use more power and would be slower than Akida.

GrAI Matter's assertion that "other neuromorphic architectures ... have to use rate or population or time or ensemble codes" does not apply to Akida, which uses rank coding, from which Simon Thorpe's N-of-M code is derived. This is based on the discovery that the strongest signals trigger retinal receptors and pixels earlier than weaker signals. Most of the information is carried in the earlier-arriving spikes, and the later-arriving spikes can be discarded. When you think about it, this is quite like how the DVS/event camera works. N-of-M coding uses the order of arrival, and does not need to track the time of arrival. It just counts the first N spikes and closes the gate.

GrAI Matter uses 8-bit or 16-bit precision mathematics, whereas Akida uses inference based on probability. You may recall that some demonstrations of Akida show a bar chart with the probabilities of the subject item being one of a number of different articles, e.g. dog, cat, parrot, elephant. Akida does the comparison and selects the one which is the best fit. Of course, the model libraries are much larger than that.
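The bar-chart-and-best-fit behaviour described above can be sketched generically: raw scores for a few candidate labels are normalised to probabilities and the highest is picked. The labels, scores, and softmax normalisation are my own illustration of the demo's output, not Akida internals.

```python
# Minimal sketch of the "bar chart" demo output: normalise raw class
# scores to probabilities (softmax) and select the best fit.
import math

def classify(scores):
    """Softmax-normalise raw scores; return (best_label, probabilities)."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    probs = {label: e / total for label, e in exps.items()}
    best = max(probs, key=probs.get)
    return best, probs

raw = {"dog": 2.0, "cat": 0.5, "parrot": -1.0, "elephant": 0.1}
best, probs = classify(raw)
print(best)   # "dog" scores highest, so it is selected as the best fit
print(probs)  # the per-label probabilities behind the bar chart
```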

I find this an amazing leap of imagination, to conceive that such a process could be implemented in silicon, and N-of-M is pretty clever too.
 
  • Like
  • Fire
  • Love
Reactions: 35 users

Pmel

Regular
The one thing people who are selling, or waiting for the revenue to kick in, need to understand: they will be paying way more than the current SP. Are you okay taking that risk? You may still make a few-fold return then, or get in now and double it. I am waiting patiently and hopefully it's not too long. Fingers crossed.

My opinion.
DYOR
 
  • Like
  • Fire
  • Love
Reactions: 13 users

jk6199

Regular
With all this news I feel the pressure building.

Could be the excitement of next year, could be the bad curry I had.

Either way, we may see explosive actions hopefully soon!
 
  • Haha
  • Like
Reactions: 29 users

Deleted member 118

Guest
  • Haha
  • Like
Reactions: 8 users

Diogenese

Top 20
Well, I for one am getting a bit worried because my friends and I like to pull faces at each other in the car. It's just a "thing" we've developed over time which has turned into a bit of a competition I'm afraid. So my question is, what happens if NVISO's technology detects a human emotion it doesn't know how to classify? Do you think it brings the car safely to a stop and calls for emergency services to attend? Because that would be a bit embarrassing...



View attachment 24508
As my mother used to say "You'll get stuck like that."

Turned out to be true when we did the belly-barging competition.

https://www.bing.com/videos/search?...6D0F9DE6650B4674606D6D0F9DE6650B4&FORM=WRVORC
 
Last edited:
  • Haha
  • Like
Reactions: 10 users

SERA2g

Founding Member
Hi Sera,

I flagged graiMatter a couple of years ago as one to watch.

This article discusses the different tech approaches of graiMatter and Akida:


Spiking Neural Networks: Research Projects Or Commercial Products?

Opinions differ widely, but in this space that isn’t unusual.

MAY 18TH, 2020 - BY: BRYON MOYER

https://semiengineering.com/spiking-neural-networks-research-projects-or-commercial-products/


Temporal coding is said by some to be closer to what happens in the brain, although there are differing opinions on that, with some saying that that’s the case only for a small set of examples: “It’s actually not that common in the brain,” said Jonathan Tapson, GrAI Matter’s chief scientific officer. An example where it is used is in an owl’s ears. “They use their hearing to hunt at night, so their directional sensitivity has to be very high.” Instead of representing a value by a frequency of spikes [ # rate coding # ], the value is encoded as the delay between spikes. Spikes then represent events, and the goal is to identify meaningful patterns in a stream of spikes.



Temporally coded SNNs can be most effective when driven by sensors that generate temporal-coded data – that is, event-based sensors. Dynamic vision sensors (DVS) are examples. They don’t generate full frames of data on a frames-per-second basis. Instead, each pixel reports when its illumination changes by more than some threshold amount. This generates a “change” event, which then propagates through the network. Valentian said these also can be particularly useful in AR/VR applications for “visual odometry,” where inertial measurement units are too slow.



Meanwhile, BrainChip started with rate coding, but decided that wasn’t commercially viable. Instead, it uses rank coding (or rank-order coding), which uses the order of arrival of spikes at a neuron (as opposed to their literal timing) as a code. This is a pattern-oriented approach, with arrivals in the prescribed order (along with synaptic weighting) stimulating the greatest response and arrivals in other orders providing less stimulation.



All of these coding approaches aside, GrAI Matter uses a more direct approach. “We encode values directly as numbers – 8- or 16-bit integers in GrAI One or Bfloat16 in our upcoming chip. This is a key departure from other neuromorphic architectures, which have to use rate or population or time or ensemble codes. We can use those, too, but they are not efficient,” said Tapson.



The [ # BrainChip #] neural fabric is fully configurable for different applications. Each node in the array contains four neural processing units (NPUs), and each NPU can be configured for event-based convolution (supporting standard or depthwise convolution) or for other configurations, including fully connected. Events are carried as packets on the network.

While NPU details or images are not available, [ # WO2020092691 published 20202507 # ] BrainChip did further explain that each NPU has digital logic and SRAM, providing something of a processing-in-memory capability, but not using an analog-memory approach. An NPU contains eight neural processing engines that implement the neurons and synapses. Each event is multiplied by a synaptic weight upon entering a neuron.

According to this article, GrAI Matter are not using SNNs. From their choice of 8-bit or 16-bit integers/FP, I assume they need a MAC matrix circuit to process weights and activations, as in a CNN. This is not a sparse process, as every bit must be processed; hence GrAI Matter would use more power and would be slower than Akida.

GrAI Matter's assertion that "other neuromorphic architectures ... have to use rate or population or time or ensemble codes" does not apply to Akida, which uses rank coding, from which Simon Thorpe's N-of-M code is derived. This is based on the discovery that the strongest signals trigger retinal receptors and pixels earlier than weaker signals. Most of the information is carried in the earlier-arriving spikes, and the later-arriving spikes can be discarded. When you think about it, this is quite like how the DVS/event camera works. N-of-M coding uses the order of arrival, and does not need to track the time of arrival. It just counts the first N spikes and closes the gate.

GrAI Matter uses 8-bit or 16-bit precision mathematics, whereas Akida uses inference based on probability. You may recall that some demonstrations of Akida show a bar chart with the probabilities of the subject item being one of a number of different articles, e.g. dog, cat, parrot, elephant. Akida does the comparison and selects the one which is the best fit. Of course, the model libraries are much larger than that.

I find this an amazing leap of imagination, to conceive that such a process could be implemented in silicon, and N-of-M is pretty clever too.
Thank you for the response mate, appreciated greatly!
 
  • Like
Reactions: 6 users

Damo4

Regular
  • Haha
Reactions: 3 users