BRN Discussion Ongoing

I don't know if this has been posted, but this could be the Renesas offering...?

"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N-of-M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.
 
Yep, it was quiet, to me anyway.

Wasn't GML considered an early competitor at one stage, or am I wrong about that?


Grai Matter Labs quietly snapped up by Snap​

Nieke Roos
22 February

Last October, neuromorphic computing company Grai Matter Labs (GML) was acquired by Snap, the American developer of the instant messaging app Snapchat. Headquartered in Paris with offices in Eindhoven and Silicon Valley (San Jose), GML is working on a full-stack AI system-on-chip platform. The takeover didn’t receive much publicity except for a couple of mentions in French media, tracing back to an original article by the economic investigation site L’Informé. It appears that Snap considers it a confidential matter. According to L’Informé, GML was driven into American arms because it couldn’t find funding in Europe.
If they waste the technology on developing Snapchat to do better filters etc. 🙄 it can only be good for us.
 

IloveLamp

Top 20
We have a new employee at Brainchip🥰😘


Kurt Manninen

Senior Solutions Architect at Brainchip, Inc

Info

I am a Senior Solutions Architect at Brainchip, Inc. I work directly with Brainchip's current and future customers to help them utilize Brainchip's Akida Neuromorphic System-on-Chip to solve problems for computer vision (classification, object detection), audio processing (keyword spotting), sensor fusion, and anomaly detection.



Beat ya 😜

 
Last edited:

IloveLamp

Top 20
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N-of-M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.


Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-413765
 

IloveLamp

Top 20
🤔


[attached image: 1000013871.jpg]
 


Post in thread 'BRN Discussion Ongoing' https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-413765
I'm not sure what you're saying, IloveLamp.

Not using a cooling fan doesn't prove it's us, as the N:M pruning increases speed and efficiency, and the chip doesn't need one because of that.

What we need to know...

Is N:M pruning a renaming of the N-of-M coding used by AKIDA, or is it their own "inspiration" after examining how our IP works?

I don't think @Diogenese has found any Renesas patents regarding N:M pruning, but they have 18 months or something before any applications would have to be published?
 

IloveLamp

Top 20
I'm not sure what you're saying, IloveLamp.

Not using a cooling fan doesn't prove it's us, as the N:M pruning increases speed and efficiency, and the chip doesn't need one because of that.

What we need to know...

Is N:M pruning a renaming of the N-of-M coding used by AKIDA, or is it their own "inspiration" after examining how our IP works?

I don't think @Diogenese has found any Renesas patents regarding N:M pruning, but they have 18 months or something before any applications would have to be published?
I wasn't saying anything, simply showing you what had been previously posted. I prefer not to draw conclusions for others (most of the time 😏).
 

Tothemoon24

Top 20
[attached images: IMG_8541.jpeg, IMG_8540.jpeg]

wilzy123

Founding Member

Diogenese

Top 20
"Additionally, pruning technology employed in the DRP-AI3 accelerator significantly improves AI computing efficiency"

This is referring to the N:M pruning, I think, and is probably the same one that was previously discussed, though not at length?

My interpretation of what @Diogenese said is that this is more likely (or possibly) their own version of the N-of-M coding used in AKIDA.

It's also possible that they've just decided to change the terminology, as they have no need to mention us since they've bought the licence, as others have said.
Hi DB,

Our N:M coding is applied to the activation signal, a time-varying signal. The first N of M signals are processed and the remainder discarded. This is because the strongest signals from the optic nerves arrive before weaker signals (the stronger input signal causes the nerve to reach its firing threshold earlier). The strongest signals carry the most relevant information.

Renesas apply N:M coding to the static weights stored in memory, so there is no time element. Renesas base their selection on the magnitude of the stored signal. It's not about which arrives first. It's about which is strongest.

It's similar but different (and derivative).
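To make the distinction concrete, here's a rough Python sketch of the two ideas as described above. The function names, group sizes, and input values are my own, purely for illustration — this is not BrainChip's or Renesas's actual implementation:

```python
def n_of_m_temporal(arrival_times, n):
    """Akida-style N-of-M coding (as described above): process only the
    first n events to arrive; earlier arrival implies a stronger input."""
    earliest = sorted(range(len(arrival_times)),
                      key=lambda i: arrival_times[i])[:n]
    return [i in earliest for i in range(len(arrival_times))]

def n_to_m_weight_pruning(weights, n, m):
    """Renesas-style N:M pruning (as described above): in each group of m
    stored weights, keep the n largest magnitudes and zero out the rest."""
    pruned = []
    for g in range(0, len(weights), m):
        group = list(weights[g:g + m])
        # indices of the (m - n) smallest-magnitude weights in this group
        drop = sorted(range(len(group)), key=lambda i: abs(group[i]))[:m - n]
        for i in drop:
            group[i] = 0.0
        pruned.extend(group)
    return pruned

# Temporal selection: which events survive is decided by arrival order.
print(n_of_m_temporal([3.0, 1.0, 4.0, 2.0], n=2))
# -> [False, True, False, True]  (the two earliest events are kept)

# Static selection: which weights survive is decided by stored magnitude.
print(n_to_m_weight_pruning([0.1, -0.9, 0.05, 0.7, 0.3, -0.2, 0.8, 0.0],
                            n=2, m=4))
# -> [0.0, -0.9, 0.0, 0.7, 0.3, 0.0, 0.8, 0.0]  (2:4 sparsity per group)
```

The "similar but different" point falls straight out of the sketch: both keep N of M values, but one selects on a time axis at inference, while the other selects on stored magnitudes, offline, with no time element.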
 
Hi DB,

Our N:M coding is applied to the activation signal, a time-varying signal. The first N of M signals are processed and the remainder discarded. This is because the strongest signals from the optic nerves arrive before weaker signals (the stronger input signal causes the nerve to reach its firing threshold earlier). The strongest signals carry the most relevant information.

Renesas apply N:M coding to the static weights stored in memory, so there is no time element. Renesas base their selection on the magnitude of the stored signal. It's not about which arrives first. It's about which is strongest.

It's similar but different (and derivative).
So basically they've found a workaround to achieve a similar result to AKIDA's N-of-M coding.

That's only one aspect of what makes AKIDA special though, but considering their use case for our IP was limited (2 nodes), it looks like this may be the reason why Renesas "may" not have proceeded with producing chips with our IP (the "tape-out" was supposed to be at the end of 2023?).
Edit: end of 2022.

Sorry folks... but I'm the kind of person who will turn over stones, even though they may have a scorpion or centipede underneath
(as a kid, I was actually looking for lizards).
 

Slade

Top 20
So basically they've found a workaround to achieve a similar result to AKIDA's N-of-M coding.

That's only one aspect of what makes AKIDA special though, but considering their use case for our IP was limited (2 nodes), it looks like this may be the reason why Renesas "may" not have proceeded with producing chips with our IP (the "tape-out" was supposed to be at the end of 2023?).

Sorry folks... but I'm the kind of person who will turn over stones, even though they may have a scorpion or centipede underneath
(as a kid, I was actually looking for lizards).
What is your understanding of the term ‘tape out’?
 
What is your understanding of the term ‘tape out’?
Tape-out is actually a legacy term, from what they used to do in preparation for making chips.

Industry still uses the term though.
(I learnt that from LDN).

Without Googling, my simple explanation is that it is preparing the "masks" for the chips.

It's one of the first steps of producing the chips; it isn't "producing" the chips, and taping out a chip doesn't guarantee a chip will be produced.
 

Guzzi62

Regular
Tape-out is actually a legacy term, from what they used to do in preparation for making chips.

Industry still uses the term though.
(I learnt that from LDN).

Without Googling, my simple explanation is that it is preparing the masks for the chips.

It's one of the first steps of producing the chips; it isn't "producing" the chips, and taping out a chip doesn't guarantee a chip will be produced.
Article from Dec 2022.

Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.




I don't know if it actually happened; the article is quite interesting to read.

Quote:

Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.

The take up of the technology depends on the market adoption, he says.

“We want to see where the market reception is the highest, that is what determines whether we bring things in house or through a third party.”
 

Slade

Top 20
Tape-out is actually a legacy term, from what they used to do in preparation for making chips.

Industry still uses the term though.
(I learnt that from LDN).

Without Googling, my simple explanation is that it is preparing the "masks" for the chips.

It's one of the first steps of producing the chips; it isn't "producing" the chips, and taping out a chip doesn't guarantee a chip will be produced.
The reason I asked is because the tape-out was in Dec 2022, not the end of 2023. At this stage I am not concerned that they haven't released the chip. With time for fabrication and testing, along with a timed release to the market, I am not surprised that we haven't heard any news yet. We saw how long it took for Akida between taping out and getting the engineering chips to our EAPs. Renesas have been releasing a lot of new products lately and will want the market to absorb them before marketing yet another chip. That's not to say that some established Renesas customers haven't already got engineering samples in their hands. IMO
 
Article from Dec 2022.

Renesas is taping out a chip using the spiking neural network (SNN) technology developed by Brainchip.




I don't know if it actually happened; the article is quite interesting to read.

Quote:

Brainchip and Renesas signed a deal in December 2020 to implement the spiking neural network technology. Tools are vital for this new area. “The partner gives us the training tools that are needed,” he said.

The take up of the technology depends on the market adoption, he says.

“We want to see where the market reception is the highest, that is what determines whether we bring things in house or through a third party.”
From the same article. I mulled over this comment when it came out, about whether the "third party" inference was a client or a foundry, and also the reference to a "device", which implies a product of some sort, not just the chip, imo.

If a client, then we wouldn't see anything from Renesas, I suspect.

“Now you have accelerators for driving AI with neural processing units rather than a dual core CPU. We are working with a third party taping out a device in December on 22nm CMOS,” said Chittipeddi.

It was also interesting timing, as Renesas said they were doing 22nm CMOS, and we came out around January saying we had taped out (past tense) the 1500 in 22nm, but in FD-SOI. There was also an article by Nick Flaherty at the time saying we were working with Renesas to use the 1500 IP in an MCU, which would be a path for the Minsky AI engine in industrial use cases.
 
The reason I asked is because the tape-out was in Dec 2022, not the end of 2023. At this stage I am not concerned that they haven't released the chip. With time for fabrication and testing, along with a timed release to the market, I am not surprised that we haven't heard any news yet. We saw how long it took for Akida between taping out and getting the engineering chips to our EAPs. Renesas have been releasing a lot of new products lately and will want the market to absorb them before marketing yet another chip. That's not to say that some established Renesas customers haven't already got engineering samples in their hands. IMO
Yeah, I thought end of '23 didn't sound right.

Renesas are huge, so who knows what their plans are..

According to the recent BrainChip presentation, there is another physical chip with our IP in it, other than AKD1000 and AKD1500, but nobody seems too interested, curious, or excited about it.

[attached image: BrainChip presentation slide]


I've already made the observation that not only did they "highlight" the customer SoC, like "Look at this, folks!", but AKD2000 isn't there, because it's not physical yet.

This isn't a slide showing theoretical products, but actual physical integrated circuits, in my opinion.
 

TopCat

Regular

Probably old, but I hadn’t seen this before.​

Cupcake Edge AI Server in Full Production​

NEWS PROVIDED BY
EIN Presswire
Feb 21, 2024, 12:24 PM ET
Unigen Corporation Announces Milestone Achievement
NEWARK, CALIFORNIA, UNITED STATES, February 21, 2024 /EINPresswire.com/ -- Unigen Corporation proudly announces the successful production launch of its highly anticipated Cupcake Edge AI Server. The first units have been produced at our cutting-edge facilities in Hanoi, Vietnam, and Penang, Malaysia, marking a significant milestone in Unigen's commitment to delivering AI solutions to the global market.

Certified for compliance with FCC, CE, VCCI, KCC, and WEEE standards, Cupcake has successfully completed rigorous testing protocols, ensuring its adherence to the highest industry regulations and quality benchmarks. The initial production units have been delivered from our state-of-the-art facilities in Vietnam and Malaysia. With mass production tooling now in place, we are fully equipped to meet the escalating demand for Cupcake, empowering businesses worldwide with unparalleled AI capabilities.

"Bringing our Cupcake Edge AI Server to life has been an exciting journey for us at Unigen," shared Paul W. Heng, Unigen’s founder and CEO. "It's been a company-wide effort to quickly bring groundbreaking technology to the market. By seamlessly integrating every aspect of Cupcake, from the motherboard to the enclosures, and collaborating closely with our Silicon partners, we’re finally able to see our customers receiving the fruits of our effort."

About Cupcake
Unigen’s Cupcake Edge AI Server delivers a reliable, high-performance, low-latency, low-power platform for Machine Learning and Inference AI in a compact and rugged enclosure. Cupcake integrates a flexible combination of I/O Interfaces and expansion capabilities to capture and process video and multiple types of signals through its Power-Over-Ethernet (POE) ports, and then delivers the processed data to the client either over a wired or wireless network. Neural Networks are supported by the leading ISV providers allowing for a highly customizable solution for multiple applications.
Cupcake is a small form factor fanless design in a ruggedized case perfect for environments where Visual Security is important (e.g., secure buildings, transportation, warehouses, or public spaces). External interfaces included are Ethernet, POE, HDMI, USB 3.0, USB Type-C, CANbus, RS232, SDCard, antennas for WIFI, and internal interfaces for optional M.2 SATA III, M.2 NVMe and SO-DIMMs. The flexibility in IO renders the Cupcake Edge AI Server suitable for multiple applications and markets.


(my bold)
 
Last edited:

cosors

👀
3... 2... 0... and 🚀
[attached screenshot and GIF]
 

Boab

I wish I could paint like Vincent