BRN Discussion Ongoing

Diogenese

Top 20
Loihi is a test chip; the amount of silicon it occupies would be too big to be commercially viable, and AFAICT Intel have never tried to commercialise it. If Intel intend to embed neuromorphic processing in the PCs that they brand, then I reckon they must have come knocking on our door.
Hi jtardif999,

I've been down the Meteor Lake rabbit hole and the news is not good if the press speculation is correct.


Inside Meteor Lake: Intel’s radical new Core chip is optimized for the future (msn.com)



The NPU is one part of that. Intel’s NPU includes a pair of neural compute engines, each with two VLIW SHAVE DSPs inside, with inference engines capable of up to eight instructions per cycle. Even for consumers used to parsing the number of cores per chip, base clocks, and turbo clocks, this won’t make a lot of sense. What Intel is trying to convey is that AI requires a ton of multiply-accumulate (MAC) instructions per cycle, and that those engines can perform 2,048 MAC calculations each.
...
Intel’s secret sauce, though, isn’t just the AI NPU, but how the CPU, GPU, and NPU can all help assist each other. Take the following example. Running 20 iterations of Stable Diffusion, Intel tried various combinations: performing all of the calculations on the CPU, all on the GPU, all on the NPU, and a combination of all three. Performing them all on the NPU took 20.7 seconds and 10 total watts, the most efficient use. But performing them all on the GPU and NPU finished the task in 11.3 seconds, consuming 30W.

Akida does not do "instructions per cycle".
Akida does not do lots of MACs.
Akida does not use Watts.
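
To put some rough numbers on the quoted benchmark and the MAC point, here's a back-of-the-envelope sketch (Python, purely illustrative; the layer sizes and the ~90% input sparsity are assumptions, not Akida or Meteor Lake specifications). It converts the article's power and time figures into joules, and compares how many multiply-accumulates a dense MAC engine performs with how many an event-driven engine would perform if it only did work for non-zero inputs.

```python
# Back-of-the-envelope only: figures from the quoted article, plus a toy
# comparison of dense MAC compute vs. event-driven (sparse) compute.
# Layer sizes and the ~90% input sparsity are assumptions for illustration.
import numpy as np

# Energy = power x time for the Stable Diffusion example quoted above.
npu_only_energy_j = 10 * 20.7       # 10 W for 20.7 s  -> ~207 J
gpu_plus_npu_energy_j = 30 * 11.3   # 30 W for 11.3 s  -> ~339 J
print(f"NPU only ~{npu_only_energy_j:.0f} J, GPU+NPU ~{gpu_plus_npu_energy_j:.0f} J")

# Toy fully connected layer: a MAC engine performs every multiply-accumulate,
# while an event-driven engine only does work for non-zero ("spiking") inputs.
rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
weights = rng.standard_normal((n_in, n_out))

activations = rng.standard_normal(n_in)
activations[rng.random(n_in) < 0.9] = 0.0           # assume ~90% of inputs silent

dense_macs = n_in * n_out                            # every input costs a MAC
event_macs = np.count_nonzero(activations) * n_out   # only active inputs cost

output = activations @ weights                       # same mathematical result
print(f"dense MACs: {dense_macs}, event-driven MACs: ~{event_macs}")
```

On the article's own figures, the NPU-only run comes out at roughly 207 J versus roughly 339 J for GPU+NPU, so the faster combination costs about 1.6 times the energy; the toy layer is only meant to show why event-driven work scales with input activity rather than with layer size.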
 
  • Like
  • Fire
  • Love
Reactions: 38 users

jtardif999

Regular
Indeed... my comment was about whether they did want to embed neuromorphic processing - based on the above, it sounds like not.
 
  • Like
Reactions: 3 users

Frangipani

Top 20
There's some IFS and BUTs here. Let's start with Intel Foundry Services ...
… and leave aside any Baseless Unfounded Theories for the time being? 🤭

Just an observation on my part, which may or may not be of any significance:
I’ve been wondering (especially after Colleen Vitolo’s comment “Great job Todd and Rob!” that @Dhm had spotted on Sept 8: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-359269) whether there was any particular reason why Tony Dawe did not comment on the fact that the entity under number 17 in Fact Finder’s list of corporate and academic engagements reads “Intel” rather than “Intel Foundry Services (IFS)”.

[attached image]


It may just seem a technicality, as of course Intel is the company behind IFS, and Brainchip even uses the (old, pre-2020) Intel logo on their partnership website, whereas recent Brainchip presentations have been using the IFS logo instead. But my point is that listing “Intel” could suggest to readers that there is more collaboration with Brainchip than just the foundry business, which is, however, the only technology partnership between Brainchip and Intel that has so far been officially announced. So in a list of relationships that have been publicly acknowledged to date, I’d personally expect to find “IFS” rather than “Intel”.

I was actually going to point this out to @Fact Finder himself after he had reposted that compilation of companies and universities in his recent article questioning NVIDIA’s long-term survival, but I noticed he has since changed the name from “Intel” to “Intel Foundries”. Maybe he is willing to shed some light on the reason for his edit? Was it his own proofreading or someone else’s? And if the latter, was that someone else by any chance a Brainchip employee?

So the question is: does the fact that neither Rob Telson nor his colleagues in the US found it necessary to correct the entity’s name under number 17 on the list (initially, at least) indicate that there is further collaboration with Intel beyond the foundry business alone (even though no public announcement to that effect has so far been made), or do they simply not bother to differentiate between Intel Foundry Services and Intel? 🤔
 
  • Like
  • Love
  • Fire
Reactions: 14 users

Frangipani

Top 20
Yes, it’s very interesting if Sean has kept his cards very close (glued?) to his chest.
An interesting part of the news report discussion between the reporters was that the lady mentioned the word “foundry” linked to Intel’s new AI chip in the PC, hopefully to be rolled out in mid-Dec 23.
So if Intel mentioned it’s from the foundry, then why would it be its own IP etc. in their AI chip?

Yes, the lady said just “foundry” and not “Intel Foundry Services”, so I’m just stating it as it was mentioned. It’s the same thing really in reference to Intel in this manner during their discussion.

The below press release dated June 21st may provide an answer to your question:



“Intel leaders told analysts and investors during a webinar Wednesday that its transition to a new internal foundry model will be a key enabler to achieving its stated cost savings goal of more than $8-10 billion exiting 2025. In this new operating model, Intel’s internal product groups move to a foundry-style relationship with the company’s manufacturing group. As a result, company execs say they are projecting a broad class of increased efficiencies that will be reflected in greater profitability as Intel pursues its long-term ambition to achieve non-GAAP gross margins of 60%.

The Background:

Intel is embarking on the most significant business transformation in its 55-year history. With IDM 2.0, Intel set out to regain process technology leadership, expand the use of third-party foundry capacity and build a world-class foundry business with a significant expansion of Intel’s manufacturing capacity. With these efforts well on track, Intel now is making a fundamental shift in how its product business units work with technology development and manufacturing to ensure long-term growth while achieving efficiencies and cost savings.

In this new “internal foundry” model, Intel’s product business units will engage with the company’s manufacturing group in a similar arm’s-length fashion that fabless semiconductor companies engage with external foundries.
 
  • Like
  • Fire
Reactions: 16 users
I'm loving Sean's level of enthusiasm here:


"As a leader in Edge AI, we see just how ubiquitous the technology will be and will highlight how we are driving AI into a plethora of use cases that weren’t possible before.”
 
  • Like
  • Love
  • Fire
Reactions: 51 users

FlipDollar

Never dog the boys
That's my issue with the forum and the reason all of my friends left months ago already.
As soon as someone does not agree, logical thinking just stops, and you get called a liar, downramper, stupid, or have gifs thrown at you, ladididi dadidi.
A product, the only one you have, that three years into commercialization doesn't sell and even gets called not robust enough by its own company, is a failure. End of story.

I can name you an example of a lie regarding this topic.
When the technology was ready for licensing, they said that revenue would outgrow expenses. They said that there would be a lot of IP licensing going forward, or used other phrases like it'd become the de facto standard.
Regarding the second gen, there was only an outdated timeline available.
I don't recall when it was, but I think it was some months, maybe even only some weeks, before the last AGM that someone dropped that "AKD1000 was never intended to be a revenue stream".
Now that's a lie with consequences that we can all witness right now.
Can't wait for people making excuses like "he was only talking about physical chips and not IP" or whatever people will come up with.
I agree - it’s sickening some days reading grown adults fanboying over specific posters’ long-winded monologues too.

The echo chamber is alive and well and a balance of opinion is too hard for many on here.
 
  • Like
  • Love
  • Haha
Reactions: 13 users

AARONASX

Holding onto what I've got
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 30 users

AARONASX

Holding onto what I've got
 
  • Like
  • Fire
  • Love
Reactions: 32 users
  • Like
  • Fire
  • Love
Reactions: 23 users

TheFunkMachine

seeds have the potential to become trees.
“The Emerging Growth in AI Conference is an ideal venue to showcase how advanced IoT infrastructure and AI technologies inspired by the human brain, deliver highly efficient problem solving, with more intuitive and secure human-to-machine interactions,” said Hehir. “As a leader in Edge AI, we see just how ubiquitous the technology will be and will highlight how we are driving AI into a plethora of use cases that weren’t possible before.”
https://brainchip.com/brainchip-to-...ng-growth-in-ai-presented-by-maxim-group-llc/
 
  • Like
  • Fire
  • Love
Reactions: 25 users

skutza

Regular
"As a leader in Edge AI, we see just how ubiquitous the technology will be and will highlight how we are driving AI into a plethora of use cases that weren’t possible before" ... then under his breath: (Unfortunately, other than playing with it, nobody outside the company seems to give a rat's arse).
 
  • Like
  • Haha
Reactions: 18 users
Akida does not do "instructions per cycle".
Akida does not do lots of MACs.
Akida does not use Watts.
Hi Dio

Should your final summary read with the two simple words "operating alone" added, i.e.:

“Akida operating alone does not do "instructions per cycle".
Akida operating alone does not do lots of MACs.
Akida operating alone does not use Watts.”

I say this because surely a PC containing an SoC in operation would not just use one part alone. All 4 tiles contain compute functions and would be in operation. Akida could still be in there.

It’s a bit like mixed fruit juice containing apples and oranges. Doesn’t taste like apple. Doesn’t taste like orange. But still contains both 😜
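
For anyone who wants the fruit-juice analogy in concrete form, here's a toy sketch (Python; all tile names, wattages and throughputs are invented for illustration, not figures for Meteor Lake or Akida) of why measurements taken on a heterogeneous SoC reflect whichever combination of tiles is active, rather than any single unit on its own.

```python
# Toy sketch of the "mixed fruit juice" point: when several tiles run one
# workload together, the time and energy you observe belong to the blend,
# not to any single unit. All names and numbers are made up; this is not a
# model of Meteor Lake, Akida, or any real SoC.
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    watts: float              # assumed power draw while active
    items_per_second: float   # assumed throughput on this workload

def run_blended(tiles, work_items):
    """Assume the tiles work in parallel and report blended time and energy."""
    total_rate = sum(t.items_per_second for t in tiles)
    seconds = work_items / total_rate
    energy_j = sum(t.watts for t in tiles) * seconds
    return seconds, energy_j

soc = [Tile("CPU tile", 15.0, 2.0), Tile("GPU tile", 25.0, 8.0), Tile("NPU tile", 5.0, 6.0)]

for combo in ([soc[2]], soc[1:], soc):   # NPU only, GPU+NPU, all three tiles
    t, e = run_blended(combo, work_items=160)
    print([tile.name for tile in combo], f"-> {t:.1f} s, {e:.0f} J")
```

With these made-up numbers the NPU-only run is the cheapest in joules while the combinations finish sooner at higher power, which is the same general shape as the Stable Diffusion figures quoted earlier in the thread.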
 
  • Like
  • Love
Reactions: 17 users

skutza

Regular
Yes, I've gone slightly mad.....

[attached image]
 
  • Haha
  • Like
Reactions: 11 users

BrainChip Engages VVDN to Deliver Industry’s First Commercial Edge Box Based on Neuromorphic Technology

September 20, 2023 05:40 PM Eastern Daylight Time

LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI IP, today announced that it has partnered with VVDN Technologies, a premier electronics engineering and manufacturing company, to deliver the industry’s first Edge box based on neuromorphic technology.

“Edge AI systems specifically architected to meet the performance, power, cooling, portability, and cost requirements are necessary to drive market growth.”

VVDN Technologies is an industry-leading device manufacturer and solutions provider that has extensive experience in developing and deploying vision-based solutions for various domains, such as automotive, industrial, security surveillance, enterprise, medical and others. VVDN will complement BrainChip’s AI capability with their domain expertise in hardware design, firmware development, cloud integration and manufacturing for the Edge box product.

“We are excited to bring the benefits of neuromorphic computing to the Edge AI market with VVDN as our lead partner,” said Sean Hehir, CEO of BrainChip. “This portable and compact Edge box is a game-changer that enables customers to deploy AI applications cost-effectively with unprecedented speed and efficiency to proliferate the benefits of intelligent compute.”

The Edge box is a compact and powerful device that can run various AI applications at the Edge of the network, such as video analytics, face recognition, object detection and more. The Edge box leverages BrainChip’s Akida™ processors, which are designed to mimic the structure and function of the human brain. Akida processors offer high performance, low power consumption and scalability for Edge AI solutions.

“Edge boxes are a rapidly growing segment in AI and are currently based on platforms from major players,” said Bram Geenen, Co-founder of Wevolver, providers of one of the most subscribed Edge AI analyst reports. “The cost-effectiveness, efficiency and scalability of BrainChip’s Akida neuromorphic processor, coupled with VVDN’s solutions expertise should deliver a boost to the proliferation of customizable and secure AI applications at the Edge.”

“Edge AI is currently dominated by Nvidia and Qualcomm. But these systems are not ideal for the Edge from a power, size, and cost perspective,” said Marc Staimer, President Dragon Slayer Consulting. “Edge AI systems specifically architected to meet the performance, power, cooling, portability, and cost requirements are necessary to drive market growth.”

The Edge box will be available for presale from BrainChip and VVDN later this year.

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the worldwide leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain, analyzing only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Akida uniquely enables Edge learning local to the chip, independent of the cloud, dramatically reducing latency while improving privacy and data security. Akida Neural processor IP, which can be integrated into SoCs on any process technology, has shown substantial benefits on today’s workloads and networks, and offers a platform for developers to create, tune and run their models using standard AI workflows like Tensorflow/Keras. In enabling effective Edge compute to be universally deployable across real world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers’ products, as well as the planet. Explore the benefits of Essential AI at www.brainchip.com.

Follow BrainChip on Twitter: https://www.twitter.com/BrainChip_inc
Follow BrainChip on LinkedIn: https://www.linkedin.com/company/7792006

Contacts
Media Contact:
Mark Smith
JPR Communications
818-398-1424

Investor Contact:
Tony Dawe
Director, Global Investor Relations
BrainChip
tdawe@brainchip.com


BRAINCHIP HOLDINGS LTD
ASX:BRN
 
  • Like
  • Fire
  • Love
Reactions: 94 users

AARONASX

Holding onto what I've got
 
  • Like
  • Fire
  • Love
Reactions: 54 users

Bloodsy

Regular

BrainChip Engages VVDN to Deliver Industry’s First Commercial Edge Box Based on Neuromorphic Technology

Rabbit out of the hat, while the building is on fire/the sky is falling

Classic BRN :cool::ROFLMAO:
 
  • Like
  • Haha
  • Fire
Reactions: 21 users

Townyj

Ermahgerd

BrainChip Engages VVDN to Deliver Industry’s First Commercial Edge Box Based on Neuromorphic Technology

hotline bling drake GIF
 
  • Like
  • Haha
  • Fire
Reactions: 16 users

Tothemoon24

Top 20
Rabbit out of the hat, while the building is on fire/the sky is falling

Classic BRN :cool::ROFLMAO:
Is it a rabbit out of the 🎩 or a hehir out of the ass

😜
 
  • Haha
  • Like
  • Fire
Reactions: 29 users

BaconLover

Founding Member
[attached screenshot]


Old news from last year.

Also, not announced on ASX, so immaterial.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 8 users