BRN Discussion Ongoing

cosors

👀

The BMW i Vision Dee is a future EV sport sedan that can talk back to you​

DEE.png

1672914641016.png

Looks great!
Afeela movement deep in my heart :ROFLMAO:

____
https://www.theverge.com/2023/1/4/23538742/bmw-i-vision-dee-concept-ces-av-ar-voice-assistant-e-ink
https://www.theverge.com/2023/1/4/23539863/sony-honda-electric-vehicle-afeela-ces-reveal-photos
_____
Why are they producing graphics like this ☝️ for CES 2023, and why a name like AFEELA?! I can't follow it. By the way, DEE stands for Digital Emotional Experience.
 
  • Like
  • Fire
  • Thinking
Reactions: 9 users

Kachoo

Regular
Screenshot_20230105-203212_Chrome.jpg

If this were not true, or were misleading, any of these companies would have had it removed instantly.
 
  • Like
  • Love
Reactions: 24 users

Ethinvestor

Regular
Damn it, sharing facts with other shareholders is a compulsion; I just cannot help it. I will try to find some false facts to bring balance to my posts, if I can overcome my compulsion to tell the truth:


Now think about this: if, when you run FOMO with AKIDA, you get performance figures that make Sony look like a failure, how will Sony and the market for edge semiconductors react???

My opinion only DYOR
FF

AKIDA BALLISTA
@factfinder thank you for the factual research and info. You're not just hyping, you're actually giving me usable facts. So rare these days…
As a professional investor I appreciate your work and the input you always contribute on this forum… and thank you for spending all that time teaching novices and putting them in their place :) 😁🙏
 
  • Like
  • Love
  • Fire
Reactions: 39 users
Had a crook day today and been passed out most of the day.

Didn't miss much by looks haha

Anyway, just saw this CES update from about 5 hrs ago and a piece on Prophesee.


CES 2023: how Auralpine startups plan to do well​

Olivia Carter, January 4, 2023

Having already industrialized the first four generations of its neuromorphic, bio-inspired sensor and raised 50 million in a Series C round at the end of 2022, another Isère-based company, Prophesee, has chosen to rent not a stand but a suite in the heart of one of the most prestigious hotels at the show, the Venetian, in order to meet a hundred potential prospects… and to present them with three technologies, each targeting a key market: a new sensor prototype, co-developed with the Sony group, aimed at image enhancement for mobile phones; a second sensor aimed at immersive experiences for augmented-reality players; and a presence-detection sensor for rooms, aimed at the IoT sector and co-developed with the American company BrainChip.

"This will be the first time that we show these demonstrators publicly; some of them will also be subject to confidentiality clauses," Luca Verre, CEO and co-founder of Prophesee, tells La Tribune. The company today has 110 employees and three locations (Grenoble, Paris and Shanghai).
I was just going to turn in, as I'd had a hectic day, and almost didn't have a final look. Am I glad that I did!

Great find, generously shared, FMF.

My opinion only DYOR
FF

AKIDA BALLISTA
 
  • Like
  • Fire
  • Love
Reactions: 25 users

Getupthere

Regular
 
  • Like
  • Fire
Reactions: 8 users
A thought bubble which I think is supported by logic and known facts.

The Thought:

Brainchip AKIDA technology has already cracked automotive.

The facts.

Socionext is presenting AKIDA technology for automotive use.

Renesas is presenting AKIDA technology for automotive use - the 3rd largest supplier of MCUs to automotive in the world.

VVDN is presenting AKIDA technology for automotive use.

ARM presents AKIDA technology for just about everything including automotive.

Nviso presents AKIDA technology for automotive use.

Brainchip AKIDA technology is trusted by Mercedes Benz.

Brainchip AKIDA technology is trusted by Valeo and the original EAP was to explore use cases in ADAS and AV.

FORD continues to be an ASX-announced customer for automotive.

EDGE IMPULSE supports AKIDA FOMO, which has an in-cabin automotive use case for driver fatigue and attention monitoring.

Brainchip has consistently stated in presentations, for years now, that they are working with automotive OEMs and vehicle manufacturers.

NASA- and DARPA-approved firms are working with AKIDA technology for radar guidance, cognitive communications and autonomous vehicle navigation, all of which are extreme technology use cases that could scale well into automotive on Earth.

Prophesee event based sensors are enhanced by AKIDA technology and use cases for these sensors are most certainly shown by Prophesee as being in automotive ADAS.

The issue for every electric vehicle, now and in the future, will always be battery life and range. Range increases the more of the battery that can be reserved for the driving wheels. AKIDA outcompetes GPUs on power and price by large factors before you even get to its unique one-shot learning and real-time performance, so it presents a compelling argument for adoption in ADAS and sensors in automotive, described by Edge Impulse as science fiction.
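Purely as a back-of-the-envelope illustration of that power argument, here is a tiny sketch with entirely made-up numbers (the pack size, trip length and compute loads come from neither FF's post nor any datasheet):

```python
# Illustrative only: hypothetical pack size, trip length and compute loads.
# Shows what fraction of an EV battery an always-on compute load consumes.
battery_kwh = 75.0   # assumed pack size
trip_hours = 5.0     # assumed drive time

def compute_share(load_watts: float) -> float:
    """Fraction of the pack consumed by the compute load over the trip."""
    return (load_watts * trip_hours / 1000.0) / battery_kwh

for label, watts in [("GPU-class ADAS stack", 150.0), ("low-power accelerator", 1.0)]:
    print(f"{label:>22}: {compute_share(watts):.4%} of the pack")
# GPU-class ADAS stack:  1.0000% of the pack
# low-power accelerator: 0.0067% of the pack
```

A single load looks small in isolation, but it repeats across every always-on sensor and ECU in the vehicle; every watt shaved off the compute stack goes back to the wheels.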

If you do not think the above is sufficient to justify the ‘Thought’ then please present the opposing facts for consideration.

My opinion only DYOR
FF

AKIDA BALLISTA
Let me get this straight FF. Are you saying Akida may be in cars? Nooo Waaay. That's outrageous! We could end up taking over the world at this rate.

SC
 
  • Haha
  • Like
Reactions: 15 users

HME909

Emerged
Rough day
 

GeorgHbg

Emerged
As a European from Germany, I am currently wondering if there is a regulation in Australia similar to the European one:

https://registers.esma.europa.eu/publication/searchRegister?core=esma_registers_mifid_shsexs

and here for Germany


I have the feeling that in Europe, short selling of shares is not the rule.
 
  • Like
  • Thinking
Reactions: 6 users

Bravo

If ARM were an arm, BRN would be its biceps 💪!
  • Like
  • Love
  • Sad
Reactions: 18 users
D

Deleted member 118

Guest
Had a crook day today and been passed out most of the day.

[… Prophesee CES 2023 article quoted above …]
Hope you're feeling better soon.
 
  • Like
  • Love
Reactions: 10 users
Please tag or message me if you answered the quiz question or the other option to win a month of subscription, blah, whatever rules applied. Will sort it later.
Dad's Live Life alarm went off early this morning, so I'm pretty stressed with the current situation.
I hope everything is ok with your Dad @Rise from the ashes
 
  • Like
  • Love
Reactions: 10 users

Deadpool

Did someone say KFC

The BMW i Vision Dee is a future EV sport sedan that can talk back to you

View attachment 26280
View attachment 26281

[…] Why are they producing graphics like this ☝️ for CES 2023, and why a name like AFEELA?!
That's one ugly-looking vehicle 🤢. Even the female avatar in the window looks like she wants to get out of it. 🆘 :LOL:
 
  • Like
  • Haha
Reactions: 23 users

Proga

Regular
Yeah. I wonder if Brainchip plan to do the big 2K reveal @ CES to gain maximum exposure.
Looks like Edge Impulse and BRN have that very plan in mind.
 
  • Like
  • Thinking
Reactions: 9 users

Proga

Regular
Let me get this straight FF. Are you saying Akida may be in cars? Nooo Waaay. That's outrageous! We could end up taking over the world at this rate.

SC
Not yet, but it soon will be. Everything points to 2025 models, which begin production in the second half of 2024. All models are planned years in advance.
 
  • Like
  • Love
  • Fire
Reactions: 6 users

Gman

Member
I think Xilinx AI (which they recently acquired) is said to be behind this.
They seem to have been working together in some capacity since 2017…


1672920297815.jpeg
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Diogenese

Top 20
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."
I have to ask something about Qualcomm again. What exactly do we know about this "AI accelerator chip"? Can someone say something about that?

The Snapdragon Ride Flex was first mentioned during Qualcomm’s Automotive investor day in September 2022, but more details are available now. The original Ride platform was based around a two-chip solution with an ADAS SoC and an AI accelerator chip.

Hi Sirod69,

This Qualcomm patent application relates to a large NN split over 2 or more SoCs, because the weights are too large for the on-SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206

1672918039636.png


[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result in a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.
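For concreteness, here is a minimal sketch of the dataflow that paragraph [0025] describes, in plain Python/NumPy. Everything here (the Accelerator class, the layer sizes) is a hypothetical illustration; it models only the weight split and the direct accelerator-to-accelerator hand-off, not Qualcomm's silicon:

```python
# Minimal sketch: a network too big for one accelerator is split across two,
# and intermediate activations pass directly between them, bypassing the host.
import numpy as np

rng = np.random.default_rng(0)

class Accelerator:
    """Holds only its own fraction of the network's weights (per [0025])."""
    def __init__(self, weights):
        self.weights = weights

    def forward(self, x):
        for w in self.weights:
            x = np.maximum(x @ w, 0.0)    # dense layer + ReLU
        return x                          # intermediate (or final) activations

# Four layers of weights; in the patent's scenario, all four together would
# exceed one accelerator's SRAM, so each SoC gets two.
layers = [rng.standard_normal((64, 64)) * 0.1 for _ in range(4)]
soc_a, soc_b = Accelerator(layers[:2]), Accelerator(layers[2:])

x = rng.standard_normal((1, 64))
intermediate = soc_a.forward(x)       # result handed straight to the next SoC,
result = soc_b.forward(intermediate)  # not routed through the host processor
print(result.shape)                   # (1, 64) -- host only sees the final output
```

The point of the claim is precisely that the `intermediate` hand-off happens over a dedicated interconnect rather than through host memory, which is what the PCIe-bandwidth discussion in [0024] is about.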

Now, I don't know how this differs from having 2 or more Akida 1000s connected up.

But if Qualcomm think they've invented it, that suggests that 2 years ago they were not planning to use Akida.

Our patent has a priority of 20181101, which pre-dates Qualcomm's priority by 3 months.
 
  • Like
  • Fire
  • Thinking
Reactions: 37 users

Edge AI Chip Company Syntiant Unveils NDP115 Neural Decision Processor at CES 2023​

https://www.semiconductor-digest.co...ndp115-neural-decision-processor-at-ces-2023/
"The Syntiant NDP115 is now shipping in production volumes. Pricing for 10Ku quantities is $3.25 per unit"

That's damn expensive actually, considering...
Then you have the cost of fitting it to whatever product it's going into, and no on-chip learning?

What's their cost to manufacture the chips?...

I wonder why they can't offer it as IP?
Maybe because of the analog/digital architecture 🤔...

BrainChip's IP royalties from customers could easily be a third of that cost (in volume), at next to no cost to us...
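Taking the article's 10Ku figure at face value, the arithmetic behind that guess looks like this (the one-third royalty is purely an assumption from the post above, not a known BrainChip rate):

```python
# Back-of-the-envelope only. The $3.25 figure is from the article; the
# one-third royalty is a guess, not a disclosed BrainChip licensing rate.
syntiant_unit_price = 3.25                 # NDP115 price at 10Ku quantities (USD)
assumed_royalty = syntiant_unit_price / 3  # hypothetical per-unit IP royalty

units = 10_000
print(f"Buying chips:   ${syntiant_unit_price * units:,.2f}")  # $32,500.00
print(f"Paying royalty: ${assumed_royalty * units:,.2f}")      # $10,833.33
```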

They'd better hope they have good OEM marketing..
 
  • Like
  • Fire
Reactions: 11 users

Sirod69

bavarian girl ;-)
"We’re proving that on-chip AI, close to the sensor, has a sensational future, for our customers’ products, as well as the planet."

Hi Sirod69,

This Qualcomm patent application relates to a large split NN over 2 or more SoCs because the weights are too large for the on SoC memory of a single NN SoC.

US2020250545A1 SPLIT NETWORK ACCELERATION ARCHITECTURE

Priority: 20190206

View attachment 26284

[0022] As noted, an artificial intelligence accelerator may be used to train a neural network. Training of a neural network generally involves determining one or more weights associated with the neural network. For example, the weights associated with a neural network are determined by hardware acceleration using a deep learning accelerator. Once the weights associated with a neural network are determined, an inference may be performed using the trained neural network, which computes results (e.g., activations) by processing input data based on the weights associated with the trained neural network.

[0023] In practice, however, a deep learning accelerator has a fixed amount of memory (e.g., static random access memory (SRAM) with a capacity of 128 megabytes (MB)). As a result, the capacity of a deep learning accelerator is sometimes not large enough to accommodate and store a single network. For example, some networks have weights of a larger size than the fixed amount of memory available from the deep learning accelerator. One solution to accommodate large networks is to split the weights into a separate storage device (e.g., a dynamic random access memory (DRAM)). These weights are then read from the DRAM during each inference. This implementation, however, uses more power and can result a memory bottleneck.

[0024] Another solution to accommodate large networks is splitting the network into multiple pieces and passing intermediate results from one accelerator to another through a host. Unfortunately, passing intermediate inference request results through the host consumes host bandwidth. For example, using a host interface (e.g., a peripheral component interconnect express (PCIe) interface) to pass intermediate inference request results consumes the host memory bandwidth. In addition, passing intermediate inference request results through the host (e.g., a host processor) consumes central processing unit cycles of the host processor and adds latency to an overall inference calculation.

[0025] One aspect of the present disclosure splits a large neural network into multiple, separate artificial intelligence (AI) inference accelerators (AIIAs). Each of the separate AI inference accelerators may be implemented in a separate system-on-chip (SoC). For example, each AI inference accelerator is allocated and stores a fraction of the weights or other parameters of the neural network. Intermediate inference request results are passed from one AI inference accelerator to another AI inference accelerator independent of a host processor. Thus, the host processor is not involved with the transfer of the intermediate inference request results.

The system passes partial results from one partial NN SoC to another NN SoC.

Now, I don't know how his differs from having 2 or more Akida 1000s connected up.

But, if Qualcomm think they've invented it, that suggests that 2 years ago, they were not planning to use Akida.

Our patent has a priority of 20181101 which pre-dates Qualcomm's priority by 3 months.

Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it still be possible? I see very strong connections between Qualcomm and BrainChip.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
Thank you @Diogenese for your answer.
Are you now completely ruling out that they are using Akida, or could it still be possible? I see very strong connections between Qualcomm and BrainChip.
Well, the patent is over 2 years old, and we do have partners in common with Qualcomm, so anything is possible, but I fear that, like Renesas, they will be reluctant to abandon their in-house development.

On the other hand, if their split NN infringes one or more of our patents ... ?

They may be able to avoid infringement because they use a Frankenstein (hybrid) NN which has several analog layers and a final digital layer.
 
  • Like
  • Love
  • Sad
Reactions: 19 users