BRN Discussion Ongoing

The point is: who needs endless speculation without any background, day after day?
And if you don't understand the connection between speeding tickets and speculation activities like this forum, sorry, I can't help you.
Speculation, discussion, debate, gifs 🙄... are the lifeblood of share forums; why else be here at all?

Should we just shut down all share forums around the world?

I feel safe in asserting that the recent surge in volume, and the resulting share price movement, was not a result of speculation on this forum.

If any of this movement was attributable to this forum, then it was a direct result of the facts about us being present at the Intel Foundry Services presentation, not speculation.
 
  • Like
  • Fire
  • Love
Reactions: 33 users

Fenris78

Regular
Hi FK,

I wish that were true, but the available evidence does not support your conclusion.

The X320 is a mini version of the Prophesee Gen 4 DVS, having about a tenth the number of pixels.

Qualcomm has their own Hexagon NPU in Snapdragon 8.2.

View attachment 57991


Snapdragon 8 Gen 2 deep dive: Everything you need to know (androidauthority.com)



To boost performance, the Tensor accelerator inside the DSP has doubled in size for twice the performance and has new optimizations specifically for language processing. Qualcomm is also debuting what it calls micro tile inference support, essentially chopping up imaging and other problems into smaller tiles to save on memory at the expense of some result accuracy. Along those lines, the addition of INT4 also means that developers can now implement machine learning problems requiring high bandwidth at the expense of some accuracy if compressing a larger model. Qualcomm is providing tools to partners to help support INT4, so it will require a retooling of existing applications to work.
...
Qualcomm doubled the physical link between the image signal processor (ISP), Hexagon DSP, and Adreno GPU, driving higher bandwidth and lowering latency. This allows the Snapdragon 8 Gen 2 to run much more powerful machine-learning tasks on imaging data right off the camera sensor. RAW data, for instance, can be passed directly to the DSP/AI Engine for imaging workloads, or Qualcomm can use the link to upscale low-res gaming scenarios to assist with GPU load balancing.

So it is possible that Prophesee dumbed-down its DVS so it would work with Snapdragon 8.2.
Is there any chance Qualcomm has discarded its Hexagon NPU for Snapdragon 8.3... and seen the light with Akida?
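For anyone wondering what the INT4 trade-off mentioned in that deep dive actually looks like in practice, here is a minimal toy sketch of symmetric 4-bit weight quantization in NumPy (my own illustration of the general technique, not Qualcomm's implementation):

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric INT4 quantization: map floats onto signed 4-bit codes."""
    scale = float(np.max(np.abs(weights))) / 7.0  # signed 4-bit range is -8..7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4)).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)
# Rounding error is bounded by half a quantization step (scale / 2):
print(float(np.max(np.abs(w - w_hat))) <= scale / 2 + 1e-7)  # prints True
```

That bounded error is the "expense of some accuracy" the article refers to: you store 4 bits per weight instead of 32, roughly an 8x compression, and accept a small reconstruction error per weight.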
 
  • Like
  • Thinking
Reactions: 5 users

Sirod69

bavarian girl ;-)
The following is a link to a presentation containing a Brainchip created Competitive Analysis Chart at page 13.


You will note beyond Intel and IBM that Google Coral and Deep Learning Accelerators from Nvidia and others are listed and compared to AKIDA which ticks all the boxes.

Today we are told the competitors' names are not included, basically for politeness' sake.

Until today Brainchip has seen no reason to be so polite about Google, Nvidia & others when listing competitors failings compared to AKIDA technology.

What has changed???

My opinion only DYOR
Fact Finder
Do you think they might have come into closer contact with them? Conclusion: maybe they already work together somehow and that's why they're so polite?
 
  • Like
  • Love
  • Thinking
Reactions: 18 users

Xhosa12345

Regular
200w-11.gif


Ooooooo might be time for a trading parcel!!
 
  • Like
  • Haha
Reactions: 4 users
Hi FK,

I wish that were true, but the available evidence does not support your conclusion.

...

So it is possible that Prophesee dumbed-down its DVS so it would work with Snapdragon 8.2.
Thanks for the detailed response.
I've deleted my post to reduce potential confusion and avoid accusations of pumping :(
 
  • Haha
  • Like
  • Love
Reactions: 7 users

CHIPS

Regular
Germany -28.97% 😭

I remember well that last week I wrote that I hoped the stock price would not go down considerably because of the annual results, and somebody replied that this would not make sense, as all the figures were already known.

May I say that I do not like this -28.97% "does not make sense" at all?! 😡
 
  • Like
  • Haha
  • Love
Reactions: 6 users

cassip

Regular
Frankfurter Vermögen increased its BrainChip position in its "DigiTrends" fund from 3.98% (Feb 14) to 7.09% (Feb 23).
 
  • Like
  • Fire
  • Wow
Reactions: 32 users

Diogenese

Top 20
Frankfurter Vermögen increased its BrainChip position in its "DigiTrends" fund from 3.98% (Feb 14) to 7.09% (Feb 23).
It looks like Jürgen Brückner is on the ball:

https://www.frankfurter-vermoegen.com/institutionelle-kunden/unsere-fonds

Jürgen Brückner

  • Mainly responsible for DigiTrends since inception
  • Diploma in Economics
  • ➣ Co-founder and Managing Director of Wertefinder VV, 2009–2019
  • ➣ 25 years of asset management at Deutsche Bank and Dresdner Bank; Managing Director, Deutsche Bank Moscow
  • ➣ Management of a Japanese mutual fund (€500 million)
  • ➣ Various functions in derivatives and bond trading in Düsseldorf, Frankfurt, Moscow and Madrid
  • ➣ Over 10 years as an independent asset manager



https://www.frankfurter-vermoegen.com/institutionelle-kunden/unsere-fonds/digitrends-aktienfonds

DigiTrends is a pure equity fund that focuses primarily on technology, medical technology, environmental technology and renewable energies. High barriers to market entry ensure above-average growth rates and sustainable returns on sales in these business segments. An extremely important success factor is that not only established companies are taken into account, but also those that can be assigned to the so-called "Emerging Growth" values. These companies have an above-average position in the value chain due to their key technologies or competencies and are therefore able to achieve high margins. The products of these companies are indispensable for the production of a wide range of other products and are therefore used in a wide variety of sectors. Examples include augmented reality, the Internet of Things, 5G and wearables. This multiplier effect increases return opportunities while reducing risk within the framework of broad diversification.


https://documents.anevis-solutions.com/fraverm/DigiTrends_Aktienfonds_I.pdf 20240223

1709046826306.png
 
  • Like
  • Fire
  • Love
Reactions: 49 users

FKE

Regular
Hi toasty
On the time horizon, I will need to listen again, but I believe I heard Sean Hehir communicate that:

1. The restructure is complete and the sales team is now in place to drive sales during 2024.

2. They have strong relationships with partners evidenced by Intel displaying Brainchip centre stage.

3. The company is most definitely engaging with potential customers to sell further IP licences as it has to Renesas and MegaChips.

4. The Edge Boxes are an opportunity for which there appears to be strong demand and they have sufficient AKD1000 chips in hand to match that demand and the ability to produce more when required.

5. The move to produce an actual AKIDA 2.0 silicon chip is clearly no longer a certainty as they do not want customers who decide to produce silicon containing AKIDA 2.0 IP to feel threatened by Brainchip becoming a competitor.

(My wild speculation: to be considering permanently deferring production of AKIDA 2.0, having been so close to tape-out as stated by Todd Vierra at CES 2024, BrainChip must be very well advanced in some sort of real-world commercial engagement with a significant customer. Could it be Intel???)

While I don't know your age, I am 70, and I am still anticipating being here in February 2025, which should mean that, on the basis of today's presentation, I will be well positioned to enjoy the commercial success that will commence in 2024.

My opinion only DYOR
Fact Finder

Page 6 - Annual Report (Page 8 of the PDF)

Operational Highlights

.....While the Company did not secure royalty-bearing IP sales agreements in 2023, it laid the foundations for future commercial success through a more focused, targeted, and qualified customer engagement strategy that prioritized engagements with qualified technology customers that were already at an advanced stage of product development with defined budgets and timelines and where there were competitive opportunities to bring neuromorphic technology into consideration against existing products.

This strategy was rewarded with strong levels of interest from potential customers and an encouraging pipeline of detailed technical assessments which, if BrainChip is successful in managing, would lead to commercialization in 2024.
 
  • Like
  • Love
  • Fire
Reactions: 45 users

Kozikan

Regular
The following is a link to a presentation containing a Brainchip created Competitive Analysis Chart at page 13.


You will note beyond Intel and IBM that Google Coral and Deep Learning Accelerators from Nvidia and others are listed and compared to AKIDA which ticks all the boxes.

Today we are told the competitors' names are not included, basically for politeness' sake.

Until today Brainchip has seen no reason to be so polite about Google, Nvidia & others when listing competitors failings compared to AKIDA technology.

What has changed???

My opinion only DYOR
Fact Finder
👍👍👍👍👍
 
  • Like
Reactions: 3 users

cosors

👀
The point is: who needs endless speculation without any background, day after day?
And if you don't understand the connection between speeding tickets and speculation activities like this forum, sorry, I can't help you.
Wow, I am thrilled. I read here the proof that we retailers on TSE can move the SP significantly. So get to work dear TSEers! We can do it! Speeding tickets in a row! Yeah!

Are you serious?
You're a funny guy.

___
And yes I have read about the two pawn sacrifices on HC and ASIC alibis to be active in the last decades.

And me too, I don't need any help either. But thank you.
 
Last edited:
  • Haha
  • Fire
  • Like
Reactions: 15 users

charles2

Regular
  • Like
  • Love
  • Fire
Reactions: 30 users

Manhattan New York!
We've hit the big time in the university stakes: the Big Apple!

20240228_034659.jpg


This is now our 6th University program...

Yes, this is "only" future-priming material, but I love it 😉



We will clean up here, eventually..
 
  • Like
  • Love
  • Fire
Reactions: 33 users
Germany -28.97% 😭

I remember well that last week I wrote that I hoped the stock price would not go down considerably because of the annual results, and somebody replied that this would not make sense, as all the figures were already known.

May I say that I do not like this -28.97% "does not make sense" at all?! 😡
Hey... Keep your cool CHIPS 😉

20240228_040743.jpg
 
  • Haha
  • Like
  • Love
Reactions: 12 users

cosors

👀
Do you know CORDIS? I 'updated' it today for Talga, as they have several branches in Europe. But you could also look the other way round and check whether possible BrainChip R&D partners are confirmed here in the EU. All you have to do is enter the name in the top right-hand corner, select it, and press the dot, and you'll be shown the R&D network that is based in Europe, branches out into the world, and is covered by CORDIS. I find it very interesting.

CORDIS, which stands for Community Research and Development Information Service, is the research and development information service of the European Community and offers Internet users information on European research and development activities.


____

The networks are documented around the world. I already know that from TLG.

___
e.g. HORIZON projects
 
Last edited:
  • Fire
  • Like
  • Love
Reactions: 5 users

IloveLamp

Top 20
One for the list @Fact Finder


1000013676.jpg
 
  • Like
  • Fire
  • Love
Reactions: 38 users

IloveLamp

Top 20

1000013678.jpg
 
  • Like
  • Fire
Reactions: 12 users

cosors

👀
One for the list @Fact Finder


View attachment 58010
And an update for Neuromorphia!
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

Tothemoon24

Top 20

Renesas Develops New AI Accelerator for Lightweight AI Models and Embedded Processor Technology to Enable Real-Time Processing


By Businesswire
February 26, 2024

Results of Operation Verification Using an Embedded AI-MPU Prototype Announced at ISSCC 2024

Renesas Electronics Corporation, a premier supplier of advanced semiconductor solutions, today announced the development of embedded processor technology that enables higher speeds and lower power consumption in microprocessor units (MPUs) that realize advanced vision AI. The newly developed technologies are as follows: (1) A dynamically reconfigurable processor (DRP)-based AI accelerator that efficiently processes lightweight AI models and (2) Heterogeneous architecture technology that enables real-time processing by cooperatively operating processor IPs, such as the CPU. Renesas produced a prototype of an embedded AI-MPU with these technologies and confirmed its high-speed and low-power-consumption operation. It achieved up to 16 times faster processing (130 TOPS) than before the introduction of these new technologies, and world-class power efficiency (up to 23.9 TOPS/W at 0.8 V supply).
Amid the recent spread of robots into factories, logistics, medical services, and stores, there is a growing need for systems that can autonomously run in real time by detecting surroundings using advanced vision AI. Since there are severe restrictions on heat generation, particularly for embedded devices, both higher performance and lower power consumption are required in AI chips. Renesas developed new technologies to meet these requirements and presented these achievements on February 21, at the International Solid-State Circuits Conference 2024 (ISSCC 2024), held between February 18 and 22, 2024 in San Francisco.
The technologies developed by Renesas are as follows:
(1) An AI accelerator that efficiently processes lightweight AI models
As a typical technique for improving AI processing efficiency, pruning omits calculations that do not significantly affect recognition accuracy. However, the calculations that can be pruned are typically scattered randomly throughout AI models. This mismatch between the parallelism of hardware processing and the randomness of pruning makes processing inefficient.
To solve this issue, Renesas optimized its unique DRP-based AI accelerator (DRP-AI) for pruning. By analyzing how pruning pattern characteristics and a pruning method are related to recognition accuracy in typical image recognition AI models (CNN models), we identified the hardware structure of an AI accelerator that can achieve both high recognition accuracy and an efficient pruning rate, and applied it to the DRP-AI design. In addition, software was developed to reduce the weight of AI models optimized for this DRP-AI. This software converts the random pruning model configuration into highly efficient parallel computing, resulting in higher-speed AI processing. In particular, Renesas’ highly flexible pruning support technology (flexible N:M pruning technology), which can dynamically change the number of cycles in response to changes in the local pruning rate in AI models, allows for fine control of the pruning rate according to the power consumption, operating speed, and recognition accuracy required by users.
This technology reduces the number of AI model processing cycles to as little as one-sixteenth of those of pruning-incompatible models and consumes less than one-eighth of the power.
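As a rough illustration of the structured N:M pruning the release describes, here is a toy NumPy sketch (my own simplification, not Renesas' flexible N:M scheme): in every group of M consecutive weights, only the N largest-magnitude values are kept, so the hardware knows it can skip a fixed fraction of multiply-accumulates per group.

```python
import numpy as np

def prune_n_m(weights: np.ndarray, n: int, m: int) -> np.ndarray:
    """Structured N:M pruning: in every group of m consecutive weights,
    keep the n largest-magnitude values and zero out the rest."""
    groups = weights.reshape(-1, m).copy()  # total size must divide evenly by m
    # Indices of the (m - n) smallest-magnitude slots in each group:
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    np.put_along_axis(groups, drop, 0.0, axis=1)
    return groups.reshape(weights.shape)

w = np.arange(1, 9, dtype=np.float32).reshape(2, 4)
print(prune_n_m(w, n=2, m=4))
# Each group of 4 keeps only its 2 largest-magnitude weights.
```

Unstructured pruning zeroes weights anywhere, which is hard to exploit in parallel hardware; fixing the N-per-M pattern is what lets an accelerator schedule the remaining work efficiently.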
(2) Heterogeneous architecture technology that enables real-time processing for robot control
Robot applications require advanced vision AI processing for recognition of the surrounding environment. Meanwhile, robot motion judgment and control require detailed condition programming in response to changes in the surrounding environment, so CPU-based software processing is more suitable than AI-based processing. The challenge has been that CPUs with current embedded processors are not fully capable of controlling robots in real time. That is why Renesas introduced a dynamically reconfigurable processor (DRP), which handles complex processing, in addition to the CPU and AI accelerator (DRP-AI). This led to the development of heterogeneous architecture technology that enables higher speeds and lower power consumption in AI-MPUs by distributing and parallelizing processes appropriately.
A DRP runs an application while dynamically changing the circuit connection configuration between the arithmetic units inside the chip for each operation clock according to the processing details. Since only the necessary arithmetic circuits operate even for complex processing, lower power consumption and higher speeds are possible. For example, SLAM (Simultaneous Localization and Mapping), one of the typical robot applications, is a complex configuration that requires multiple programming processes for robot position recognition in parallel with environment recognition by vision AI processing. Renesas demonstrated operating this SLAM through instantaneous program switching with the DRP and parallel operation of the AI accelerator and CPU, resulting in about 17 times faster operation speeds and about 12 times higher operating power efficiency than the embedded CPU alone.
Operation Verification
Renesas created a prototype of a test chip with these technologies and confirmed that it achieved the world-class, highest power efficiency of 23.9 TOPS per watt at a normal power voltage of 0.8 V for the AI accelerator and operating power efficiency of 10 TOPS per watt for major AI models. It also proved that AI processing is possible without a fan or heat sink.
Utilizing these results helps solve heat generation due to increased power consumption, which has been one of the challenges associated with the implementation of AI chips in a variety of embedded devices such as service robots and automated guided vehicles. Significantly reducing heat generation will contribute to the spread of automation into various industries, such as the robotics and smart technology markets. These technologies will be applied to Renesas’ RZ/V series—MPUs for vision AI applications.
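A quick sanity check on the headline numbers above (my arithmetic, not Renesas data): if the chip really sustained its 130 TOPS peak at the best-case 23.9 TOPS/W, the implied power draw would be only a few watts. Peak throughput and peak efficiency are usually quoted at different operating points, so treat this as an order-of-magnitude estimate only.

```python
# Back-of-envelope cross-check of the press-release figures.
peak_tops = 130.0        # claimed peak AI throughput, TOPS
best_efficiency = 23.9   # claimed best-case efficiency, TOPS/W at 0.8 V

implied_watts = peak_tops / best_efficiency
print(f"Implied power at peak throughput: ~{implied_watts:.1f} W")  # ~5.4 W
```

A figure in the single-digit-watt range is at least consistent with their claim that AI processing is possible without a fan or heat sink.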
 
  • Like
  • Fire
  • Love
Reactions: 33 users