BRN Discussion Ongoing

Diogenese

Top 20
Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biased.
Come to think about it, maintaining and updating models could be a nice little ongoing earner.
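For anyone wondering what the bit-width trade-off actually looks like, here's a rough sketch in plain Python (my own illustration, nothing to do with BrainChip's actual tooling): uniform quantization of weights to a chosen bit-width, and the storage saving that falls out of it.

```python
# Rough illustration (not BrainChip tooling): uniform quantization of a
# weight tensor to a given bit-width, and the resulting storage cost.

def quantize(weights, bits):
    """Map floats in [-1, 1] to signed integers of the given bit-width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 for 4-bit, 127 for 8-bit
    return [round(w * levels) for w in weights]

def storage_bits(n_params, bits):
    """Bits needed just to store the weights."""
    return n_params * bits

weights = [0.82, -0.33, 0.05, -0.91]
print(quantize(weights, 4))   # coarse levels: [6, -2, 0, -6]
print(quantize(weights, 8))   # finer levels, twice the storage of 4-bit
print(storage_bits(1_000_000, 4) / storage_bits(1_000_000, 8))  # 0.5
```

Halving the bits halves the weight storage; the price is coarser weight levels, which is why 4-bit models generally need to be trained or tuned with that precision in mind.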
 
  • Like
  • Fire
  • Love
Reactions: 13 users

wilzy123

Founding Member
Forget the edge boxes guys, we will start to get traction in the next 3 quarters from auto, industrial and the European space industry!!! 🚀 🌌..... Yeah! ....
Caution: delusional up-ramper... who is creeping into top-holder territory on pure faith and speculation....
God save us.

Autonomous Vehicles, Military and Defense, Unmanned Aerial Vehicles, Robotics
 
  • Like
Reactions: 10 users

Boab

I wish I could paint like Vincent
  • Like
  • Fire
Reactions: 7 users

Slade

Top 20
Nice update to our website.

 
  • Like
  • Fire
  • Love
Reactions: 37 users

SERA2g

Founding Member
The question that should have been raised with Sean at the AGM is: what is BRN’s strategy if none of the current client engagements succeed or extend? If evaluating Akida involves 2-3 year lead times before any product commitment, where does that leave the business if these engagements fail? Clearly none have succeeded to date; in my view the company has been poorly managed, and Sean is not the right CEO to instil confidence in shareholders and steer BRN in the right direction.
Ok Bacon Lover 🤣
Can see your writing style from a mile away hahahahaah
 
  • Haha
  • Like
  • Wow
Reactions: 14 users
Wow, we never ended up red like the rest of the ASX.

 
  • Haha
  • Like
Reactions: 11 users

CHIPS

Regular
Been a few years since our last child was born, but it brings back so many happy memories after becoming a grandad. View attachment 68922

Very cute :love: ... the baby I mean :LOL:!

Congratulations to the parents and grandparents 💐💐.
Have a great time with her (it's a girl?) but don't give her too many kisses 😁

 
  • Like
  • Love
  • Haha
Reactions: 6 users

manny100

Top 20
Buy a few edge boxes; that will help your current holding, Mate.

On a serious note, it's quite interesting.

We have finally been given guidance with some growth numbers; modest as they are, it's progress.

Edge boxes are for sale again, with a price increase to 1,495 USD.

Clearly not indicative of what the company's SP reflects, in a way; just my opinion.

So from what I see, it looks like Akida 1000 is, and will still be, produced, whereas Akida 2.0 seems to be IP-only for some reason, likely the higher cost to make and the better performance. Maybe some other reasons one can speculate on, such as a non-compete clause for a buyer, as was mentioned.
I think you might find there is a fair bit of interest in Gen 2 and the engagements.
Chips, on that basis, are not necessary for Gen 2.
The AKIDA 1000 and 1500 chips were produced to get the industry's head around "this new-fangled contraption".
 
  • Like
Reactions: 3 users

rgupta

Regular
I concur, Diogenese 👍
And he said we "will" be the first @Bravo.

However, KAIST claimed to be running GPT-2 in full on their neuromorphic research chip back in March this year, which postdates TL's comments.


"Neuromorphic computing is a technology that even companies like IBM and Intel have not been able to implement, and we are proud to be the first in the world to run the LLM with a low-power neuromorphic accelerator," Yoo said.
So they said they can do transformers as well on their SNN.
As per BrainChip, we can also take a transformer load with Akida 2.0, and the same is doable because of TENNs.
On top of that, BrainChip also said earlier that the results were very encouraging when comparing Akida with GPT-2.
Is that a coincidence??
On top of that, the processor used is Samsung's, on 28 nm.
 
  • Like
  • Fire
Reactions: 6 users
So they said they can do transformers as well on their SNN.
As per BrainChip, we can also take a transformer load with Akida 2.0, and the same is doable because of TENNs.
On top of that, BrainChip also said earlier that the results were very encouraging when comparing Akida with GPT-2.
Is that a coincidence??
On top of that, the processor used is Samsung's, on 28 nm.
I don't think KAIST has anything to do with us, personally.

Process size doesn't mean anything; it's just a good, proven node that doesn't cost as much as the smaller ones (nobody is going to produce "research chips" in 7 nm, for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise that Samsung is involved, and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip is inspired, in my opinion.

And I'd love for Samsung to be on board.
 
  • Like
  • Fire
  • Love
Reactions: 8 users

Diogenese

Top 20
I don't think KAIST has anything to do with us, personally.

Process size doesn't mean anything; it's just a good, proven node that doesn't cost as much as the smaller ones (nobody is going to produce "research chips" in 7 nm, for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise that Samsung is involved, and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip is inspired, in my opinion.

And I'd love for Samsung to be on board.
Yes. KAIST are into analog. The term "in-memory compute" is usually used in relation to analog, in that the calculations are performed by the memory circuits by accumulating a voltage whose amplitude is proportional to the number of input signals.
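A toy picture of that accumulation, in Python rather than circuits (my sketch, not KAIST's actual design): the weights sit in the memory cells as conductances, and every active input line adds its cell's contribution to the charge accumulating on the output line.

```python
# Toy model of analog in-memory compute (illustrative only, not KAIST's
# design): each active input line drives a current proportional to its
# cell's conductance, and the accumulated charge is the dot product.

def analog_mac(inputs, conductances):
    """Sum of currents: each active input (1) contributes its cell's conductance."""
    return sum(g for x, g in zip(inputs, conductances) if x)

spikes = [1, 0, 1, 1]          # binary input events on the word lines
g = [0.5, 0.8, 0.2, 0.1]       # weights stored in-memory as conductances
print(analog_mac(spikes, g))   # ~0.8 accumulated on the output line
```

The multiply-accumulate happens "for free" in the physics of the memory array, which is where the power advantage comes from; the cost is analog noise and the precision limits Diogenese alludes to.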
 
  • Like
  • Fire
Reactions: 10 users

KKFoo

Regular
Because of Akida's 4-bit capability, it can have "home-made" models which are more compact than the standard 8-bit models, and, of course, Akida can handle the 8-bit models as well, with the accompanying reduction in efficiency advantage. And let's not forget Akida's 1 and 2-bit capability for super low power consumption.

I think that developing Akida-specific models for Valeo, Mercedes and others is where a lot of our effort will be focussed. These will not be universal models. They will be more functionally biased.
Hi Diogenese, I believe you are the best person for this question. Are the VVDN edge boxes for research and development purposes, or can they be used by end users? Say I want to set up a face-recognition security system in my office: can I just buy an edge box, plug it into my camera system and it's ready to go, or do I still need to develop some software to interact with the edge box?
Thank you in advance if you can provide an answer.
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Hi Diogenese, I believe you are the best person for this question. Are the VVDN edge boxes for research and development purposes, or can they be used by end users? Say I want to set up a face-recognition security system in my office: can I just buy an edge box, plug it into my camera system and it's ready to go, or do I still need to develop some software to interact with the edge box?
Thank you in advance if you can provide an answer.
Hi KK,

The Edge Boxes are definitely suitable for end use. Of course they can also be used for R&D, but they are the real thing.

They come with pre-made models, but you can also develop your own model library or adapt the pre-made ones using on-chip learning.
 
  • Like
  • Love
  • Fire
Reactions: 25 users

rgupta

Regular
I don't think KAIST has anything to do with us, personally.

Process size doesn't mean anything; it's just a good, proven node that doesn't cost as much as the smaller ones (nobody is going to produce "research chips" in 7 nm, for example).

I don't think their chip is pure digital either (I think @Diogenese looked into it?)..

No surprise that Samsung is involved, and I believe they have a history with us, but that is one of their main foundries.

Any input from BrainChip is inspired, in my opinion.

And I'd love for Samsung to be on board.
The reason I try to relate them is not only the 28 nm chip, but also a report by BrainChip comparing Akida with GPT-2. But the biggest surprise to me is that they said their chip can do transformers; Akida 1000 was not able to perform that function, but Akida 2.0 can do the same.
Anyway, in the end a lot of wires are entangled and only time will have all the answers. But one thing is for sure: there is a lot happening behind the scenes and still no concrete news.
 
  • Like
Reactions: 2 users
The reason I try to relate them is not only the 28 nm chip, but also a report by BrainChip comparing Akida with GPT-2. But the biggest surprise to me is that they said their chip can do transformers; Akida 1000 was not able to perform that function, but Akida 2.0 can do the same.
Anyway, in the end a lot of wires are entangled and only time will have all the answers. But one thing is for sure: there is a lot happening behind the scenes and still no concrete news.
It definitely is a tangled web out there, Rgupta..

That's what makes the dot-joining fun, frustrating and potentially fruitless, all at the same time..
 
  • Like
Reactions: 6 users

Used Google Translate from Korean to English.
Their SNN was also compared against GPT-2:

"Through this, it was possible to reduce the parameters of the GPT-2 giant model from 708 million to 191 million, and the parameters of the T5 model used for translation from 402 million to 76 million. As a result of this compression work, we succeeded in reducing the power consumed by loading language-model parameters from external memory by 70%. According to the researchers, the complementary transformer consumes 1/625th the power of the NVIDIA A100 GPU, while enabling high-speed operation of 0.4 seconds for language generation using the GPT-2 model and 0.2 seconds for language translation using the T5 model. Due to the parameter lightweighting, the perplexity of the language generation increased by 1.2 (lower means the language model has been learned better), but the researchers explained that it is at a level where people will not feel awkward when reading the generated sentences. In the future, the research team plans to expand the scope of neuromorphic computing to various application fields rather than limiting it to language models."
Sounds like a bit of a stretch now, saying it's running GPT-2, when it's actually a compressed model of it, with just over a quarter of the parameters...

Would be more accurate, I think, to say it was running an SLM "derived" from GPT-2..
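The arithmetic behind the "just over a quarter" point, straight from the figures quoted in the article:

```python
# Compression ratios from the quoted KAIST article (figures as quoted there).
gpt2_before, gpt2_after = 708_000_000, 191_000_000
t5_before, t5_after = 402_000_000, 76_000_000

print(f"GPT-2 kept {gpt2_after / gpt2_before:.0%} of its parameters")  # 27%
print(f"T5 kept {t5_after / t5_before:.0%} of its parameters")         # 19%
```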
 
  • Like
Reactions: 2 users

rgupta

Regular
It definitely is a tangled web out there, Rgupta..

That's what makes the dot-joining fun, frustrating and potentially fruitless, all at the same time..
That is a screenshot of Dr Tony Lewis at the AGM, where Akida was compared with GPT-2 and it was claimed our model can be 5,000 times less power-hungry and can run on the edge.
And then team BrainChip raised the last CR just to work on LLMs for neuromorphic chips.
I don't know why everyone is comparing with GPT-2 while we already have GPT-4 and beyond.
But again, for dot-joining, yes, Dr Tony Lewis was saying almost the same thing.
 

Attachments

  • Screenshot_20240904-223843.png
  • Like
Reactions: 1 user
That is a screenshot of Dr Tony Lewis at the AGM, where Akida was compared with GPT-2 and it was claimed our model can be 5,000 times less power-hungry and can run on the edge.
And then team BrainChip raised the last CR just to work on LLMs for neuromorphic chips.
I don't know why everyone is comparing with GPT-2 while we already have GPT-4 and beyond.
But again, for dot-joining, yes, Dr Tony Lewis was saying almost the same thing.
It's to do with the number of parameters, Rgupta.

ChatGPT-4 is huge..


I was always told size didn't matter 😔..

20240904_221756.jpg



No chance at all of running anything like ChatGPT-4 outside of a data centre with current technology.

Interestingly, this shows over twice the parameters for the full GPT-2 that KAIST claims it has (and they are running with just over a quarter of that).
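Some back-of-envelope numbers on why parameter count decides what runs outside a data centre (my own arithmetic; the ~1-trillion figure for GPT-4-class models is reported, not confirmed): just holding the weights of such a model needs hundreds of GiB even at aggressive quantization, while full GPT-2 fits in a few.

```python
# Memory needed just to hold the model weights (my back-of-envelope figures;
# the ~1T parameter count for GPT-4-class models is reported, not official).
def weight_gib(n_params, bits_per_param):
    return n_params * bits_per_param / 8 / 2**30   # bits -> bytes -> GiB

for name, n in [("GPT-2 (1.5B params)", 1.5e9),
                ("GPT-4-class (~1T params, reported)", 1e12)]:
    print(f"{name}: {weight_gib(n, 16):.1f} GiB at fp16, "
          f"{weight_gib(n, 4):.1f} GiB at 4-bit")
```

Even at 4-bit, a trillion-parameter model is several hundred GiB of weights alone, before activations or KV caches, which is the gap edge hardware can't bridge with current technology.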
 
Last edited:
  • Like
  • Haha
Reactions: 8 users

stockduck

Regular

"....
Being a business-centric device, Dell offers several security features on the Latitude 5455 including SafeBIOS, IR biometric login, optional fingerprint reader in the power button, camera privacy shutter, and optional security hardware authentication bundles.

Customers can also configure additional hardware security measures including a chassis intrusion switch, hard drive wipes, and tamper-evident packaging along with software measures such as Crowdstrike (ahem!), Secureworks, and Netskope.

The Latitude 5455 uses Qualcomm's FastConnect 7800 Wi-Fi 7 WLAN card with Bluetooth 5.4 for fast wireless networking. Additionally, users also get 2x USB4 Type-C 40 Gbps ports, 1x USB 3.2 Gen1 Type-A with Power Share, a microSD card reader, and a combo audio jack."


Well,..... that sounds interesting to me... what the hell is Dell doing?:rolleyes:
But there is no specific pointer to BrainChip IP in it, is there?

Here a description from a German-language source, translated....:


"...Dell is launching a new notebook with a computing chip from Qualcomm. With it, the company is likely aiming at customers who are looking for a notebook for more general tasks and who would like to benefit from a long battery life.
....
Users should benefit from various features, such as a search function with natural-language input or special suppression of disturbing ambient noise.
....."

Translated with Google Translate.
 
  • Like
  • Fire
Reactions: 5 users

rgupta

Regular
It's to do with the number of parameters, Rgupta.

ChatGPT-4 is huge..


I was always told size didn't matter 😔..

View attachment 68940


No chance at all of running anything like ChatGPT-4 outside of a data centre with current technology.

Interestingly, this shows over twice the parameters for the full GPT-2 that KAIST claims it has (and they are running with just over a quarter of that).
There is no doubt it is almost impossible to run a 1-trillion-parameter model on the edge. But do we need all trillion parameters all the time? It is just like having 10 wardrobes full of clothes: I can only wear a few at a given time. Which raises the question of what is important in a given situation, in a given industry, etc.
There are a lot of instances where we may not need the full ChatGPT, and that is why industry-specific small language models are expected to be the future.
 
  • Like
Reactions: 5 users