BRN Discussion Ongoing

Frangipani

Top 20
And calling GPT-2 an LLM isn't quite accurate, in the context that even GPT-3 has more than ten times the parameters.

It is accurate, nevertheless, as all members of OpenAI's GPT family qualify as LLMs.



1519ECF3-6DCA-44F6-BB3A-D3C4D6BE4C74.jpeg


D19D2D09-0931-4E59-8102-826114D2122A.jpeg
 
  • Like
  • Fire
Reactions: 9 users

BrainShit

Regular
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on Earth and beyond. Stay tuned as the story continues to unfold!
 

Attachments

  • Screenshot_20240306_180954_X.jpg
    534 KB · Views: 146
  • Like
  • Love
  • Fire
Reactions: 49 users
Now we know why Peter retired.

Hopkins engineers collaborate with ChatGPT4 to design brain-inspired chips | Hub (jhu.edu)

https://hub.jhu.edu/2024/03/04/chatgpt4-brain-inspired-chips/

HOPKINS ENGINEERS COLLABORATE WITH CHATGPT4 TO DESIGN BRAIN-INSPIRED CHIPS

Systems could power energy-efficient, real-time machine intelligence for next-generation autonomous vehicles, robots

March 3, 2024



Through step-by-step prompts to ChatGPT4, starting with mimicking a single biological neuron and then linking more to form a network, they generated a full chip design that could be fabricated.
I actually sent an email to the Company a couple of months ago (after someone here posted an article about a company that had developed an enhanced electric motor design through the use of Generative A.I.), asking whether they were making use of Generative A.I. to advance developments and problem-solve (fearing that pride might make them avoid, or not consider, this).

The reply was along the lines that they were using whatever "tools" were available.

One of the things that concerns me is that Generative A.I., when appropriately directed at a specific task, can achieve things that our brightest minds may not have actually "thought" of.

AlphaGo, the A.I. developed to play "Go" against human players (considered one of the hardest and oldest board games), came up with game strategies that humans hadn't "come up with" in the ~4,000 years the game had been played.
(It has been said that there are more possible combinations of moves than atoms in the Universe).

If you look at the way Sora (OpenAI) is able to generate visuals from text, it is just incredible.
Imagine future versions, where whole books by your favourite author can be reproduced as a film, trimmed to whatever length you like.
Want to watch it again, but with maybe a "darker" theme? Just prompt it.
In fact, just redoing it is likely to produce a different result (watch the same film again, but not quite..).
Turn the Texas Chainsaw Massacre into a children's fairytale? Easy.. (well, maybe 🤔..)

It doesn't surprise me that these tools can produce these kinds of things; even patents can be easily circumvented.

Sorry, that's my rant for this early morning..

That's why sealing deals is so important now.

The Technological Clock hands are spinning faster than ever before.
At some point, the hands will become irrelevant.
 
Last edited:
  • Like
  • Thinking
  • Love
Reactions: 18 users
It is accurate, nevertheless, as all members of OpenAI's GPT family qualify as LLMs.



View attachment 58614

View attachment 58615
Fair enough, but I still disagree 😛

The definition of something with a changing relative context cannot remain constant, in my opinion.

Not even sure if that made sense, but it's past even my bedtime...
 
  • Haha
  • Like
Reactions: 5 users

Frangipani

Top 20
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on Earth and beyond. Stay tuned as the story continues to unfold!

Hi BrainShit,

I am just reposting the picture in the X post you shared as a thumbnail image, as people who don’t click on the attachment may not even notice that the Brainchip synapse symbol in it is actually stylised as a satellite! Love it! 😍

Whoever thought of this, and also of the catchy phrase “The sky is no longer the limit” (already featured in the rocket liftoff images posted shortly after the Transporter-10 rideshare mission launch), deserves an extra round of applause! Awesome! 👏 👏 👏

C41AC988-A020-46A4-91E3-9F125442AE5D.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 69 users

IloveLamp

Top 20
This is a hit and run too. Fact Finder just put up a post on HC at 6:02 PM this evening (post #72778782) regarding Ericsson 6G zero-energy and Akida.

I am having problems getting links and images up on here at present, hence no charts lately, and being time-poor means someone else needs to go over to HC and get this very interesting post copied and put up here.
Hi McHale,

I had the same trouble a while ago. Have you tried pressing this icon?

1000013955.jpg
 
  • Like
Reactions: 3 users

TECH

Regular
BrainChip's #neuromorphic tech in Lower Orbit! Akida's journey unfolds, bringing groundbreaking possibilities on Earth and beyond. Stay tuned as the story continues to unfold!

Is that a UAP? Oh no, sorry, it's Peter's Spiking Neuron, one of BrainChip's trademarks.... great promo article.

Tech.
 
  • Like
  • Love
Reactions: 14 users

IloveLamp

Top 20
With no news about a NEW IP agreement, is it possible that if a company did take up
an IP agreement but specified an NDA, this could be happening❓
I believe that the answer to your question is yes and no.....

It could be happening in the early stages, but non-disclosure becomes illegal at the point where the company "knows" it's material in nature..... (my interpretation, DYOR) ......

.....a lot of grey in there to delay announcing things to give customers the advantage they so desperately seek.....

I also would not be surprised to learn that the U.S. government had a hand in keeping us under wraps just a little longer too, considering our military and government links......

Pure speculation,....... but is it impossible?
 
Last edited:
  • Like
  • Thinking
  • Fire
Reactions: 20 users

Andy38

The hope of potential generational wealth is real
Hi Tech.
Just wondering if you were up for a catch-up, as I am currently staying in Mangonui (pub) for a few days on the way up to 90 Mile Beach next week.
I haven't met anyone who is into Brainchip other than friends and family, whom I coerced into joining me.
I do like your thoughts and input, very positive.

Cheers
Great spot!
 

IMG_1286.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 42 users

mcm

Regular

View attachment 58622
Anyone have an idea as to who the customer might be that Sean is meeting with in Australia?
 
  • Like
  • Haha
Reactions: 4 users

AARONASX

Holding onto what I've got
  • Like
  • Thinking
  • Fire
Reactions: 7 users

Boab

I wish I could paint like Vincent
Anyone have an idea as to who the customer might be that Sean is meeting with in Australia?
I'm buying him lunch and he's going to tell me everything.
No doubt about it.
 
  • Haha
  • Like
Reactions: 28 users

mcm

Regular
  • Like
  • Fire
Reactions: 7 users

JDelekto

Regular
Exactly.
And finally answer your legitimate question whether or not Sean Hehir had indeed told select participants of that by-invitation-only shareholder meeting in Sydney in November that Brainchip had succeeded in developing and running a number of LLMs on Akida 2.0 (as claimed by a poster here on TSE), and if true, why the remaining shareholders (which constitute the vast majority) still have not been informed about this amazing breakthrough via official channels four months later.

I also wonder why none of the other attendees of that Sydney gathering have so far shared with us their recollection of what Sean Hehir actually said. 🤔

There is something that I would like to point out about LLMs. LLMs can have large memory requirements, depending upon the model quality, the number of parameters, and quantization. For example, a 70-billion parameter CodeLlama Instruct model (useful for writing computer code) with an 8-bit quantization requires about 70+ GB of RAM. You can still get decent results and save space with a lower quantization but it suffers a loss in quality if the quantization is too low.

That same CodeLlama Instruct model with 4-bit quantization is about 40 GB of RAM. I have a desktop with 64 GB, and I can partially offload the model to 24 GB of GPU RAM on an Nvidia RTX 3080 card. The more I get off the CPU and onto the video card (which does all the math faster), the faster my chat responses. Unless one plays a lot of resource-hungry video games or works with AI, most consumers will not have desktop PCs with that much memory to run those models.

Now, given a base system with a CPU and enough memory to hold a much smaller model (like 7 billion parameters instead of 70) and quantized down to about 4 bits, you're looking at slightly under 4 GB for a model. I think this is something an Edge device can handle and I think that may be useful depending on the use case.

So I think that technically, Akida could run an LLM if it were properly "massaged" (I believe I ran across a GitHub repo where an individual was attempting to run an LLM with a Spiking Neural Network), but running an LLM the size and quality of GPT-4 or Claude-3 would not be practical.
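The memory figures above follow from a simple back-of-the-envelope formula: parameter count times bits per weight, divided by 8 to get bytes, plus some runtime overhead. Here is a minimal sketch; the 10% overhead factor and the function name are my own assumptions, purely for illustration, and real frameworks add varying overhead for activations and KV cache:

```python
def model_memory_gb(params_billions: float, bits: int, overhead: float = 1.1) -> float:
    """Rough RAM estimate for holding a model's quantized weights.

    params_billions: parameter count in billions
    bits:            quantization width per weight (e.g. 4 or 8)
    overhead:        assumed fudge factor for runtime structures
    """
    weight_bytes = params_billions * 1e9 * bits / 8
    return weight_bytes * overhead / 1e9

# Figures roughly matching the post:
print(f"70B @ 8-bit: ~{model_memory_gb(70, 8):.0f} GB")  # "70+ GB"
print(f"70B @ 4-bit: ~{model_memory_gb(70, 4):.0f} GB")  # "about 40 GB"
print(f" 7B @ 4-bit: ~{model_memory_gb(7, 4):.1f} GB")   # "slightly under 4 GB"
```

This is why halving the bit width roughly halves the footprint, and why a 7B model at 4 bits fits in Edge-class memory budgets while a 70B model does not.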
 
  • Like
  • Fire
  • Love
Reactions: 24 users

overpup

Regular
Fair enough, but I still disagree 😛

The definition of something with a changing relative context cannot remain constant, in my opinion.

Not even sure if that made sense, but it's past even my bedtime...
That old saying attributed to Einstein: "Insanity is doing the same thing over and over and expecting a different result"...
You can tell Albert never worked with computers!
 
  • Like
  • Haha
Reactions: 5 users

IloveLamp

Top 20
1000013960.jpg
1000013957.jpg
1000013965.jpg
1000013962.jpg
 
  • Like
  • Fire
  • Love
Reactions: 41 users

Diogenese

Top 20
There is something that I would like to point out about LLMs. LLMs can have large memory requirements, depending upon the model quality, the number of parameters, and quantization. For example, a 70-billion parameter CodeLlama Instruct model (useful for writing computer code) with an 8-bit quantization requires about 70+ GB of RAM. You can still get decent results and save space with a lower quantization but it suffers a loss in quality if the quantization is too low.

That same CodeLlama Instruct model with 4-bit quantization is about 40 GB of RAM. I have a desktop with 64 GB, and I can partially offload the model to 24 GB of GPU RAM on an Nvidia RTX 3080 card. The more I get off the CPU and onto the video card (which does all the math faster), the faster my chat responses. Unless one plays a lot of resource-hungry video games or works with AI, most consumers will not have desktop PCs with that much memory to run those models.

Now, given a base system with a CPU and enough memory to hold a much smaller model (like 7 billion parameters instead of 70) and quantized down to about 4 bits, you're looking at slightly under 4 GB for a model. I think this is something an Edge device can handle and I think that may be useful depending on the use case.

So I think that technically, Akida could run an LLM if it were properly "massaged" (I believe I ran across a GitHub repo where an individual was attempting to run an LLM with a Spiking Neural Network), but running an LLM the size and quality of GPT-4 or Claude-3 would not be practical.

Hi JD,

That's some impressive technowizardry.

As you know, PvdM's "4 Bits are enough" white paper discusses the advantages of 4-bit quantization.

https://brainchip.com/4-bits-are-enough/
...
4-bit network resolution is not unique. Brainchip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others have recently published papers on its advantages, such as IBM, Stanford University and MIT.

Akida is based on a neuromorphic, event-based, fully digital design with additional convolutional features. The combination of spiking, event-based neurons, and convolutional functions is unique. It offers many advantages, including on-chip learning, small size, sparsity, and power consumption in the microwatt/milliwatt ranges. The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1, 2, or 4-bit synapses. Akida’s extremely efficient event-based neural processor IP is commercially available as a device (AKD1000) and as an IP offering that can be integrated into partner System on Chips (SoC). The hardware can be configured through the MetaTF software, integrated into TensorFlow layers equating up to 5 million filters, thereby simplifying model development, tuning and optimization through popular development platforms like TensorFlow/Keras and Edge Impulse. There are a fast-growing number of models available through the Akida model zoo and the Brainchip ecosystem.

To dive a little bit deeper into the value of 4-bit, in its 2020 NeurIPS paper IBM described the various pieces that are already present and how they come together. They prove the readiness and the benefit through several experiments simulating 4-bit training for a variety of deep-learning models in computer vision, speech, and natural language processing. The results show a minimal loss of accuracy in the models’ overall performance compared with 16-bit deep learning. The results are also more than seven times faster and seven times more energy efficient. And Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. “This advancement opens the door for training in resource-constrained environments,” he says. It would not necessarily make new applications possible, but it would make existing ones faster and less battery-draining “by a good margin.”

While I have some understanding of the visual aspect, I find the NLP, covering both speech and text, more perplexing mainly because of the need for context or "attention", but, as usual, Prof Wiki has some useful background:

https://en.wikipedia.org/wiki/Natural_language_processing
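For what it's worth, the kind of low-bit weight scheme the white paper discusses can be illustrated with generic symmetric linear quantization. This is a toy sketch under my own assumptions, not BrainChip's actual implementation (Akida's internals aren't public at this level of detail):

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Symmetric linear quantization of weights to 4-bit integers in [-8, 7]."""
    scale = np.max(np.abs(w)) / 7.0  # map the largest magnitude onto +/-7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from 4-bit codes."""
    return q.astype(np.float32) * scale

# Toy weight vector: 4-bit storage is 8x smaller than float32, and the
# reconstruction error stays within about half a quantization step.
w = np.array([0.02, -0.51, 0.33, 0.97, -0.98], dtype=np.float32)
q, s = quantize_4bit(w)
err = np.max(np.abs(dequantize(q, s) - w))
print(q, f"max error = {err:.3f}")
```

The trade-off is exactly the one IBM's results point at: a small, bounded reconstruction error in exchange for a large cut in memory and compute per weight.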
 
  • Like
  • Love
  • Fire
Reactions: 28 users

McHale

Regular
  • Like
Reactions: 2 users