BRN Discussion Ongoing

IloveLamp

Top 20
1000013971.jpg
1000013974.jpg



 
  • Like
  • Thinking
  • Fire
Reactions: 35 users
As some of us, myself included, said before getting absolutely bashed in this forum for stating simple facts:
If we're not in Samsung's 2024 product roadmap, then we're screwed.
No more lead for us. The biggest company that designs and uses its own chipsets is doing neuromorphic now.
Can't wait for the comments. Is someone gonna write a three-pager hounding and ridiculing me again, or will it just be reported?
Samsung is certainly a big spicy cabbage, but some prefer fruit...

I think you're being a bit dramatic, saying we're screwed if we're not in with them.

Yes, they are a huge player, but they don't control or dominate world product markets.

For example..

"Apple has overtaken Samsung as the world's top smartphone seller, ending the Korean tech firm's 12-year run as industry leader. The iPhone took the top spot in 2023 with 234.6m units sold, according to figures from the International Data Corporation (IDC), overtaking Samsung's 226.6m units"
17 Jan 2024


Are you saying that if we got in with Apple, or some other big players, but not Samsung, we may as well pack up and go home?

Of course there are no guarantees anywhere, but your arguments don't make sense, in my opinion.

I'm not saying you can't have one...
 
Last edited:
  • Like
  • Fire
Reactions: 23 users
Errrrr.....what :unsure:


Pi CM4 GPS 5G BrainChip Prototype

Posted 2 weeks ago

Worldwide
I need a PCB prototype for a Raspberry Pi CM4 device with a BrainChip PCIe board, a Pi HQ camera, 5G and GPS. I want a working prototype with a cable-mounted camera, a front screen and a back screen: a dashcam with an external cable camera.

See below email.

Hello Mike,

Thank you for your interest in our technology.

As requested, here is some information about our Akida IP.

Below are links to some of our demonstrations. You can find more on YouTube; just search for BrainChip.



Our Akida IP is offered under a licensing model plus a per-component royalty.



BrainChip takes a very different approach to AI computation. We lighten the compute load by processing only event data and by quantizing to the lowest bit width that maintains model accuracy. Because compute happens on-chip (within our IP) using our integrated memory, latency is very low. Our approach lends itself very well to applications at the edge, for example with the sensor on battery-operated platforms: Akida IP draws only µW-mW (depending on the application) for inference at the edge. In addition, our technology offers edge learning (independent of the cloud), which in turn keeps data secure.
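To make the "only computing event data" idea concrete, here is a minimal NumPy sketch of the two tricks the email describes: skipping work for zero activations and quantizing to a low bit width. It is purely illustrative, not BrainChip's actual implementation, and every name in it is made up.

```python
import numpy as np

def quantize(x, bits=4):
    """Uniform quantization to a low bit width (4-bit here) -- the
    'lowest bit width that maintains accuracy' idea from the email."""
    levels = 2 ** bits - 1
    scale = x.max() if x.max() > 0 else 1.0
    return np.round(x / scale * levels) / levels * scale

def event_based_dense(activations, weights):
    """Dense layer that only computes on nonzero inputs (events).

    Mathematically identical to activations @ weights, but the loop
    body runs only for nonzero entries: the sparser the input, the
    less work, which is where event-based hardware saves power."""
    out = np.zeros(weights.shape[1])
    for i in np.flatnonzero(activations):
        out += activations[i] * weights[i]
    return out

rng = np.random.default_rng(0)
x = quantize(np.maximum(rng.standard_normal(128), 0))  # ReLU-like, ~50% zeros
w = rng.standard_normal((128, 32))
assert np.allclose(event_based_dense(x, w), x @ w)
```

Because the loop only touches nonzero entries, the work scales with the number of events rather than the layer size, which is the source of the power savings claimed above.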



For AI compute, some system-level advantages and features of our IP are as follows:

Reduced power consumption,
Increased performance,
Reduced system-level, BOM and recurring costs,
Reduced firmware complexity,
Future-proofed designs,
Security at the edge,
Learning at the edge, in the field,
And an increased feature set and capability.
Models are developed via our free MetaTF platform:

https://doc.brainchipinc.com/installation.html

www.Brainchip.com/developer

Overview — Akida Examples documentation (brainchipinc.com)
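For anyone wondering what developing a model "via our free MetaTF platform" looks like, the docs linked above describe roughly this flow: take a trained Keras model, quantize it, then convert it to an Akida-compatible model. A hedged sketch follows, based on the cnn2snn package in those docs; exact function and argument names vary between MetaTF releases, and the model file name is a placeholder.

```python
# Sketch of the MetaTF flow from the docs above (cnn2snn-era API);
# names differ across MetaTF versions -- treat as pseudocode.
from tensorflow import keras
from cnn2snn import quantize, convert

# 1. Start from an ordinary trained Keras model (placeholder file name)
model = keras.models.load_model("my_edge_model.h5")

# 2. Quantize weights and activations to low bit widths
model_q = quantize(model, weight_quantization=4, activ_quantization=4)

# 3. Convert the quantized network into an Akida-compatible model
model_akida = convert(model_q)
model_akida.summary()
```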



We also offer development platforms to help bring up your product, develop models and validate our IP: Akida Enablement Platforms - BrainChip. You can purchase our PCIe dev kits at Welcome to BrainChip (brainchipinc.com).



Links to some of our demonstrations (all with learning at the edge) are below:

Wine Tasting




BrainChip demonstrates Taste Sensing with Akida - YouTube



Edge Based Learning




Keyword Spotting


Visual Wake & Facial Recognition



Smart Automotive In Cabin Experience

Edge Based Learning (High Speed Environment) Racetrack object recognition at the edge

Regression Analysis with Vibration Sensors

BrainChip demonstrates Akida Vibrational Analysis Tactile Sensing – YouTube

Gesture Control

BrainChip + Nviso Emotion Detection Demo

BrainChip Demonstrates Gesture Recognition with Prophesee EV4 Development Camera - YouTube

BrainChip Demonstrates Drone Voice Keyword Spotting - YouTube



TENNs: A New Approach to Streaming and Sequential Data - YouTube



Please let me know if you require any additional information.

We look forward to meeting with you again soon.
 
  • Like
  • Love
  • Thinking
Reactions: 28 users
Haven't bothered signing up for a free trial as Gen 2 is old news, but one thing caught my eye, highlighted.

Did I read earlier that someone asked about Sean being in, or coming to, Oz for a client? Is that right?

Wonder if it has anything to do with the thought below in the article :unsure:


BrainChip Adds Temporal Networks​

Author: Bryon Moyer

BrainChip Adds Temporal Networks


Akida 2, BrainChip’s latest intellectual property (IP) offering, adds time as a component of convolution, allowing activity identification in video streams. It also accelerates the Transformer encoder block in hardware, speeding models employing that block.

BrainChip’s artificial intelligence (AI) processors employ an event-based architecture that responds only to nonzero activations on internal layers, reducing the amount of required computation. It’s a form of neuromorphic computing that the company first implemented in its Akida 1 IP and coprocessor chip. It has now positioned the original chip as a reference chip, sold in low quantities for evaluation ahead of possible IP licensing; no chip is planned for Akida 2.

The second generation brings four changes to the original architecture; in addition to temporal networks and Transformer encoders, it adds the INT8 data type and the ability to handle long-range forward skip connections. Akida 1 quantized aggressively to INT4 and below, but INT8 has become the most common edge inference data type; Akida 2 acknowledges that.
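The "time as a component of convolution" idea from the opening paragraph can be sketched in a few lines: the kernel gains taps over past frames, so the output at each time step depends on a short history rather than a single image, which is what enables activity identification in video streams. A generic causal temporal convolution for illustration, not Akida 2's actual kernel:

```python
import numpy as np

def causal_temporal_conv(frames, kernel):
    """Convolve a feature stream along time, using only past frames.

    frames: (T, C) array of per-frame feature vectors.
    kernel: (K, C) array of temporal taps per channel.
    Output at time t mixes frames t, t-1, ..., t-K+1, which lets a
    network recognize activity/motion instead of still images."""
    T, C = frames.shape
    K = kernel.shape[0]
    out = np.zeros((T, C))
    for t in range(T):
        for k in range(min(K, t + 1)):
            out[t] += kernel[k] * frames[t - k]
    return out

stream = np.random.default_rng(1).standard_normal((16, 8))  # 16 steps, 8 channels
taps = np.ones((3, 8)) / 3                                  # 3-tap moving average
smoothed = causal_temporal_conv(stream, taps)
```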

Since its Akida 1 launch, the company has signed Megachips and Renesas as IP customers. The company says many other prospects have evaluated the reference silicon (including Circle8 Clean Technologies for an application to improve recycling sorting); licensing decisions are pending for those companies. Given its cash and revenue position, it must boost sales to better balance its cash burn.

The company faces no event-based IP competitors, and it claims to provide lower-power inference than its standard-network IP competition. But its uniqueness makes its tool and ecosystem development critical to ensuring that its customers can implement networks without having to be aware of the unique underlying technology.
 
  • Like
  • Fire
  • Love
Reactions: 40 users
As some of us, myself included, said before getting absolutely bashed in this forum for stating simple facts...
It was peaceful here for a while, so why don’t you go back to where you belong

1709836127906.gif
 
  • Haha
  • Like
  • Fire
Reactions: 14 users

Tothemoon24

Top 20
IMG_8567.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 38 users

cosors

👀
As some of us, myself included, said before getting absolutely bashed in this forum for stating simple facts...
baby-facepalm.gif

Self-esteem
 
Last edited:
  • Haha
  • Like
Reactions: 16 users

cosors

👀
Just a test. I didn't realise that I could still reply to someone on ignore.
 
  • Haha
  • Like
Reactions: 10 users
https://www.cnet.com/tech/mobile/on...-way-of-experiencing-artificial-intelligence/

An interesting 7 min read. We're definitely amongst the big players.
Probably even powering most of the up-and-coming AI features 😁

On-Device AI Is a Whole New Way of Experiencing Artificial Intelligence​

At MWC 2024, I saw firsthand how AI is fundamentally reshaping current and future devices, from phones to robots.
At Mobile World Congress last week, the show floor was abuzz with AI. It was the same at CES two months earlier: The biggest theme of the biggest consumer tech show was that AI suddenly seemed to be part of every single product. But the hype can make it hard to know what we should be excited about, what we should fear and what we should dismiss as a fad.


"Omnipresent ... but also overwhelming." That's how CCS Insight Chief Analyst Ben Wood described the MWC moment. "For many attendees, I felt it was rapidly reaching levels that risked causing AI fatigue."


But there was a positive side as well. Said Wood: "The most impressive demos were from companies showing the benefits AI could offer rather than just describing a service or a product as being AI-ready."

At last year's MWC, the popular generative AI tool ChatGPT was only around 3 months old, and on-device AI was mostly a twinkle in the eye of the tech companies present. This year, on-device was a reality, and attendees — like me — could experience it on the show floor.

I got to experience several demos featuring AI on devices, and the best of them brought artificial intelligence to life in ways I'd never seen before. In many cases, I could see that products we're already familiar with — from smartphones to cars — are getting a new lease on life thanks to AI, with some offerings using the technology in unique ways to set themselves apart from rivals. In other cases, new types of products, like AI-focused wearables and robots, are emerging that have the potential to displace what we know and love.




Above all, it was clear that on-device AI isn't a technology for tomorrow's world. It's available right here, right now. And it could impact your decision as to what piece of technology you buy next.

The age of AI phones has arrived​

One of my biggest takeaways from MWC was that while all tech companies now have a raft of AI tools at their disposal, most are choosing to deploy them in different ways.

Take smartphones. Samsung has developed Gauss, its own large language model (the tech that underlies AI chatbots), to focus on translation on the Galaxy S24, whereas Honor uses AI to include eye tracking on its newly unveiled Magic 6 Pro — which I got to try out at its booth. Oppo and Xiaomi, meanwhile, both have on-device generative AI that they're applying to phone cameras and photo editing tools.



It goes to show that we're entering a new period of experimentation as tech companies figure out what AI can do, and crucially how it can improve our experience of using their products.

Samsung's Y.J. Kim, an executive vice president at the company and head of its language AI team, told reporters at an MWC roundtable that Samsung thought deeply about what sort of AI tools it wanted to deliver to users that would elevate the Galaxy S24 above the basic smartphone experience we've come to expect. "We have to make sure that customers will see some tangible benefits from their day-to-day use of the product or technologies that we develop," he said.

Conversely, there's also some crossover in AI tools between devices because of the partners these phone-makers share. As the maker of Android, the operating system used by almost all non-Apple phones, Google is experimenting heavily with AI features. These will be available across phones made by Samsung, Xiaomi, Oppo, Honor and a host of others.


Google used its presence at MWC this year to talk about some of its recently introduced AI features, like Circle to Search, a visual search tool that lets you draw a circle around something you see on screen to search for it.

The other, less visible partner that phone-makers have in common is chipmaker Qualcomm, whose chips were in an entire spectrum of devices at MWC this year. Its Snapdragon 8 Gen 3 chip, announced late in 2023, can be found in many of the phones that are now running on-device generative AI.


It's been only a year since Qualcomm first showed a basic demo of what generative AI on a phone might look like. Now phones packing this technology are on sale, said Ziad Asghar, who leads the company's AI product roadmap.

"From our perspective, we are the enablers," said Asghar. "Each and every one of our partners can choose to commercialize with unique experiences that they think are more important for their end consumer."

At MWC, the company launched its AI Hub, which gives developers access to 75 plug-and-play generative AI models that they can pick and choose from to apply to their products. That number will grow, and it means any company making devices with Qualcomm chips will be able to add all sorts of AI features.

As well as deciding which AI features to develop, one of the next big challenges phone-makers will have to tackle is how to get AI onto their cheaper devices. For now AI is primarily reserved for the top-end phones — the Galaxy S24s of the world — but over time this will change. There will be a trickle-down effect where this tech ends up on a wider range of a company's devices.

There will naturally be a difference in quality and speed between what the most expensive and the cheapest devices can do, said Asghar, as is currently the case with a phone's camera tech.

AI is changing how we interact with our devices​

AI enhancements to our phones are all well and good, but already we're seeing artificial intelligence being used in ways that have the power to totally change how we interact with our devices — as well as potentially changing what devices we choose to own.

In addition to enabling companies to bring AI to their existing device lines, Qualcomm's tech is powering concept phones like the T Phone, created by Deutsche Telekom and Brain.AI. Together, these two have tapped Qualcomm's chipset to totally reimagine your phone's interface, creating an appless experience that responds to you based on your needs and the task you're trying to accomplish and generates, on the fly, whatever you see on screen as you go.

In the demo I saw at MWC, AI showed it has the potential to put an end to the days of constant app-swapping as you're trying to make a plan or complete a task. "It really changes the way we interface with devices and becomes a lot more natural," said Asghar.

But, he said, that's only the beginning. He'd like to see the same concept applied to mixed reality glasses. He sees the big benefit of the AI in allowing new inputs through gesture, voice and vision that don't necessarily rely on us tapping on a screen. "Technology is much more interesting when it's not really in your face, but it's solving the problems for you in an almost invisible manner," he said.

His words reminded me of a moment in the MWC keynote presentation when Google DeepMind CEO Demis Hassabis asked an important question. "In five-plus years time, is the phone even really going to be the perfect form factor?" said Hassabis. "There's all sorts of amazing things to be invented."

As we saw at CES with the Rabbit R1 and at MWC with the Humane AI Pin, these things are starting to become a reality. In my demo with the AI Pin — a wearable device with no screen that you interact with through voice and touch — it was clear to me that AI is creating space for experimentation. It's allowing us to ask what may succeed the phone as the dominant piece of technology in our lives.
It's also opening up new possibilities for tech that's been around awhile but for whatever reason hasn't quite struck a chord with consumers and found success outside of niche use cases.

Many of us have now played around with generative AI chatbots such as ChatGPT, and we're increasingly growing familiar with the idea of AI assistants. One company, Integrit from South Korea, brought a robot to the show that demonstrated how we may interact with these services in public settings, such as hotels or stores. Its AI and robotics platform, Stella AI, features a large, pebble-shaped display on a robotic arm that can swivel to address you directly.

Where this differs from previous robots I've encountered in customer service settings, such as the iconic Pepper, is that Stella is integrated with the latest AI models, including OpenAI's GPT-4 and Meta's Llama. This means it's capable of having sophisticated conversations with people in many different languages.

Rather than featuring a humanoid robot face like Pepper does, Stella uses generative AI to present a photorealistic human on its display. It's entirely possible that people will feel more comfortable interacting with a human, even one that isn't real, than a humanoid robot, but it feels very early to know this for sure.

What is clear is that this is just the beginning. This is the first generation of devices to really tap into the power of generative and interactive AI, and the floodgates are now well and truly open.

"I think we'll look back at MWC 2024 as being a foundational year for AI on connected devices," said Wood, the CCS Insight analyst. "All the pieces of the jigsaw are falling into place to enable developers to start innovating around AI to deliver new experiences which will make our interactions with smartphones and PCs more intuitive."

If this is the beginning, I'm intrigued to check back a year from now to see how AI continues to change our devices. Hype aside, there's a lot already happening to be excited about.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.
 
  • Like
  • Fire
  • Love
Reactions: 22 users

IloveLamp

Top 20
https://www.cnet.com/tech/mobile/on...-way-of-experiencing-artificial-intelligence/

An interesting 7 min read. We're definitely amongst the big players.
Probably even powering most of the up-and-coming AI features 😁

On-Device AI Is a Whole New Way of Experiencing Artificial Intelligence

Yep, great post @luvlifetravel . 2024 will be our year imo......is everybody ready?


1000013986.jpg
 
  • Like
  • Love
  • Fire
Reactions: 21 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
As some of us, myself included, said before getting absolutely bashed in this forum for stating simple facts...

Hi @DerAktienDude,

From the articles I've read so far, it would appear that KAIST are using neuromorphic technology, which doesn't necessarily mean they developed that component of it themselves.

Here are three examples.

1.

Screenshot 2024-03-08 at 9.09.36 am.png



2.
Screenshot 2024-03-08 at 9.18.16 am.png

3.

Screenshot 2024-03-08 at 9.35.02 am.png




And then there's also Tony Lewis's previous comment on Linkedin.

Screenshot 2024-03-08 at 9.16.28 am.png
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 36 users

Evermont

Stealth Mode
  • Like
  • Thinking
  • Love
Reactions: 15 users

AARONASX

Holding onto what I've got
From 9:45 they talk about the new KAIST chip, still calling it "the world's first".



This video was posted just over 20min ago.


It was hard to hear and fully understand what he was saying, but from what I took in, KAIST are using some kind of "compression techniques", and he mentioned that spiking neural networks have poor accuracy and are for toys only... obviously calling them poor is just them trying to make their alternative look better, and they don't hold the patents needed for the accuracy they claim is lacking. (Maybe jealous lol)

Competition is good for the market. Do they have a chip? Yes, great, whoop-de-do! Do they sell IP to a wider market and work to integrate their technology with others? Probably not yet! Do they have multiple foundries, partners, etc.? I don't think so.

IMO
 
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
1000013990.jpg
 
  • Like
  • Fire
Reactions: 7 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
It was hard to hear and fully understand what he was saying, but from what I took in, KAIST are using some kind of "compression techniques"...
Hi @AARONASX, I think he may be saying "efficient data compression techniques" which just sounds like techniques employed to make the LLM more compact.
 
Last edited:
  • Like
Reactions: 9 users

AARONASX

Holding onto what I've got
Hi @AARONASX, I think he may be saying "efficient data compression techniques" which just sounds like techniques employed to make the LLM more compact.
Thanks Bravo :-D
 
  • Like
Reactions: 8 users
Nice to see Circle8 have us on their site now.



Screenshot_2024-03-08-08-30-12-46_4641ebc0df1485bf6b47ebd018b5ee76.jpg
 
  • Like
  • Fire
  • Love
Reactions: 51 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Bummer! This article does seem to indicate that KAIST have developed a neuromorphic system different to ours.

Amongst other things it states: "To solve this problem, the research team developed a unique DNN-to-SNN equivalent conversion technique: precisely controlling the spike-firing threshold to increase the accuracy of converting an existing deep neural network (DNN) structure into a spiking neural network (SNN)." The team added, "We were able to achieve accuracy at the level of a deep neural network (DNN) while maintaining the energy efficiency of a spiking neural network (SNN)."



KAIST develops AI semiconductor that resembles the human brain... “The core of ultra-low power and high-speed technology”

Digital Daily Publication Date 2024-03-06 14:43:34

Sejong = Reporter Chae Seong-oh
Professor Hoejun Yoo of the Department of Electrical and Electronic Engineering at KAIST explaining complementary-transformer technology at the Sejong Government Complex on the 6th. [ⓒ Digital Daily]

[Digital Daily, Reporter Chae Seong-o] A research team at the Korea Advanced Institute of Science and Technology (KAIST) announced on the 6th that it has developed, for the first time in the world, an artificial intelligence (AI) semiconductor called the 'Complementary-Transformer' that can process large language models at ultra-high speed (0.4 seconds) while consuming ultra-low power (400 milliwatts). It was fabricated on Samsung Electronics' 28 nm process.

Professor Yoo Hoi-jun's research team at the KAIST PIM Semiconductor Research Center and the Artificial Intelligence (AI) Semiconductor Graduate School succeeded in running large language models (LLMs) such as GPT, which normally require large numbers of GPUs and around 250 watts of power, at ultra-low power on a small 4.5 mm AI semiconductor chip.

In particular, it implements transformer operations using a spiking neural network (SNN), a neuromorphic computing technology that mimics the operation of the human brain. The research, with Dr. Sang-yeop Kim as first author, was presented and demonstrated at the International Solid-State Circuits Conference (ISSCC) held in San Francisco from the 19th to the 23rd of last month.

Existing neuromorphic computing technology is less accurate than convolutional neural networks (CNNs) and has mainly been limited to simple image-classification tasks. The research team raised the accuracy of neuromorphic computing to CNN level and proposed a complementary deep neural network (C-DNN) that can be applied to a variety of applications beyond simple image classification.

Complementary deep neural network technology uses a mixture of deep neural networks (DNNs) and spiking neural networks (SNNs), minimizing power by allocating input data to different neural networks depending on its magnitude.

Just as the human brain consumes a lot of energy when there is a lot to think about and less when there is little to think about, a spiking neural network (SNN) that mimics the brain consumes a lot of power when the input values are large and less power when they are small.
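As a toy illustration of that allocation idea (not KAIST's actual C-DNN logic; the threshold value is invented), inputs could be split by magnitude so that cheap, small values go to the SNN path and large values go to the accurate DNN path:

```python
import numpy as np

def cdnn_split(x, tau=0.5):
    """Send small-magnitude values to the SNN path (cheap when inputs
    are small) and large-magnitude values to the DNN path (accurate
    regardless of magnitude). tau is an invented threshold."""
    to_snn = np.where(np.abs(x) < tau, x, 0.0)
    to_dnn = np.where(np.abs(x) >= tau, x, 0.0)
    return to_snn, to_dnn

snn_part, dnn_part = cdnn_split(np.array([0.1, -0.9, 0.4, 2.0, -0.2]))
# snn_part -> [ 0.1  0.   0.4  0.  -0.2], dnn_part -> [ 0.  -0.9  0.   2.   0. ]
```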

This study demonstrated that ultra-low-power, high-performance on-device AI is possible by applying last year's complementary deep neural network technology to an LLM, and it is significant as the world's first implementation, in the form of an AI semiconductor, of what had previously been limited to theoretical research.

In particular, the research team focused on the practical scalability of neuromorphic computing and studied whether it could successfully perform advanced language-processing tasks such as sentence generation, translation and summarization. The biggest challenge in this process is achieving high accuracy in the neuromorphic network: neuromorphic systems are generally very energy-efficient, but limitations in their learning algorithms tend to make them less accurate on complex tasks, which is a major obstacle for workloads that require high precision and performance, such as large language models.

To solve this problem, the research team developed a unique DNN-to-SNN equivalent conversion technique: precisely controlling the spike-firing threshold to increase the accuracy of converting an existing deep neural network (DNN) structure into a spiking neural network (SNN). Regarding this, the research team stated, "We were able to achieve accuracy at the level of a deep neural network (DNN) while maintaining the energy efficiency of a spiking neural network (SNN)."
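The threshold-control idea builds on a well-known property of rate-coded conversion: an integrate-and-fire neuron's spike rate approximates ReLU(input)/threshold, so choosing thresholds precisely makes the SNN track the original DNN's activations. A generic sketch of that principle, not the team's specific technique:

```python
def if_spike_rate(drive, threshold=1.0, steps=200):
    """Integrate-and-fire neuron: accumulate 'drive' each step, fire
    (and reset by subtraction) when the membrane crosses the threshold.
    The spike rate approximates max(drive, 0) / threshold, a scaled
    ReLU -- which is why threshold choice controls conversion accuracy."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive
        if v >= threshold:
            v -= threshold
            spikes += 1
    return spikes / steps

for d in [0.0, 0.25, 0.5, 0.75]:
    print(f"drive={d:.2f}  relu={max(d, 0.0):.2f}  rate={if_spike_rate(d):.2f}")
# With threshold=1.0 the rate tracks ReLU exactly; a poorly chosen
# threshold would rescale or clip activations and cost accuracy.
```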


The research team said that in the future, they plan to expand the scope of neuromorphic computing research to various application fields beyond language models, while also identifying and addressing issues related to commercialization.

Professor Hoi-Jun Yoo of the Department of Electrical and Electronic Engineering at KAIST said, "This research is significant in that it not only solved the power-consumption problem of existing AI semiconductors, but also successfully ran an actual large language model, GPT-2." He explained, "Neuromorphic computing is a core technology for the ultra-low-power, high-performance on-device AI that is essential in the artificial intelligence era, so we will continue to conduct related research in the future."

Jeon Young-soo, Director of Information and Communication Industry Policy at the Ministry of Science and ICT, said, "This research outcome is significant in that it confirmed in practice that AI semiconductors can develop into neuromorphic computing, beyond NPU and PIM. As the importance of AI semiconductors was emphasized in the discussion, we will actively support the team so that it can continue to produce world-class research results."


 
  • Like
  • Sad
  • Thinking
Reactions: 23 users

Boab

I wish I could paint like Vincent
Bummer! This article does seem to indicate that KAIST have developed a neuromorphic system different to ours.

Amongst other things it states: "To solve this problem, the research team developed a unique DNN-to-SNN equivalent conversion technique..."

They've done well but appear to have a long way to go to catch us.
The research team said that in the future, they plan to expand the scope of neuromorphic computing research to various application fields beyond language models, while also identifying and addressing issues related to commercialization.
 
  • Like
  • Fire
  • Love
Reactions: 23 users