BRN Discussion Ongoing

Can you show me a company that is ahead, in neuromorphic hardware?

I'm not talking about the share price, or how many IP deals we may or may not have signed.

I'm talking about the technology.

So show me a neuromorphic hardware technology that is ahead of us or STFU.
Think the issue is when does the market need it?

Not if it's the market leader in its space.

There's enough evidence to show it's the leader in the non-cloud, far-edge sector among neuromorphic on-chip computing companies.
 
  • Like
Reactions: 2 users
The Nanose patent was assigned a couple of weeks ago, whatever that means, so it could be a long time until it's approved.



 
  • Like
  • Fire
  • Wow
Reactions: 26 users
I've noticed the fervour and demands for validation both here and over on the crapper have kicked up a notch.
Thrashing themselves into a right old lather. Get it now deadbeat? 🤣
Why don't you reveal yourself to us here in the Tardie? 🤣
Afraid if you show yourself the Messiah will smite you? 🤣
Or is it just too cosy here in the shadows, cutting and pasting tidbits for your would-be followers over there in hell? 🤣
Don't worry, snookums, I won't mention you again.
None of these Fear, Uncertainty and Doubt merchants are worth the oxygen or the keystrokes.
That is the troll's game, engagement.
Either for cents per view or the weird little thrill as they rub themselves thinking about how they 'got ya' 🤣

Grow up, FFS. No one is holding a gun to anyone's head here.
Each of us, as adult individuals, has those three choices.
BUY, SELL, HOLD.
Make your choice and reap the rewards/ take the consequences.

But FFS, stop all the constant whining and whinging.
No one cares.
Not Sean or Antonio or PVDM or me. 🤣
 
  • Haha
  • Like
Reactions: 4 users

DK6161

Regular
The focus will now be on this year's CES which will start in the next few days in Vegas.

I just want to warn others not to automatically assume that anything with AI, ML, Edge, etc. would have AKIDA involvement, or to get too excited about a potential increase in our SP.

A few of us have been caught up in the excitement before and are now paying hefty prices for it. I myself am roughly 70% down on my BRN holding.

Historically we have been left disappointed by expected news and announcements - a great example was when Akida 2.0 was officially announced through the ASX and the SP did not move at all.

The fact that we now have to secure further funding through LDA Capital also suggests that imminent sizeable revenue is not expected.
And no, I am not a "Manipulator" and I don't have "Hidden Agendas". I am just a pissed-off shareholder who should've done more research instead of being sold on "Explosive sales", "Imminent revenues" and "AKIDA everywhere".
 
  • Like
  • Fire
  • Love
Reactions: 27 users
Qualcomm is talking a big game for CES.... generative AI with no connection to the cloud. They are either with us, or they've caught up quick? I'm keeping faith in the ecosystem we are developing, but we all have hard earned cash invested and naturally getting nervous as other companies claim to do what Akida can... despite our "3 year lead." Bring on these updates on partnerships prior to CES!
Like I've said, we either have a lead and prominent companies are knocking down our door, or we don't.
Because it's all about having an edge on your competitors. Has Sean come out and stated we're miles in front?
 
  • Fire
Reactions: 1 users

Makeme 2020

Regular
I've noticed the fervour and demands for validation both here and over on the crapper have kicked up a notch. [...]
Not Sean or Antonio or PVDM or me. 🤣
Are you talking to yourself?
 
Last edited:
  • Fire
Reactions: 1 users

Iseki

Regular
"I've noticed the fervour and demands for validation both here and over on the crapper have kicked up a notch."

I think it's fair to ask for validation that we can put Akida into either an Arm chip or a SiFive chip, and that that combination is cheaper than the base Arm or SiFive chip with its respective vector processor.

Otherwise, aren't we in trouble?
 
  • Like
  • Thinking
Reactions: 4 users
Modern neuromorphic processor architectures...PLURAL...Hmmm?????



This Tiny Sensor Could Be in Your Next Headset​

Prophesee
Prophesee Event-Based Metavision GenX320 bare die (image)

Neuromorphic computing company develops event-based vision sensor for edge AI apps.
Spencer Chin | Oct 16, 2023


As edge-based artificial intelligence (AI) applications become more common, there will be a greater need for sensors that can meet the power and environmental needs of edge hardware. Prophesee SA, which supplies advanced neuromorphic vision systems, has introduced an event-based vision sensor for integration into ultra-low-power edge AI vision devices. The GenX320 Metavision sensor, which uses a tiny 3x4mm die, leverages the company’s technology platform into growing intelligent edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, and always-on intelligent IoT devices.

According to Luca Verre, CEO and co-founder of Prophesee, the concept of event-based vision has been researched for years, but developing a viable commercial implementation in a sensor-like device has only happened relatively recently. “Prophesee has used a combination of expertise and innovative developments around neuromorphic computing, VLSI design, AI algorithm development, and CMOS image sensing,” said Verre in an e-mail interview with Design News. “Together, those skills and advancements, along with critical partnerships with companies like Sony, Intel, Bosch, Xiaomi, Qualcomm, 🤔and others 🤔have enabled us to optimize a design for the performance, power, size, and cost requirements of various markets.”

Prophesee’s vision sensor is a 320x320, 6.3μm pixel BSI stacked event-based vision sensor that offers a tiny 1/5-in. optical format. Verre said, “The explicit goal was to improve integrability and usability in embedded at-the-edge vision systems, which in addition to size and power improvements, means the design must address the challenge of event-based vision’s unconventional data format, nonconstant data rates, and non-standard interfaces to make it more usable for a wider range of applications. We have done that with multiple integrated event data pre-processing, filtering, and formatting functions to minimize external processing overhead.”

Verre added, “In addition, MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.

Low-Power Operation

According to Verre, the GenX320 sensor has been optimized for low-power operation, featuring a hierarchy of power modes and application-specific modes of operation. On-chip power management further improves sensor flexibility and integrability. To meet aggressive size and cost requirements, the chip is fabricated using a CMOS stacked process with pixel-level Cu-Cu bonding interconnects achieving a 6.3μm pixel-pitch.
The sensor performs low-latency, microsecond-resolution timestamping of events with flexible data formatting. On-chip intelligent power management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep sleep and standby modes are also featured.

According to Prophesee, the sensor is designed to be easily integrated with standard SoCs with multiple combined event data pre-processing, filtering, and formatting functions to minimize external processing overhead. MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.

Prophesee’s Verre expects the sensor to find applications in AR/VR headsets. “We are solving an important issue in our ability to efficiently (i.e. low power/low heat) support foveated rendering in eye tracking for a more realistic, immersive experience. Meta has discussed publicly the use of event-based vision technology, and we are actively involved with our partner Zinn Labs in this area. XPERI has already developed a driver monitor system (DMS) proof of concept based on our previous generation sensor for gaze monitoring and we are working with them on a next-gen solution using GenX320 for both automotive and other potential uses, including micro expression monitoring. The market for gesture and motion detection is very large, and our partner Ultraleap has demonstrated a working prototype of a touch-free display using our solution.”

The sensor incorporates an on-chip histogram output compatible with multiple AI accelerators. The sensor is also natively compatible with Prophesee Metavision Intelligence, an open-source event-based vision software suite that is used by a community of over 10,000 users.
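To picture what that histogram output might look like on the processing side, here's a minimal sketch (plain NumPy, not the actual Metavision SDK; the event field names, window length and resolution are my own assumptions) of binning a burst of events into a two-channel frame that a conventional AI accelerator could ingest:

```python
import numpy as np

# Hypothetical burst of events over a ~10 ms window, standing in for what an
# event sensor like the GenX320 streams out: x/y position, polarity, timestamp.
rng = np.random.default_rng(0)
n = 1000
events = np.zeros(n, dtype=[("x", np.uint16), ("y", np.uint16),
                            ("p", np.uint8), ("t", np.uint64)])
events["x"] = rng.integers(0, 320, size=n)
events["y"] = rng.integers(0, 320, size=n)
events["p"] = rng.integers(0, 2, size=n)                 # 0 = OFF event, 1 = ON event
events["t"] = np.sort(rng.integers(0, 10_000, size=n))   # timestamps in microseconds

def events_to_histogram(ev, width=320, height=320):
    """Accumulate ON/OFF events into a 2-channel count frame for a frame-based accelerator."""
    hist = np.zeros((2, height, width), dtype=np.uint16)
    np.add.at(hist, (ev["p"], ev["y"], ev["x"]), 1)
    return hist

frame = events_to_histogram(events)
print(frame.shape, int(frame.sum()))  # (2, 320, 320) and the total event count
```

On the GenX320 this kind of accumulation is done on-chip, which is the point: the host accelerator only ever sees a compact, frame-like tensor rather than a raw event stream.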

Prophesee will support the GenX320 with a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip-on-board (COB) GenX320 module, or a compact optical flex module. In addition, Prophesee will offer a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as an STM32 MCU, speeding time-to-market.

Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at Spencer.Chin@informa.com.

Could be interesting at CES

 
  • Like
  • Fire
Reactions: 10 users

Terroni2105

Founding Member
I haven’t seen this posted so apologies if it has and I missed it.

Direct from the Arm website. It is an article by Stephen Ozoigbo, Senior Director, Ecosystem Development, Education and Research, Arm.
Written on 20th December.

It is about Arm staff and ambassadors travelling across Africa, highlighting the range of AI-based developer experiences running on Arm.

“The additional demos were a range of hardware, including the Arduino Pro and BrainChip’s Akida, that highlighted how Arm IP can be implemented across embedded systems that utilize AI workloads. As compute power increases, developers can leverage AI workloads for applications that are targeting the smallest, most power and cost-constrained embedded systems, all built on Arm.”


https://newsroom.arm.com/ai-developer-experiences-africa

Happy New Year Chippers :)
 
  • Like
  • Fire
  • Love
Reactions: 65 users

Tothemoon24

Top 20


The internet has changed every aspect of our lives, from communication and shopping to work. Now, for reasons of latency, privacy, and cost-efficiency, the “internet of things” has been born as the internet has expanded to the network edge.

Now, with artificial intelligence, everything on the internet is easier, more personalized, and more intelligent. However, AI is currently confined to the cloud due to the large servers and high compute capacity it needs. As a result, companies like Hailo are driven by latency, privacy, and cost efficiency to develop technologies that enable AI on the edge.

Undoubtedly, the next big thing is generative AI. Generative AI presents enormous potential across industries. It can be used to streamline work and increase the efficiency of various creators — lawyers, content writers, graphic designers, musicians, and more. It can help discover new therapeutic drugs or aid in medical procedures. Generative AI can improve industrial automation, develop new software code, and enhance transportation security through the automated synthesis of video, audio, imagery, and more.

However, generative AI as it exists today is limited by the technology that enables it. That’s because generative AI happens in the cloud — large data centers of costly, energy-consuming computer processors far removed from actual users. When someone issues a prompt to a generative AI tool like ChatGPT or some new AI-based videoconferencing solution, the request is transmitted via the internet to the cloud, where it’s processed by servers before the results are returned over the network. Data centers are major energy consumers, and as AI becomes more popular, global energy consumption will rapidly increase. This is a growing concern for companies trying to balance the need to offer innovative solutions against the requirement to reduce operating costs and environmental impact.

As companies develop new applications for generative AI and deploy them on different types of devices — video cameras and security systems, industrial and personal robots, laptops and even cars — the cloud is a bottleneck in terms of bandwidth, cost, safety, and connectivity.

And for applications like driver assist, personal computer software, videoconferencing and security, constantly moving data over a network can be a privacy risk.

The solution is to enable these devices to process generative AI at the edge. In fact, edge-based generative AI stands to benefit many emerging applications.

Generative AI on the rise

Consider that in June, Mercedes-Benz said it would introduce ChatGPT to its cars. In a ChatGPT-enhanced Mercedes, for example, a driver could ask the car — hands free — for a dinner recipe based on ingredients they already have at home. That is, if the car is connected to the internet. In a parking garage or remote location, all bets are off.

In the last couple of years, videoconferencing has become second nature to most of us. Already, software companies are integrating forms of AI into videoconferencing solutions. Maybe it’s to optimize audio and video quality on the fly, or to “place” people in the same virtual space. Now, generative AI-powered videoconferences can automatically create meeting minutes or pull in relevant information from company sources in real-time as different topics are discussed.


However, if a smart car, videoconferencing system, or any other edge device can’t reach back to the cloud, then the generative AI experience can’t happen. But what if they didn’t have to? It sounds like a daunting task considering the enormous processing of cloud AI, but it is now becoming possible.

Generative AI at the edge

Already, there are generative AI tools, for example, that can automatically create rich, engaging PowerPoint presentations. But the user needs the system to work from anywhere, even without an internet connection.

Similarly, we’re already seeing a new class of generative AI-based “co-pilot” assistants that will fundamentally change how we interact with our computing devices by automating many routine tasks, like creating reports or visualizing data. Imagine flipping open a laptop, the laptop recognizing you through its camera, then automatically generating a course of action for the day, week or month based on your most used tools, like Outlook, Teams, Slack, Trello, etc. But to maintain data privacy and a good user experience, you must have the option of running generative AI locally.

In addition to meeting the challenges of unreliable connections and data privacy, edge AI can help reduce bandwidth demands and enhance application performance. For instance, if a generative AI application is creating data-rich content, like a virtual conference space, via the cloud, the process could lag depending on available (and costly) bandwidth. And certain types of generative AI applications, like security, robotics, or healthcare, require high-performance, low-latency responses that cloud connections can’t handle.

In video security, the ability to re-identify people as they move among many cameras — some placed where networks can’t reach — requires data models and AI processing in the actual cameras. In this case, generative AI can be applied to automated descriptions of what the cameras see through simple queries like, “Find the 8-year-old child with the red T shirt and baseball cap.”

That’s generative AI at the edge.

Developments in edge AI

Through the adoption of a new class of AI processors and the development of leaner, more efficient, though no-less-powerful generative AI data models, edge devices can be designed to operate intelligently where cloud connectivity is impossible or undesirable.

Of course, cloud processing will remain a critical component of generative AI. For example, training AI models will remain in the cloud. But the act of applying user inputs to those models, called inferencing, can — and in many cases should — happen at the edge.

The industry is already developing leaner, smaller, more efficient AI models that can be loaded onto edge devices. Companies like Hailo manufacture AI processors purpose-designed to perform neural network processing. Such neural-network processors not only handle AI models incredibly rapidly, but they also do so with less power, making them energy efficient and apt to a variety of edge devices, from smartphones to cameras.
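To make “inferencing at the edge” concrete, here is a minimal sketch of running a pre-trained model entirely on-device with ONNX Runtime; the model file name and input shape are placeholders for illustration, not anything Hailo- or Akida-specific:

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Load a (hypothetical) small pre-trained model that ships with the device or app.
session = ort.InferenceSession("edge_model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# Dummy input standing in for a camera frame or audio window captured on-device.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Inference runs locally: no network round-trip, no data leaves the device,
# and latency is bounded by local compute rather than connectivity.
outputs = session.run(None, {input_name: x})
print(outputs[0].shape)
```

Training still happens in the cloud; only the trained artifact is deployed to the device, which is exactly the split the article describes.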

Utilizing generative AI at the edge enables effective load-balancing of growing workloads, allows applications to scale more stably, relieves cloud data centers of costly processing, and helps reduce environmental impact. Generative AI is on the brink of revolutionizing computing once more. In the future, your laptop’s LLM may auto-update the same way your OS does today — and function in much the same way. However, in order to get there, generative AI processing will need to be enabled at the network’s edge. The outcome promises to be greater performance, energy efficiency, security and privacy. All of which leads to AI applications that reshape the world just as significantly as generative AI itself.
 
  • Like
  • Fire
Reactions: 16 users

IloveLamp

Top 20
  • Haha
  • Like
Reactions: 16 users

Kachoo

Regular
Why should BRN have problems with finance if customers sign contracts with BrainChip? If customers sign contracts, they do it because they see benefits, I guess. Our Western capitalist economy works with debt, otherwise it would collapse. So, if everyone worried about the financial situation of other companies, there would be no business at all… just my opinion
When we do multi-year sales tenders there is an audit involved, and our customer examines our financial health to see if we are in a good financial position to provide the services and products we say we will.

It protects them from us going broke and them losing our service and being out money or products. So it's kind of a standard practice.
 
  • Like
  • Love
  • Fire
Reactions: 24 users

IloveLamp

Top 20
My god these threads tonight

 
  • Haha
  • Like
Reactions: 14 users

IloveLamp

Top 20
  • Like
  • Thinking
Reactions: 8 users

manny100

Top 20
The internet has changed every aspect of our lives, from communication and shopping to work. [...]
All of which leads to AI applications that reshape the world just as significantly as generative AI itself.
I agree about connectivity. When you embark on a long regional car trip, the only thing you remember about the drive is the times you lost internet connection. It's damned annoying!!!
 
  • Like
  • Fire
  • Haha
Reactions: 7 users
Interesting that Rob would like a post about Renesas's DRP A.I., which trumpets MACs, is the "long way around" to doing "A.I." tasks, and really shouldn't be called Artificial Intelligence at all anyway.

At least that's my limited understanding of that technology (Hey Diogenese bags it..).

Although I don't think Rob likes posts willy-nilly; many probably have more to do with networking and building relationships than anything else.

Which hopefully leads to something down the track.
 
  • Like
Reactions: 12 users

tjcov87

Member
Everyone talking about "revenue" and their corresponding disappointment with the lack of it clearly doesn't understand cash flow, or that there is a cost associated with generating revenue. Relax, trust your research and due diligence, and most of all understand your investment.
 
  • Like
  • Love
  • Fire
Reactions: 30 users

Damo4

Regular
  • Haha
Reactions: 7 users