BRN Discussion Ongoing

Diogenese

Top 20
A company similar to Prophesee is iniVation with their neuromorphic vision systems.


iniVation partnered with SynSense in 2019 to develop Speck, which is a low-power smart vision sensor for mobile & IoT devices.


Speck™ is a fully event-driven neuromorphic vision SoC. It supports large-scale spiking convolutional neural networks (sCNNs) with a fully asynchronous chip architecture, and is fully configurable with a spiking neuron capacity of 320K. Furthermore, it integrates a state-of-the-art dynamic vision sensor (DVS) that enables a fully event-driven, real-time, highly integrated solution for various dynamic visual scenes. For classical applications, Speck™ can provide intelligence about the scene at only milliwatts, with a response latency of a few milliseconds.
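In practice, "fully event-driven" means compute happens only when a pixel fires. A minimal sketch of the idea (my own toy event format and wiring, not SynSense's actual API):

from dataclasses import dataclass

# Hypothetical DVS event: a pixel fires only when it sees a brightness
# change, so the sensor emits a sparse (x, y, timestamp, polarity) stream
# instead of full frames.
@dataclass
class Event:
    x: int
    y: int
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 = brightness increase, -1 = decrease

def process_stream(events, fanout, threshold=1.0):
    """Toy event-driven layer: work is done only for pixels that fired."""
    membrane = {}   # membrane potential per downstream unit
    spikes = []
    for ev in events:
        # 'fanout' maps a pixel to the units it drives and their weights
        for unit, w in fanout.get((ev.x, ev.y), []):
            v = membrane.get(unit, 0.0) + w * ev.polarity
            if v >= threshold:
                spikes.append((unit, ev.t_us))
                v = 0.0  # reset after firing
            membrane[unit] = v
    return spikes

An idle scene generates no events, so the chip does almost no work, which is where the milliwatt figures come from.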

Prophesee partnered with SynSense in 2021 to develop a one-chip, event-based smart sensing solution for low-power edge AI.

Prophesee partnered with BrainChip in 2022 to optimize computer vision AI performance & efficiency.

Prophesee's CEO has mentioned that BrainChip is a perfect fit for their event-based camera vision sensor.

Qualcomm has recently partnered with Prophesee, which has been working with Snapdragon processors since 2018.

In their recent presentation, Qualcomm mentioned that Prophesee event-based cameras will be launched this year; however, there was no mention of SynSense's Speck.

It's a puzzle, this one. Unless Qualcomm will use Prophesee's Metavision event-based sensor only with their own processor suitable for neuromorphic SNNs, if they have one.

I am intrigued, because the Qualcomm-dominated smartphone market would mean big revenue for BRN if Akida IP is embedded in their chip for Prophesee's event-based camera. It took ARM nearly 10 years from when they started to get into smartphones.
SynSense uses analog SNNs:

WO2023284142A1 SIGNAL PROCESSING METHOD FOR NEURON IN SPIKING NEURAL NETWORK AND METHOD FOR TRAINING SAID NETWORK


A signal processing method for a neuron in a spiking neural network, and a method for training said network. Unlike the single-spike mechanism in common use, this is designed as a multi-spike mechanism. The signal processing method for a neuron comprises: a reception step, in which at least one neuron receives at least one input spike train; an accumulation step, in which a membrane voltage is obtained from a weighted sum of the at least one input spike train; and an activation step, in which, once the membrane voltage exceeds a threshold, the amplitude of the spike fired by the neuron is determined from the ratio of the membrane voltage to the threshold. To solve the problem of training algorithms becoming inefficient and time-consuming as the configuration parameter scale grows, the network training method achieves highly efficient training of a spiking neural network by means of a multi-spike mechanism, a periodic exponential surrogate gradient, and the addition and suppression of neuron activity levels as a loss; the low power consumption of neuromorphic hardware is sustained, and precision and convergence speed are also improved.
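Read literally, the abstract's three steps map onto a few lines of code. A sketch of one accumulation/activation step under my reading of the abstract (not SynSense's actual implementation):

import numpy as np

def multi_spike_step(in_spikes, weights, v_prev, theta=1.0):
    """One neuron update per the abstract's reception/accumulation/activation steps."""
    # Accumulation: membrane voltage from the weighted sum of input spikes.
    v = v_prev + float(np.dot(weights, in_spikes))
    # Activation: once v exceeds the threshold, the fired amplitude is set
    # by the ratio v/theta -- the "multi-spike" part, since one step can
    # emit several spikes' worth of output instead of a single unit spike.
    if v >= theta:
        amplitude = float(np.floor(v / theta))
        v -= amplitude * theta   # subtract the emitted charge
    else:
        amplitude = 0.0
    return amplitude, v

The claimed training gains then come from pairing this with the periodic exponential surrogate gradient and the activity-level loss terms mentioned above.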

Luca Verre has stated that Prophesee do not have Akida in a product (yet!).

The Qualcomm Snapdragon 8 Gen 2 uses their in-house Hexagon AI processor.
 
  • Like
  • Fire
Reactions: 16 users

Sirod69

bavarian girl ;-)
  • Like
Reactions: 1 users
Well if it's Aussie time that's in a couple hours for me😅
@Sirod69
 
  • Haha
  • Like
Reactions: 3 users

Tothemoon24

Top 20
Sounds interesting.
Have we uncovered any links here in the past?

 
  • Like
Reactions: 1 users

Sirod69

bavarian girl ;-)
  • Like
  • Love
Reactions: 2 users

Labsy

Regular
  • Like
  • Fire
Reactions: 2 users
  • Haha
  • Love
Reactions: 2 users

Dhm

Regular
Hi @Steve10

A few of us have latched onto that idea with great excitement, and I hope Qualcomm does look to include BrainChip in the future for its science-fiction qualities. However, it was raised by @Bravo and debunked by @Diogenese as NOT being the case currently.

Qualcomm were working with iCatch and another company whose name escapes me to provide its AI needs.

EDIT: the links below discuss Qualcomm's tech and, although they look the same, have different content:





It is, however, highly likely BrainChip could improve performance, but given there has been no announcement re partnerships/agreements, I don't think it's the case at the moment!


On the flip side, if another phone company wants to match or beat Qualcomm's current technology, then a Prophesee/BrainChip event camera in their phone would be a good fix, so fingers crossed Apple, Pixel, Nokia etc. are developing that.

:)
Yes, we have tossed this around over the last month or so. @chapman89 published a chat he had with Luca Verre, who said this:
[screenshot of the chat with Luca Verre]



I have no doubt we will be more to Prophesee than our current status of 'technology demonstrator', but for the time being that is what we are.
 
  • Like
  • Fire
Reactions: 18 users

Diogenese

Top 20
OpenAI produced its own chip technology for ChatGPT based on the RISC-V open-source architecture, I believe. The main keywords they use to describe it are similar to Akida. You can speak to the bot in a certain way and it will disclose some information about its creation.

Needless to say, Akida could be used in conjunction with its in-house design for GPT-4, but I'm confident it's not part of previous versions.

Happy for others to chip in here.

In a previous post I mentioned I found it interesting that Intel ditched a billion dollars of investment into RISC-V at the same time it incorporated BrainChip into its programs, and Intel diverted the billion dollars of funding to this program.

I'd generally say that it's a very good sign, but RISC-V is competition to x86, which is licensed by Intel. So it's probable Intel may have other forces twisting its arm to switch.
Interesting thoughts on Intel and RISC-V. Maybe Intel are applying the "if you can't beat 'em, join 'em" principle.

Do you have a link for the OpenAI/RISC-V SoC? I couldn't find any patent applications, but there is an 18-month delay before patent applications are published.

Ilya Sutskever, OpenAI's Chief Scientist, is named as inventor in 19 Google patents pre-2017.

https://worldwide.espacenet.com/patent/search/family/057135560/publication/US2018032863A1?q=in = "Sutskever" AND nftxt = "google"

When you look at OpenAI's mission statement about beneficial AI,

https://openai.com/about/

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

I wonder if he was a tad disillusioned at Google.
 
  • Like
  • Thinking
Reactions: 6 users

Diogenese

Top 20
Today I did a Google search that Stuart888 has put up a few times, looking at the last week only.


If you go to tools and choose only the last week, it will give you a link to a NASA document dated 13th March 2023.

There are numerous scopes in here that relate to Akida such as:

Neuromorphic Software for Cognition and Learning for Space Missions

Extreme Radiation Hard Neuromorphic Hardware

Radiation Tolerant Neuromorphic Learning Hardware


Akida is mentioned in the document.

Someone else might be able to shed more light on this document as it has many pages.

Apologies if already posted, I can't keep up with all the posts and gifs sometimes!
I knew those NASA rockets were fast, but 13 March???
 
  • Haha
  • Fire
  • Like
Reactions: 8 users

BigDonger101

Founding Member
I think there would be many of you here that have followed BrainChip extremely closely for the past 3 or 4 years, and maybe longer.

If you're like me - you don't even look at the cash in your portfolio.

For full transparency, my average is 33 cents. I'm that excited for the company that I do not feel the need to do as much research as I used to, and can finally start focusing on different sectors without any worry.

I went through the first ATH to 97 cents. I didn't sell.

I went through the next ATH to 2.34. I didn't sell.

I will go through the next ATH & I won't sell. But luckily on this next run, BRN will solidify themselves as the main players. Remember Novonix? Yeh something like that :)

I'm ambivalent regarding dealings with China. But it is a "kill or be killed" game. I'd rather we be the predator than prey.

Good luck all! Patience is always rewarded, even when it feels like it's not! That's all part of the game :)
 
  • Like
  • Love
  • Fire
Reactions: 59 users

Crestman

Regular
I knew those NASA rockets were fast, but 13 March???
My bad Diogenese, it is actually the completion date:

Completed Proposal Package Due Date and Time:

March 13, 2023 - 5:00 p.m. ET

I have put the image below linked to the document so it will download.

[image linked to the NASA proposal document]
 
  • Like
  • Love
  • Fire
Reactions: 11 users

Tothemoon24

Top 20
Tata are one massive company.

It will be Tata shorts once we land this big boy

 
  • Like
  • Fire
  • Love
Reactions: 25 users

Jasonk

Regular
Interesting thoughts on Intel and RISC-V. Maybe Intel are applying the "if you can't beat 'em, join 'em" principle.

Do you have a link for the OpenAI/RISC-V SoC? I couldn't find any patent applications, but there is an 18-month delay before patent applications are published.

Ilya Sutskever, OpenAI's Chief Scientist, is named as inventor in 19 Google patents pre-2017.

https://worldwide.espacenet.com/patent/search/family/057135560/publication/US2018032863A1?q=in = "Sutskever" AND nftxt = "google"

When you look at OpenAI's mission statement about beneficial AI,

https://openai.com/about/

OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.

We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.

I wonder if he was a tad disillusioned at Google.
Link to an earlier post regarding RISC-V

Hope the link works. I was only reading about Intel and RISC-V, and out of interest I asked ChatGPT about its hardware and partners and was given the attached response.
 
Last edited:
  • Like
  • Fire
Reactions: 4 users
Seems like OpenAI has changed tack for ChatGPT-4.
I know they talk of their partnership with Cerebras ... BUT

Is it just me, or does anybody else find themselves saying, "hey, that's what Akida can do/make better" ... :sneaky:

It's the newest version of ChatGPT (4), i.e.:

  • SPARSITY = less computational power consumption (see the sketch after this list)
  • MULTIMODAL LANGUAGE MODEL = RT's continual emphasis on multi-modalities
  • ADVANTAGE OF FASTER CHIPS OR HARDWARE = have they found a better way to reduce the computational power issues?
  • SELF-HEALING SENSORS = BrainChip's catchcry ... making sensors smart
  • ARTIFICIAL INTELLIGENCE ON EDGE DEVICES = eliminating the need for large cloud servers
  • RESOURCE-CONSTRAINED ENVIRONMENTS = low power consumption, e.g. 6 months on a "AAA" battery
  • BIOLOGICAL BRAINS ABLE TO LEARN = one-shot learning
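On the sparsity point: the saving is simply that multiplications by zero can be skipped. A toy numpy illustration (my own, not anything from OpenAI or BrainChip):

import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024))   # dense weight matrix
x = rng.standard_normal(1024)
x[rng.random(1024) < 0.9] = 0.0         # 90% of activations are zero

active = np.nonzero(x)[0]
dense_macs = w.shape[0] * w.shape[1]    # every weight touched
sparse_macs = w.shape[0] * active.size  # only columns with non-zero input

y = w[:, active] @ x[active]            # same answer as w @ x, ~10% of the MACs
assert np.allclose(y, w @ x)
print(f"dense={dense_macs:,} MACs, sparse={sparse_macs:,} MACs")

Event-based hardware takes this to its logical end: nothing fires, nothing is computed.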

Check out this video if interested




Just wondering whether Elon has taken onboard my numerous emails to him ...................:unsure:

AKIDA (Giveit Patience Time) BALLISTA

Recent Forbes article on ChatGPT.

There is some commentary with Rain AI but just gotta look at the behind the scenes issues, that users don't think about, like power etc.

Kinda where neuromorphic can assist, obviously, as Rain says, but as usual journos forget to actually do their research ... Akida anyone?


ChatGPT Burns Millions Every Day. Can Computer Scientists Make AI One Million Times More Efficient?
John Koetsier
Senior Contributor
John Koetsier is a journalist, analyst, author, and speaker.

Running ChatGPT costs millions of dollars a day, which is why OpenAI, the company behind the viral natural-language-processing artificial intelligence, has started ChatGPT Plus, a $20/month subscription plan. But our brains are a million times more efficient than the GPUs, CPUs, and memory that make up ChatGPT’s cloud hardware. And neuromorphic computing researchers are working hard to make the miracles that big server farms in the clouds can do today much simpler and cheaper, bringing them down to the small devices in our hands, our homes, our hospitals, and our workplaces.

One of the keys: modeling computing hardware after the computing wetware in human brains.

Including — surprisingly — modeling a characteristic about our own wetware that we really don’t like: death.

“We have to give up immortality,” the CEO of Rain AI, Gordon Wilson, told me in a recent TechFirst podcast. “We have to give up the idea that, you know, we can save software, we can save the memory of the system after the hardware dies.”

Wilson is quoting Geoff Hinton, a cognitive psychologist and computer scientist, author or co-author of over 200 peer-reviewed publications, current Google employee working on Google Brain, and one of the “godfathers” of deep learning. At a recent NeurIPS machine learning conference, he talked about the need for a different kind of hardware substrate to form the foundation of AI that is both smarter and more efficient. It’s analog and neuromorphic — built with artificial neurons in a very human style — and it’s co-designed with software to form a tight blend of hardware and software that is massively more efficient than current AI hardware.

Achieving this is not just a nice-to-have, or a vague theoretical dream.

Building a next-generation foundation for artificial intelligence is literally a multi-billion-dollar concern in the coming age of generative AI and search. One reason is that when training large language models (LLM) in the real world, there are two sets of costs to consider.

Training a large language model like that used by ChatGPT is expensive — likely in the tens of millions of dollars — but running it is the true expense. Running the model, responding to people’s questions and queries, uses what AI experts call “inference.”

That’s precisely what runs ChatGPT compute costs into the millions regularly. But it will cost Microsoft’s AI-enhanced Bing much more.

And the costs for Google to respond to the competitive threat and duplicate this capability could be literally astronomical.

“Inference costs far exceed training costs when deploying a model at any reasonable scale,” say Dylan Patel and Afzal Ahmad in SemiAnalysis. “In fact, the costs to inference ChatGPT exceed the training costs on a weekly basis. If ChatGPT-like LLMs are deployed into search, that represents a direct transfer of $30 billion of Google’s profit into the hands of the picks and shovels of the computing industry.”

If you run the numbers like they have, the implications are staggering.

“Deploying current ChatGPT into every search done by Google would require 512,820 A100 HGX servers with a total of 4,102,568 A100 GPUs,” they write. “The total cost of these servers and networking exceeds $100 billion of Capex alone, of which Nvidia would receive a large portion.”
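(Aside: the quoted figures hang together arithmetically, since an A100 HGX server carries 8 GPUs. A quick sanity check of SemiAnalysis's numbers, with my own assumed daily spend for the second line:)

# Back-of-envelope check of the figures quoted above (their numbers, not mine).
servers = 512_820
gpus_per_hgx_server = 8
print(servers * gpus_per_hgx_server)  # 4,102,560 -- within one server's worth
                                      # of the quoted 4,102,568
# "about a penny an answer" plus "millions of dollars a day" implies hundreds
# of millions of answers per day, e.g. at an assumed (illustrative) $3M/day:
print(3_000_000 / 0.01)               # 300,000,000 answers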

Assuming that’s not going to happen (likely a good assumption), Google has to find another way to approach similar capability. In fact, Microsoft, which has only released its new ChatGPT-enhanced Bing in very limited availability for very good reasons probably including hardware and cost, needs another way.

Perhaps that other way is analogous to something we already have a lot of familiarity with.

According to Rain AI’s Wilson, we have to learn from the most efficient computing platform we currently know of: the human brain. Our brain is “a million times” more efficient than the AI technology that ChatGPT and large language models use, Wilson says. And it happens to come in a very flexible, convenient, and portable package.

“I always like to talk about scale and efficiency, right? The brain has achieved both,” Wilson says. “Typically, when we’re looking at compute platforms, we have to choose.”

That means you can get the creativity that is obvious in ChatGPT or Stable Diffusion, which relies on data center compute to build AI-generated answers or art (trained, yes, on copyrighted images), or you can get something small and efficient enough to deploy and run on a mobile phone, but doesn’t have much intelligence.

That, Wilson says, is a trade-off that we don’t want to keep having to make.

Which is why, he says, an artificial brain built with memristors that can “ultimately enable 100 billion-parameter models in a chip the size of a thumbnail,” is critical.

For reference, ChatGPT’s large language model is built on 175 billion parameters, and it’s one of the largest and most powerful yet built. ChatGPT 4, which rumors say is as big a leap from ChatGPT 3 as the third version was from its predecessors, will likely be much larger. But even the current version used 10,000 Nvidia GPUs just for training, with likely more to support actual queries, and costs about a penny an answer.

Running something of roughly similar scale on your finger is going to be multiple orders of magnitude cheaper.

And if we can do that, it unlocks much smarter machines that generate that intelligence in much more local ways.

“How can we make training so cheap and so efficient that you can push that all the way to the edge?” Wilson asks. “Because if you can do that, then I think that’s what really encapsulates an artificial brain. It’s a device. It’s a piece of hardware and software that can exist, untethered, perhaps in a cell phone, or AirPods, or a robot, or a drone. And it importantly has the ability to learn on the fly. To adapt to a changing environment or a changing self.”

That’s a critical evolution in the development of artificial intelligence. Doing so enables smarts in machines we own and not just rent, which means intelligence that is not dependent on full-time access to the cloud. Also: intelligence that doesn’t upload everything known about us to systems owned by corporations we end up having no choice but to trust.

It also, potentially, enables machines that differentiate. Learn. Adapt. Maybe even grow.

My car should know me and my area better than a distant colleague’s car. Your personal robot should know you and your routines, your likes and dislikes, better than mine. And those likes and dislikes, with your personal data, should stay local on that local machine.

There’s a lot more development, however, to be done on analog systems and neuromorphic computing: at least several years. Rain has been working on the problem for six years, and Wilson thinks shipping product in quantity — 10,000 units for OpenAI, 100,000 units for Google — is at least “a few years away.” Other companies like chip giant Intel are also working on neuromorphic computing with the Loihi chip, but we haven’t seen that come to the market in scale yet.

If and when we do, however, the brain-emulation approach shows great promise. And the potential for great disruption.

“A brain is a platform that supports intelligence,” says Wilson. “And a brain, a biological brain, is hardware and software and algorithms all blended together in a very deeply intertwined way. An artificial brain, like what we’re building at Rain, is also hardware plus algorithms plus software, co-designed, intertwined, in a way that is really ... inseparable.”
 
  • Like
  • Love
Reactions: 14 users

Diogenese

Top 20
I was reading the other day about Intel terminating its Pathfinder RISC-V development kit program. It was planning on investing a billion dollars as of a month ago.

Is the chain of events attached below related to BRN joining Intel Foundry Services, or a coincidence?

Interestingly enough, I asked ChatGPT about this RISC-V.

Just another side note: it's been clear for some time via LinkedIn that Intel verification engineers were clearly interested in BRN... lo and behold, overnight Intel seems to have pivoted technologies.
Great research. Sadly, it looks like Intel has pulled the plug on Pathfinder:

https://www.theregister.com/2023/01/30/intel_ris_v_pathfinder_discontinued/


After less than half a year, Intel quietly kills RISC-V dev environment

Did Pathfinder get lost in sea of red ink? Or is Chipzilla becoming RISC averse?

Simon Sharwood
Mon 30 Jan 2023 // 06:02 UTC

Intel has shut down its RISC-V Pathfinder – an initiative it launched less than six months ago to encourage use of the open source RISC-V CPU designs.
Pathfinder was launched in August 2022. A joint press release from the 30th of that month includes a canned quote from Vijay Krishnan, general manager for RISC-V Ventures at Intel Corporation, who at the time stated: "With Intel Pathfinder, users will be able to test drive pre-silicon concepts on Intel FPGAs or virtual simulators."
"There should be tremendous value for pre-silicon architects, software developers and product managers looking to prove out use cases upfront in the product development lifecycle," he added.
Intel billed the service as "scalable from individual users in academia and research, all the way to large-scale commercial projects."
On December 1, 2022, Intel emitted an announcement of impending enhancements to the Pathfinder.
That document again featured Krishnan, this time quoted as saying "Maintaining a torrid pace of execution and fostering ecosystem collaboration are key imperatives for Intel Pathfinder for RISC-V." Next came a quote from Sundari Mitra, chief incubation officer, corporate vice president, and general manager at Intel's Incubation & Disruptive Innovation (IDI) Group: "We are excited to see Intel Pathfinder for RISC-V grow rapidly while continuing to adapt to market needs."
But in recent days a visit to pathfinder.intel.com produces only the following announcement:
We regret to inform you that Intel is discontinuing the Intel® Pathfinder for RISC-V program effective immediately.
Since Intel will not be providing any additional releases or bug fixes, we encourage you to promptly transition to third-party RISC-V* software tools that best meet your development needs.

So the question is whether OpenAI is going it alone with RISC-V now that Pathfinder is gone.
 
  • Like
  • Thinking
Reactions: 7 users

Quiltman

Regular
  • Like
Reactions: 12 users

Diogenese

Top 20

The last time someone said that, it was a decoy duck.

All that waddles is not Akida.

Just remember Ella ...

TDK use an analog neuromorphic element:
US2020210818A1 ARRAY DEVICE INCLUDING NEUROMORPHIC ELEMENT AND NEURAL NETWORK SYSTEM

[0327] The neuromorphic element 711 is controlled by a control signal which is input from a control unit (not shown) assigning weights, and the values of the weights change with change in characteristics (for example, conductance) of the neuromorphic element 711. The neuromorphic element 711 multiplies the weights (the values thereof) corresponding to the characteristics of the neuromorphic element 711 by an input signal and outputs a signal which is the result of the multiplication.
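Paragraph [0327] is describing an analog multiply: the weight is stored as the element's conductance, and Ohm's law does the multiplication. A toy model, with made-up numbers:

def neuromorphic_element(v_in, conductance):
    """Toy TDK-style element: output current I = G * V, i.e. the stored
    weight (conductance G) multiplied by the input signal by device
    physics rather than digital logic."""
    return conductance * v_in

g = 2.5e-6                            # siemens; "written" by the control unit
print(neuromorphic_element(0.3, g))   # 7.5e-07 A

That analog multiply is the tell: it is not how the digital, event-based Akida works, hence "all that waddles is not Akida".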
 

  • Like
  • Fire
Reactions: 10 users

wilzy123

Founding Member
The last time someone said that, it was a decoy duck.

All that waddles is not Akida.

Just remember Ella ...

TDK use an analog neuromorphic element:
US2020210818A1 ARRAY DEVICE INCLUDING NEUROMORPHIC ELEMENT AND NEURAL NETWORK SYSTEM

[0327] The neuromorphic element 711 is controlled by a control signal which is input from a control unit (not shown) assigning weights, and the values of the weights change with change in characteristics (for example, conductance) of the neuromorphic element 711. The neuromorphic element 711 multiplies the weights (the values thereof) corresponding to the characteristics of the neuromorphic element 711 by an input signal and outputs a signal which is the result of the multiplication.

Misleading junk decoy duck posts? Never.

 

hamilton66

Regular
It says bucket loads of profit is coming to those who have patience.
Rise, just an observation. U and I have been in a few identical shares over the yrs. Always saw u as a trader. I've never seen u so upbeat on any share u've ever owned. For what it's worth, I think ur judgement is spot on. I'm as frustrated as f34k. That said, hugely confident.
GLTA
 
  • Like
  • Love
  • Fire
Reactions: 15 users