BRN Discussion Ongoing

ODAAT

one day at a time
Hi Gies,

That's not correct.

Peter is named as the inventor on most of the patents, but the company is the assignee.
Hi Dio,

There was some discussion earlier this month (posts #66,770 and #66,790) where it was said that PVDM owns the foundation patent (circa 2008), and this ownership is good until 2028. It was said in the posts that this foundation patent is owned by Peter, not the company. I don't know if this is true myself, but I remember thinking this is sensational, as it's another reason why our IP and our lead over our competitors are maintained.
 
Reactions: 6 users

Frangipani

Top 20
Meet global futurist Dr Bruce McCabe, who is extremely passionate about neuromorphic computing! 🚀



I listened to his podcast with Dr Alexandre Marcireau from WSU’s ICNS (International Centre for Neuromorphic Systems) a couple of weeks ago, but can’t seem to find it posted here, yet, via the search function. No mention of Brainchip, but worthwhile listening to nevertheless.

A link to the podcast transcript is also provided below.

One thing that confuses me, though, in the article & podcast below as well as in other publications, is how the term “analog” is being used here (eg “The Future of AI is analog“).

Is it correct to say in this context it doesn’t refer to the analog vs digital logic circuitry design specifically (such as Akida being fully digital vs. eg Mythic’s analog compute architecture) but rather to the general concept of the extremely power-efficient way our brain processes information (neurons working asynchronously and in parallel etc) which differs fundamentally from the way a digital computer operates on data expressed in binary code?

So is analog here essentially just being used as a synonym for neuromorphic?




NEUROMORPHIC COMPUTING AND THE FUTURE OF A.I.​


BIO-INSPIRED COMPUTER CHIPS WITH DR ALEX MARCIREAU​


(article image: neuromorphic_camera.png)



I’m calling neuromorphic computing the most important computer engineering research in the world, now and through the next 20 years. That’s right, more important than quantum computing (you heard it here first!). Why? Because everything we want to do in the future of AI, everywhere we want to go long-term, is predicated on transitioning to more fit-for-purpose computer architectures, and the most fit-for-purpose architectures are most certainly those inspired by nature.

MEETING DR ALEXANDRE MARCIREAU

I interviewed the irrepressible Dr Alexandre Marcireau at the International Centre for Neuromorphic Systems (ICNS) at University of Western Sydney. Alex is softly-spoken, laughs easily, and as you would expect is extremely passionate about his field. He generously took me on a tour of his lab to check out his prototypes, including the neuromorphic cameras (see right-hand image above, and the cover image for this article) that are currently circulating in the International Space Station. Afterwards he sat down to answer questions and share his views on the future. He’s very much tuned-in to both the immediate applications and the long-term planetary-scale benefits his technology has to offer. I know you’ll enjoy listening to him!




CHECK OUT THE PODCAST TRANSCRIPT

THE FUTURE OF AI IS ANALOG (YES, REALLY!)​

Alex’s long-term dream is analog AI computing from end to end. Biology is messy, every organism is different, but it WORKS SO WELL! If we want to truly emulate the efficiencies of nature in computers that sense and learn then we MUST go analog not digital. We are slicing off and solving one sensory processing function at a time — a bio-inspired camera here, a bio-inspired microphone there — but the long-term dream is end to end. Of course we’ll still be using fast number-crunching and general purpose computing chips for everything else, so the future of computing more broadly will always be a mix of digital and analog, classical and neuromorphic.

And when it comes to the eye-watering energy demands of AI, the comparative advantages with respect to classical chips for AI-related functions are immense. If things keep going as they are, the ruinous energy demands of crypto-mining will be as nothing compared to the future energy demands from exponential AI. Conclusion: long-term we will HAVE TO transition AI workloads off traditional architectures.

ENDLESS REAL-WORLD APPLICATIONS​

I loved Alex’s discussion of the different opportunities at the ‘edge’ and the ‘centre’ of AI, and the lab’s real-world applications in deploying high-efficiency low-energy event-based cameras to detect lightning strikes and satellites from the International Space Station, and to track koalas and insects in forests down here on earth. We also had fun talking about using AI to decipher animal languages, such as they are doing at projectceti.org (decoding the communications of sperm whales), and neuromorphic applications to make drones vastly more capable and to enable the bi-directional brain interfaces and neuro-prosthetics I’ve been looking at lately in the future of medicine. So many good things to be transformed.

BIOLOGY + COMPUTING = MAGIC​

The field crosses many boundaries. Biologists, mathematicians, hardware engineers, physicists, programmers are all working together to create a new industry – what could be more exciting than that? And what about the big gaps in our understanding of various neural systems in nature, and how every improvement in our biological knowledge (such as those auditory, visual, olfactory and learning connectomes that we keep extending for fruit flies and mice) yields new opportunities? I loved Alex’s honesty when he said that the computer scientists were benefiting immensely from the biologists, but perhaps not yet giving back nearly as much. When I see all the ways AI is being used to unravel the biology of animals and plants, I don’t think it will be one-way traffic for long!

HARDWARE PLUS WETWARE?​

As a side note, I enjoyed hearing Alex’s comments about the use of living cells, aka ‘wetware.’ I can’t remember whether this made it onto the recorded interview or not, but when I asked about growing live neurons as a way to resolve some of the challenges of constructing massively parallel data connectors between all those pixels and all those ‘transistors,’ Alex confirmed that this is something scientists have indeed tried, although their experiments have been hampered because the computer quickly dies … literally! When I link these experiments to the work I’m seeing with brain organoids in regenerative medicine, it really opens up my thinking about the long-term (>30 years?) possibilities. Fascinating!

HUGE OPPORTUNITY SPACE​

A big takeaway from this interview is how much headroom there is. It’s like looking at the nascent computer chip industry circa 1950. We’ve only started knocking off the little opportunities. As fast as we explore, we identify new ones. The opportunity space is HUGE. If you are investing in computer engineering, or studying it, or building AI systems (who isn’t?) or you happen to manufacture computer chips, take note!

The software side is a particularly innovative space. How best to receive all that parallel data? How best to process it? How best to navigate the analog/digital interfaces? How can we take full advantage of super-fast and super-local AI decision-making ‘at the edge’? With so many different possibilities and no baked-in standards (because, hey, it’s still way too early for those) this field is a boon for creative and talented programmers.

FUNDAMENTAL TO THE FUTURE OF AI​

When I say more important than quantum, I mean it. Don’t get me wrong, quantum is big. It matters. It tackles big problem spaces, especially in molecular, atomic and sub-atomic simulations that will give us access to new drugs, enzymes and materials, and in complex system optimisation problems. But neuromorphic engineering has the potential to boost the capabilities of EVERY aspect of the biggest technological force of change in our time, artificial intelligence, and further, transitioning to bio-inspired neuromorphic hardware is fundamental to the long-term future of AI if we want it to stay on its current exponential adoption curve without smashing into an energy ceiling.
 
Reactions: 14 users

Diogenese

Top 20
Hi Dio,

There was some discussion earlier this month (posts #66,770 and #66,790) where it was said that PVDM owns the foundation patent (circa 2008), and this ownership is good until 2028. It was said in the posts that this foundation patent is owned by Peter, not the company. I don't know if this is true myself, but I remember thinking this is sensational, as it's another reason why our IP and our lead over our competitors are maintained.
Hi Odatt,

That was true until 2015 when it was assigned to the company:

https://worldwide.espacenet.com/patent/search/family/042038652/publication/US8250011B2?q=us8250011

(screenshot of the espacenet record)



US8250011B2 Autonomous learning dynamic artificial neural computing device and brain inspired system
 
Reactions: 25 users

Perhaps

Regular
Hi Odatt,

That was true until 2015 when it was assigned to the company:

https://worldwide.espacenet.com/patent/search/family/042038652/publication/US8250011B2?q=us8250011

(screenshot of the espacenet record)


US8250011B2 Autonomous learning dynamic artificial neural computing device and brain inspired system
Not to forget: US patents only apply in the US. There still remains a lot to do with all those pending patents to get real IP protection, especially for the UK, Korea, Taiwan and Europe. Chinese patents will not work anyway.
The whole patent situation forces Brainchip into the defensive tactics it uses.
The existing patent portfolio still needs further updates, though at a low level for now. Not Brainchip's fault, just bureaucracy.
 
Reactions: 7 users

equanimous

Norse clairvoyant shapeshifter goddess
We are severely underrating BRN's partnership with TCS.

(attached screenshots)
 
Reactions: 51 users




equanimous

Norse clairvoyant shapeshifter goddess
Interesting going back through some filed patents of other companies with SNN

Spiking neural network with reduced memory access and reduced in-network bandwidth consumption​

Mar 7, 2016 - Samsung Electronics
A spiking neural network having a plurality of layers partitioned into a plurality of frustums using a first partitioning may be implemented, where each frustum includes one tile of each partitioned layer of the spiking neural network. A first tile of a first layer of the spiking neural network may be read. Using a processor, a first tile of a second layer of the spiking neural network may be generated using the first tile of the first layer while storing intermediate data within an internal memory of the processor. The first tile of the first layer and the first tile of the second layer belong to a same frustum.

SPIKING NEURAL NETWORK SYSTEM, LEARNING PROCESSING DEVICE, LEARNING METHOD, AND RECORDING MEDIUM​

May 18, 2020 - NEC CORPORATION
A spiking neural network system includes: a time-based spiking neural network; and a learning processing unit that causes learning of the spiking neural network to be performed by supervised learning using a cost function, the cost function using a regularization term relating to a firing time of a neuron in the spiking neural network.

Odor discrimination using binary spiking neural network


Patent number: H2215
Abstract: An odor discrimination method and device for an electronic nose system including olfactory pattern classification based on a binary spiking neural network with the capability to handle many sensor inputs in a noise environment while recognizing a large number of potential odors. The spiking neural networks process a large number of inputs arriving from a chemical sensor array and implemented with efficient use of chip surface area.
Type: Grant
Filed: March 29, 2004
Date of Patent: April 1, 2008
Assignee: The United States of America as represented by the Secretary of the Air Force
Inventors:
Jacob Allen, Robert L. Ewing, Hoda S. Abdel-Aty-Zohdy

Continuous time spiking neural network event-based simulation that schedules co-pending events using an indexable list of nodes

Patent number: 9015096
Abstract: Certain aspects of the present disclosure provide methods and apparatus for a continuous-time neural network event-based simulation that includes a multi-dimensional multi-schedule architecture with ordered and unordered schedules and accelerators to provide for faster event sorting; and a formulation of modeling event operations as anticipating (the future) and advancing (update/jump ahead/catch up) rules or methods to provide a continuous-time neural network model. In this manner, the advantages include faster simulation of spiking neural networks (order(s) of magnitude); and a method for describing and modeling continuous time neurons, synapses, and general neural network behaviors.
Type: Grant
Filed: May 30, 2012
Date of Patent: April 21, 2015
Assignee: QUALCOMM Incorporated
Inventor: Jason Frank Hunzinger
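The "anticipating and advancing" formulation in the Qualcomm abstract is, at its core, event-driven simulation over a sorted schedule: jump straight to the next anticipated spike instead of stepping time uniformly. Here is a toy Python sketch of that general idea; the function, its parameters, and the network format are my own illustrative construction, not the patented method:

```python
# Toy event-driven SNN loop: keep a sorted schedule of anticipated spike
# events and jump to the next one, advancing only the affected neuron.
# Illustrates the general idea only, not Qualcomm's patented method.
import heapq

def simulate(initial_events, connections, delay=1.0, horizon=10.0):
    # initial_events: list of (time, neuron); connections: neuron -> targets
    schedule = list(initial_events)
    heapq.heapify(schedule)
    fired = []
    while schedule:
        t, n = heapq.heappop(schedule)       # next anticipated event
        if t > horizon:
            break                            # stop at the time horizon
        fired.append((t, n))                 # "advance" neuron n to time t
        for target in connections.get(n, []):
            heapq.heappush(schedule, (t + delay, target))  # "anticipate"
    return fired

# A two-neuron chain: neuron 0 spikes at t=0 and drives neuron 1.
print(simulate([(0.0, 0)], {0: [1]}))  # [(0.0, 0), (1.0, 1)]
```

Note the simulator does no work at all between events, which is exactly why event-driven schemes can be orders of magnitude faster than fixed-timestep simulation for sparse spiking activity.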

SPIKING NEURAL NETWORK DEVICE AND LEARNING METHOD OF SPIKING NEURAL NETWORK DEVICE

Publication number: 20210056383
Abstract: A spiking neural network device according to an embodiment includes a synaptic element, a neuron circuit, a synaptic potentiator, and a synaptic depressor. The synaptic element has a variable weight. The neuron circuit inputs a spike voltage having a magnitude adjusted in accordance with the weight of the synaptic element via the synaptic element, and fires when a predetermined condition is satisfied. The synaptic potentiator performs a potentiating operation for potentiating the weight of the synaptic element depending on input timing of the spike voltage and firing timing of the neuron circuit. The synaptic depressor performs a depression operation for depressing the weight of the synaptic element in accordance with a schedule independent from the input timing of the spike voltage and the firing timing of the neuron circuit.
Type: Application
Filed: February 27, 2020
Publication date: February 25, 2021
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors:
Yoshifumi NISHI, Kumiko NOMURA, Radu BERDAN, Takao MARUKAME
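For the curious, the two mechanisms in the Toshiba abstract — timing-driven potentiation plus schedule-driven depression — can be caricatured in a few lines of Python. Everything here (class name, window, step sizes) is my own illustration, not taken from the patent:

```python
# Hypothetical sketch of the abstract's two mechanisms: potentiation that
# depends on spike/fire timing, and depression on an independent schedule.
# All names, constants, and rules are illustrative, not from the patent.

class Synapse:
    def __init__(self, weight=0.5):
        self.weight = weight

    def potentiate(self, spike_time, fire_time, window=10.0, step=0.05):
        # Potentiate only when the input spike precedes firing
        # within a coincidence window (a crude STDP-like rule).
        if 0.0 <= fire_time - spike_time <= window:
            self.weight = min(1.0, self.weight + step)

    def depress(self, step=0.01):
        # Depression runs on a schedule, independent of spike
        # and firing timing, as the abstract describes.
        self.weight = max(0.0, self.weight - step)


syn = Synapse()
syn.potentiate(spike_time=2.0, fire_time=7.0)  # within window: weight rises
syn.depress()                                  # scheduled decay: weight falls
print(round(syn.weight, 2))  # 0.54
```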

METHOD AND SYSTEM FOR OPTIMIZED SPIKE ENCODING FOR SPIKING NEURAL NETWORKS

Publication number: 20220222522
Abstract: This disclosure generally relates to optimized spike encoding for spiking neural networks (SNNs). The SNN processes data in spike train format, whereas the real world measurements/input signals are in analog (continuous or discrete) signal format; therefore, it is necessary to convert the input signal to a spike train format before feeding the input signal to the SNNs. One of the challenges during conversion of the input signal to the spike train format is to ensure retention of maximum information between the input signal to the spike train format. The disclosure reveals an optimized encoding method to convert the input signal to optimized spike train for spiking neural networks. The disclosed optimized encoding approach enables maximizing mutual information between the input signal and optimized spike train by introducing an optimal Gaussian noise that augments the entire input signal data.
Type: Application
Filed: March 1, 2021
Publication date: July 14, 2022
Applicant: Tata Consultancy Services Limited
Inventors:
DIGHANCHAL BANERJEE, Sounak DEY, Arijit MUKHERJEE, Arun GEORGE
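The core idea in the TCS abstract — that adding a small, well-chosen Gaussian noise before thresholding can help a spike encoder retain information about sub-threshold inputs (a stochastic-resonance effect) — can be illustrated with a toy encoder. This sketch and its parameters are my own, not the patented method:

```python
# Toy threshold spike encoder. Without noise, a signal that never crosses
# the threshold encodes to silence; with a little Gaussian noise, larger
# samples cross the threshold more often, so some information survives.
# Illustrative only, not TCS's optimized encoding method.
import random

def encode(signal, threshold=1.0, noise_sigma=0.0, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    return [1 if s + rng.gauss(0.0, noise_sigma) >= threshold else 0
            for s in signal]

signal = [0.2, 0.9, 0.4, 0.95, 0.1]      # entirely sub-threshold
print(encode(signal))                     # [0, 0, 0, 0, 0]  (signal lost)
print(encode(signal, noise_sigma=0.3))    # noisy encoding: some spikes appear
```

Choosing the noise level that maximizes mutual information between signal and spike train is the optimization the abstract refers to; too little noise loses the signal, too much drowns it.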

SPIKING NEURAL NETWORK

Publication number: 20180260696
Abstract: Broadly speaking, embodiments of the present technique provide a neuron for a spiking neural network, where the neuron is formed of at least one Correlated Electron Random Access Memory (CeRAM) element or Correlated Electron Switch (CES) element.
Type: Application
Filed: March 8, 2017
Publication date: September 13, 2018
Applicant: ARM LTD
Inventors: Naveen SUDA, Vikas CHANDRA, Brian Tracy CLINE, Saurabh Pijuskumar SINHA, Shidhartha DAS
 
Reactions: 11 users

equanimous

Norse clairvoyant shapeshifter goddess
You've been talking about partnership. This is a presentation from 2019. The actual known partnership is with Tata Elxsi.
Yep, I agree that I should not have said partnership with TCS, but it's likely that we are working with them. Isn't it all under the Tata umbrella?
 
Reactions: 5 users

Perhaps

Regular
Yep, I agree that I should not have said partnership with TCS, but it's likely that we are working with them
Hard to say what's really going on. My view is they try everything neuromorphic and build their own patent portfolio. Whether this should be filed under partner or competition, who knows. TCS is big in patent filings.

Here is the actual TCS portfolio dedicated to neuromorphic technology:

And here is a white paper from TCS, worth a read; Brainchip is mentioned as well:
 
Reactions: 4 users

Perhaps

Regular
Additional to TCS discussion:

(attached screenshots)
Here is the full package of neuromorphic research at TCS.
Worth a deeper look; I don't have the time now to dig into it:

 
Reactions: 7 users

Frangipani

Top 20
The podcast with Dr Alexandre Marcireau from ICNS reminded me that I had long wanted to bring another podcast to your attention, namely the first episode of the new Brains and Machines podcast with Sunny Bains and her interview partner Prof André van Schaik, Director of ICNS, which was recorded during the Capo Caccia Neuromorphic Workshop 2023 (organised by the Zurich Institute of Neuroinformatics) in the first week of May.


Now the tech is way above my head, hence I could be totally off, but I just thought I’d mention it anyway, so the more tech-savvy can give their opinions. So here are my thoughts:

Even though Brainchip is targeting the Edge AI market with Akida, I have been wondering whether André van Schaik could be hinting at Akida being used in the ICNS Deep South project, a large FPGA-based neuromorphic simulator that’s currently in the works. He talks about commercial hardware they will be using to build it, and as completion of building the platform, which will consist of “a bit over a hundred FPGAs”, is scheduled for the end of the year, the timeline would nicely align with the recent release of Akida 2000.

ICNS originally seems to have started on this project back in 2021 in collaboration with Intel, set out as a two-year proof-of-concept project at the time:

(attached screenshot)

More details here:



I am wondering, though, whether the ICNS researchers started out with Intel, then may have gotten their hands on the Akida 1000 reference chip, realised that Brainchip’s product(s) would be a much better choice for their envisaged large neural simulator and hence switched to Akida for any further planning once their proof-of-concept model with Intel was done and dusted, or have been waiting with bated breath for Akida 2000 to be released?

I find it very weird that André van Schaik does not mention Intel at all, when talking about the Deep South project, as they did start out with them in 2021. Also the option to scale up and down sounds very familiar, doesn’t it? Does anyone know the price tag for an Akida 2000 reference chip?


Here are some excerpts from the podcast that I found relevant to my thoughts:

SB: More recently, you’ve been working on a very ambitious project to build large neural simulators using FPGAs. So, can you start by telling us what you hope to achieve with this project, and then I’ll ask you a little bit more about how it works.

AVS: Sure. What I’m trying to achieve with this project is a similar enabling technology to what GPUs were for neural networks. At the beginning I mentioned neural networks tanking in the 90s just as I wanted to start on them, and coming back when GPUs made it possible to simulate really large, deep neural networks in a reasonable amount of time.

Now a problem for spiking neural networks, which is what we’re interested in at the moment because brains are spiking neural networks, is that they are terrible to simulate on a computer—it’s very slow to do large networks. And so, I want to create a technology that enables you to simulate these large-scale spiking neural networks in a reasonable amount of time.

And I want to do it in such a way that we don’t build our own chips for this, but that we use commercial hardware instead.
Because, like the GPU, it was not developed for neural networks. It was a technology that was commercially developed for graphical processing on computers. FPGAs (reconfigurable hardware) are another commercial technology that’s being used for various applications, and one application that we think they’re good for is simulating spiking neural networks.

The advantage of using commercial technology is that we don’t have to develop our own chips in a small university research group, where you can maybe do one generation of chip every so many years. Some of the other groups around the world that are doing spiking neural network accelerators, they’re building their own chip. And you’re seeing that going from generation one to generation two takes typically five, six years to do that. So, that’s a very slow iteration—and my contention is that we don’t yet know what it is exactly that we want on these hardware systems, on these accelerators. What neural model do we need? What plasticity model? What connectivity patterns? That should still be open.

So, the advantage of FPGAs is that we can reconfigure it, so it can be very flexible. We’ll start with an initial design, but then the design can be iterated, and it will be open source, as well. So, anybody in the world will be able to work with the machine, but also add things to it if we think we need it.


(…)

SB: Do you have a name for it, by the way?

AVS: Not really. The original design we call Deep South in response to True North, but True North now is getting really old and is not really that active anymore, so we need to come up with a better system, but it was Deep South because we’re based in Australia down under, so it was a nice balance to the True North.

SB:
So, the new FPGA machine—it’s essentially a simulator. So, although you could use it to solve problems in its own right, right? You could use it as a machine to do stuff. That’s actually not its intention. The main intention is to understand the principles and to optimize models that could then be built in the next generation of optimized, small, power efficient hardware to do those things in all sorts of applications, right?

So, this is almost like an intermediate step, an experimental platform much in the same way that Spinnaker and Loihi are intermediate steps and experimental platforms.

AVS: Absolutely. We just think that this platform based on FPGAs provides more flexibility for this intermediate step to figure out what is it that we actually want on the system, before you distil that down into a really efficient, low-power chip if that’s what you need.

Also, the design is modular, so we can make really large systems, or you can use one FPGA and use a smaller system that you’d want on a robot or on a drone or something like that to do the processing locally. So, you can scale up and down with this system, as well. In theory, all the way to human brain level computation and beyond in terms of the number of operations per second that you’re doing.

SB:
Now this is, this is a long-term project that you’re sort of at the beginning of, right? As I understand it, you’ve done some proof of concept, first iteration of your design, but you’ve got quite a bit of work to do to get to what you’ve just described, am I right?

AVS:
Yes, but we’ve made a fair bit of progress behind the curtains, I guess. And so, we are looking to build a system that can do human brain scale number of operations per second this year using commercially available FPGAs. That system hopefully will exist at the end of the year. Then it’s a matter of making it also user friendly because we don’t want people to have to do FPGA programming to use the system. So, it’s a matter of providing software interface, user interface, that allows people to specify the networks that they want on it. That might take a little bit longer to be ready for people to use, but I hope that we’ll be pretty far on that next year.

And again, I hope that we’ll do that in an open-source way with contributions from a global community.
And then an even longer-term aspect of it is: if you have that data of a billion neurons in your spiking neural network, how do you analyze that data? And that’s an interesting research question—how do you visualize that? And those are clear areas where we’re going to need help because I don’t really know the answer to those questions.

SB: And I noticed, because we’re recording this at the Capo Caccia Neuromorphic Workshop, I noticed you’re looking for postdocs and people to come and help you in this endeavor.

AVS: Yes, the more the merrier, basically. And, we have positions open in Sydney, pretty much constantly. It’s just hard to find the number of people that we need for these efforts. And we’re in an interesting time at the moment in neuromorphic engineering where funding is easier to get than people, and that hasn’t always been the case.

But there’s a lot of interest from industry, defense—all those non-academic players in neuromorphic engineering and what can it do. That’s been a real change over the years. It used to always be, I had to explain first coming through the door to somebody what neuromorphic engineering was. Whereas now, company representatives contact me and ask about what can neuromorphic engineering do for them or how can we collaborate.

And that’s a massive change that has happened over the last five years.

SB:
So, you’re expecting that by the end of…certainly by the end of 2024, you would have people in different labs around the globe playing with your machine in the cloud, essentially.

AVS: Absolutely. Yes.

SB: And presumably because it’s commercially based FPGAs, they could, if they wanted to have a local one, they could do that very simply as well, right? So, that’s your goal.

AVS: With the funding, I can buy one FPGA or a few FPGAs. These are high-end FPGAs, so they are about $10,000 each. So, it’s not something that everybody will buy, but at the same time, as a university piece of equipment, buying several of them is possible, obviously. And our system will cost several million dollars to put all the hardware together—that’s a lot of money. But at the same time, it’s not impossible for somebody to replicate somewhere else if they realize they need a system at their university or at their company.

SB: So how many chips will be on the version of the system that you’re building right now?

AVS: A bit over a hundred FPGAs will be on that system.


(…)

SB: Now I don’t want to encourage you to become a betting man, but you talked about applications. Looking forward over the next 10-15 years, that kind of timescale, which are the ones that you think are most likely to have neuromorphic elements to them, before the likes of you and I retire?

AVS: The most safe bet for me would be neuromorphic vision systems. Sensing in a larger term too, but vision systems are the ones that have developed the furthest. And I think there will be a fair bit of development in that area. At the moment, we use event cameras as I described earlier. There [is] only one form of camera, of neuromorphic camera.

All each pixel does is detect changes in time. Biological retinas also look at spatial processing, what are the neighboring photoreceptors doing? That’s not in these cameras. We can try and build that in. An advantage of the current camera is that if you keep the camera still, only things that move will generate changes, and therefore you automatically extract things that move.


But if you start moving the camera, everything changes all the time, which is actually a disadvantage for the cameras. We can build cameras that try and compensate for these things. We can build cameras that don’t just use visible light, but that do infrared or ultraviolet or hyperspectral versions of these cameras.

So, there’s a whole range in applications in sensing, vision sensing in particular, but we’re also doing work in the lab on audio, on olfaction, smell. I’m interested in tactile. I’m interested in electric sense of the shark, or radar, or neuromorphic versions of that. So, I think there we’ll see a lot of first applications happening.

I’m hopeful that we will get applications out of neuromorphic computation with spiking neural networks, and that the system we build inspires stuff; we saw that with the GPUs. It reinvigorated the field, and once you have a critical mass, things go really, really fast and then snowball. Progress has been so fast over the last decade in deep neural networks that we might trigger that in spiking neural networks, but that’s much harder to predict, so I wouldn’t want to bet on that.

(…)

REC: Yeah, so definitely, Andre speaks to this, right. [He] indicates that from the olden days, if you will, what constituted [a] neuromorphic system is very different from what constituted a neuromorphic system now. For example, back in the day, a neuromorphic system had to be hardware, had to be analog, had to mimic parsimoniously as possible the biological system that’s being modelled.

Today, we have the various models like the Deep South that Andre speaks about, which is strictly a digital system. Back then, that would not have been considered to be neuromorphic.
 
Reactions: 19 users

Tothemoon24

Top 20

Audio sensors and beyond neuromorphic

“Automotive is an important market for companies like Prophesee, but it’s a long play,” Ocket said. “If you want to develop a product for autonomous cars, you’ll need to think seven to 10 years ahead. And you’ll need the patience and deep pockets to sustain your company until the market really takes off.”

In the meantime, event-based cameras are meeting the needs of several other markets. These include industrial use cases that require ultra-high-speed counting, particle size monitoring and vibration monitoring for predictive maintenance. Other applications include eye tracking, visual odometry and gesture detection for AR and VR. And in China, there is a growing market for small cameras in toy animals. The cameras need to operate at low power—and the most important thing for them to detect is movement. Neuromorphic cameras meet this need, operating on very little power, and fitting nicely into toys.

Neuromorphic principles can also be applied to audio sensors. Like the retina, the cochlea does not sample spectrograms at fixed intervals. It just conveys changes in sensory input. So far, there are not many examples of neuromorphic audio sensors, but that’s likely to change soon since audio-based AI is now in high demand. Neuromorphic principles can also be applied to sensors with no biological counterpart, like radar or LiDAR.
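The "only convey changes" principle described here for the retina and cochlea is often realised as send-on-delta encoding: emit an event only when the input has moved by more than a set amount, rather than sampling at fixed intervals. A minimal toy sketch (my own construction, not any named sensor's actual scheme):

```python
# Toy send-on-delta encoder: a flat signal produces no events; only
# changes larger than `delta` are reported, as signed change events.
# Illustrative only, not a real neuromorphic sensor's pipeline.

def send_on_delta(samples, delta=0.5):
    events = []          # (sample index, +1 or -1) change events
    last = samples[0]    # last reported level
    for i, s in enumerate(samples[1:], start=1):
        while s - last >= delta:     # signal rose by at least delta
            last += delta
            events.append((i, +1))
        while last - s >= delta:     # signal fell by at least delta
            last -= delta
            events.append((i, -1))
    return events

# Constant stretches are silent; only the step up and the drop emit events.
print(send_on_delta([0.0, 0.0, 1.2, 1.2, 0.1]))
# [(2, 1), (2, 1), (4, -1)]
```

This is why such sensors sip power at rest: with no change there is simply no data to transmit or process.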
 
Reactions: 23 users

jtardif999

Regular
In the US, the Constitution only permits the actual inventor to apply for a patent. Where the inventor works for a company and the invention relates to the company's business, the patent will normally be assigned to the company.
Yeah I think the only patent directly owned by PVDM is the original, filed in 2008 before BrainChip the company came into existence.
 
Reactions: 5 users

Diogenese

Top 20
Yeah I think the only patent directly owned by PVDM is the original, filed in 2008 before BrainChip the company came into existence.
No. US8250011 was the one filed in 2008.
 
Reactions: 13 users


MDhere

Top 20
I may as well tick it all!!

(attached photo)
 
Reactions: 31 users