BRN Discussion Ongoing

Something is missing in this forum.

Where's Fact Finder?

@Fact Finder
 
Reactions: 5 users
Juicy!! Not a bad guy to plug our Chipper, and Prophesee has also liked the post.


[LinkedIn screenshots attached]
 
Reactions: 55 users

MrRomper

Regular
Reactions: 67 users

Tothemoon24

Top 20
Some familiar quotes below, inside a newly structured read with additional info.





Why you will be seeing much more from event cameras
14 February 2023

February/March 2023
Advances in sensors that capture images like real eyes, plus in the software and hardware to process them, are bringing a paradigm shift in imaging, finds Andrei Mihai

The field of neuromorphic vision, where electronic cameras mimic the biological eye, has been around for some 30 years.

Neuromorphic cameras (also called event cameras) mimic the function of the retina, the part of the eye that contains light-sensitive cells. This is a fundamental change from conventional cameras – and why applications for event cameras for industry and research are also different.
Conventional cameras are built for capturing images and visually reproducing them.
They snap frames at predefined intervals, capturing the whole field of view regardless of how the scene is changing. These frame-based cameras work excellently for their purpose, but they are not optimised for sensing or machine vision. They capture a great deal of information but, from a sensing perspective, much of it is useless, because it is not changing.
Event cameras suppress this redundancy and have fundamental benefits in terms of efficiency, speed, and dynamic range, achieving a speed-versus-power-consumption trade-off up to three orders of magnitude better than frame-based sensors. Because they acquire information in a fundamentally different way from a conventional camera, they also address different applications in machine vision and AI.

Event camera systems can quickly and efficiently monitor particle size and movement

“Essentially, what we’re bringing to the table is a new approach to sensing information, very different to conventional cameras that have been around for many years,” says Luca Verre, CEO of Prophesee, a market leader in the field.
Whereas most commercial cameras are essentially optimised to produce attractive images, the needs of the automotive, industrial, and Internet of Things (IoT) industries, and even some consumer products, often demand different performance. If you are monitoring change, for instance, as much as 90% of the scene is useless information because it does not change. Event cameras bypass this: they only register when light increases or decreases by a certain relative amount, which produces a so-called “change event”.
In modern neuromorphic cameras, each pixel of the sensor works independently (asynchronously) and records continuously, so there is no downtime, even down to microsecond timescales. And because they only report changes, they do not record redundant data. This is one of the key aspects driving the field forward.
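In code, that per-pixel behaviour can be captured in a minimal sketch. The logarithmic-intensity model is standard for event sensors, but the 15% contrast threshold and all names here are illustrative assumptions, not Prophesee's actual design:

```python
import math

CONTRAST_THRESHOLD = 0.15  # illustrative ~15% relative change, not a real sensor spec

class EventPixel:
    """One asynchronous pixel: emits an event only when log-intensity
    changes by more than a fixed contrast threshold."""

    def __init__(self, x, y, initial_intensity):
        self.x, self.y = x, y
        self.ref_log_i = math.log(initial_intensity)

    def observe(self, intensity, timestamp_us):
        delta = math.log(intensity) - self.ref_log_i
        if abs(delta) >= CONTRAST_THRESHOLD:
            self.ref_log_i += delta            # reset reference to the new level
            polarity = 1 if delta > 0 else -1  # brighter -> ON, darker -> OFF
            return (self.x, self.y, timestamp_us, polarity)  # a "change event"
        return None  # unchanged light produces no data at all

pixel = EventPixel(10, 20, initial_intensity=100.0)
print(pixel.observe(100.0, 1))  # None: redundancy is suppressed
print(pixel.observe(130.0, 2))  # (10, 20, 2, 1): ~26% brighter, one ON event
```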

Innovation in neuromorphic vision

Vision sensors typically gather a lot of data, and increasingly there is a drive to process that data at the edge. For many machine vision applications, edge computation has become a bottleneck; for event cameras, it is the opposite.
“More and more, sensor cameras are used for some local processing, some edge processing, and this is where we believe we have a technology and an approach that can bring value to this application,” says Verre.
“We are enabling fully fledged edge computing by the fact that our sensors produce very low data volumes. So, you can afford to have a cost-reasonable, low-power system on a chip at the edge, because you can simply generate a few event data that this processor can easily interface with and process locally.

“Instead of feeding this processor with tons of frames that overload them and hinder their capability to process data in real-time, our event camera can enable them to do real-time across a scene. We believe that event cameras are finally unlocking this edge processing.”
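A back-of-envelope comparison makes the data-volume argument concrete. Every figure below is an illustrative assumption, not a vendor specification:

```python
# Frame camera: every pixel, every frame, whether the scene changes or not.
frame_w, frame_h, fps, bytes_per_px = 1280, 720, 30, 1
frame_bytes_per_s = frame_w * frame_h * fps * bytes_per_px

# Event camera: only changing pixels report, assuming a sparse scene.
events_per_s, bytes_per_event = 200_000, 8
event_bytes_per_s = events_per_s * bytes_per_event

print(f"frame camera: {frame_bytes_per_s / 1e6:.1f} MB/s")  # ~27.6 MB/s
print(f"event camera: {event_bytes_per_s / 1e6:.1f} MB/s")  # ~1.6 MB/s
```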
Making sensors smaller and cheaper is also a key innovation because it opens up a range of potential applications, such as in IoT sensing or smartphones. For this, Prophesee partnered with Sony, mixing its expertise in event cameras with Sony’s infrastructure and experience in vision sensors to develop a smaller, more efficient, and cheaper event camera evaluation kit. Verre thinks the pricing of event cameras is at a point where they can be realistically introduced into smartphones.
Another area companies are eyeing is fusion kits – the basic idea is to mix the capability of a neuromorphic camera with another vision sensor, such as lidar or a conventional camera, into a single system.
“From both the spatial information of a frame-based camera and from the information of an event-based camera, you can actually open the door to many other applications,” says Verre. “Definitely, there is potential in sensor fusion… by combining event-based sensors with some lidar technologies, for instance, in navigation, localisation, and mapping.”

Neuromorphic computing progress

However, while neuromorphic cameras mimic the human eye, the processing chips they work with are far from mimicking the human brain. Most neuromorphic computing, including work on event camera computing, is carried out using deep learning algorithms that run on CPUs or GPUs, which are not optimised for neuromorphic processing. This is where new chips such as Intel’s Loihi 2 (a neuromorphic research chip) and Lava (an open-source software framework) come in.
“Our second-generation chip greatly improves the speed, programmability, and capacity of neuromorphic processing, broadening its usages in power and latency-constrained intelligent computing applications,” says Mike Davies, Director of Intel’s Neuromorphic Computing Lab.

BrainChip, a neuromorphic computing IP vendor, also partnered with Prophesee to deliver event-based vision systems with integrated low-power technology coupled with high AI performance.
It is not only industry accelerating the field of neuromorphic chips for vision – there is also an emerging but already active academic field. Neuromorphic systems have enormous potential, yet they are rarely used in a non-academic context; in particular, there are still no industrial deployments of these bio-inspired technologies. Nevertheless, event-based solutions are already far superior to conventional algorithms in terms of latency and energy efficiency.
Working with the first iteration of the Loihi chip in 2019, Alpha Renner et al (‘Event-based attention and tracking on neuromorphic hardware’) developed the first set-up to interface an event-based camera with the spiking neuromorphic system Loihi, creating a purely event-driven sensing and processing system. The system selects a single object out of a number of moving objects and tracks it in the visual field, even when the movement stops and the event stream is interrupted.
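A toy, non-spiking version of that tracking behaviour can be sketched as an exponential moving average over event coordinates. This is not the Renner et al implementation (which uses spiking attention networks on Loihi); the smoothing factor is an arbitrary choice:

```python
class EventTracker:
    """Tracks one object as a running average of incoming event positions.
    When the event stream pauses (the object stops moving), the estimate
    simply persists, mirroring the behaviour described above."""

    def __init__(self, alpha=0.05):  # illustrative smoothing factor
        self.alpha = alpha
        self.estimate = None  # (x, y), or None before the first event

    def update(self, event):
        x, y, _t, _polarity = event
        if self.estimate is None:
            self.estimate = (float(x), float(y))
        else:
            ex, ey = self.estimate
            self.estimate = (ex + self.alpha * (x - ex),
                             ey + self.alpha * (y - ey))
        return self.estimate

tracker = EventTracker()
for ev in [(10, 20, 1, 1), (12, 21, 2, 1), (11, 19, 3, -1)]:
    print(tracker.update(ev))
# No events arrive while the object is still -> the last estimate is retained.
```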
In 2021, Viale et al demonstrated the first spiking neural network (SNN) on a chip used for a neuromorphic vision-based controller solving a high-speed UAV control task. Ongoing research is looking at ways to use neuromorphic neural networks to integrate chips and event cameras for autonomous cars. Since many of these applications use the Loihi chip, newer generations, such as Loihi 2, should speed development. Other neuromorphic chips are also emerging, allowing quick learning and training of algorithms even with small datasets. Specialised SNN algorithms running on neuromorphic chips can further help edge processing and general computing in event vision.
“The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint – time. Due to their asynchronous course of operation, considering the precise occurrence of spikes, spiking neural networks take advantage of this constraint,” write Lea Steffen and colleagues (‘Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms’).
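The timing constraint is easiest to see in a leaky integrate-and-fire (LIF) neuron, the basic unit of most SNNs. The sketch below is the generic textbook model, with parameter values chosen purely for illustration:

```python
import math

def lif_neuron(spike_times_us, tau_us=10_000.0, weight=0.6, threshold=1.0):
    """Leaky integrate-and-fire: the membrane potential decays between
    input spikes, so only spikes arriving close together in time push
    it over threshold. Timing itself carries the information."""
    v, last_t, out = 0.0, None, []
    for t in spike_times_us:
        if last_t is not None:
            v *= math.exp(-(t - last_t) / tau_us)  # leak since last spike
        v += weight
        last_t = t
        if v >= threshold:
            out.append(t)  # output spike
            v = 0.0        # reset
    return out

print(lif_neuron([0, 2_000, 4_000]))     # closely spaced inputs -> [2000]
print(lif_neuron([0, 50_000, 100_000]))  # same spikes, spread out -> []
```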
Lighting is another aspect the field of neuromorphic vision is increasingly looking at. An advantage of event cameras compared with frame-based cameras is their ability to deal with a range of extreme light conditions – whether high or low. But event cameras can now use light itself in a different way.
Prophesee and CIS have started work on the industry’s first evaluation kit for implementing 3D sensing based on structured light, using event-based vision and point-cloud generation to produce an accurate 3D point cloud.
“You can then use this principle to project the light pattern in the scene and, because you know the geometry of the setting, you can compute the disparity map and then estimate the 3D and depth information,” says Verre. “We can reach this 3D point cloud at a refresh rate of 1kHz or above. So, any 3D application, such as 3D measurement or 3D navigation, that requires high speed and time precision really benefits from this technology. There are no comparable 3D approaches available today that can reach this time resolution.”
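The depth computation Verre describes rests on the standard triangulation relation used in stereo and structured-light systems, Z = f·B/d. A minimal sketch, with purely illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Standard triangulation: depth is inversely proportional to the
    disparity between the projected pattern and its observed position."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Assumed setup: 1000 px focal length, 10 cm projector-to-sensor baseline.
for d in (5.0, 20.0, 80.0):
    z = depth_from_disparity(d, focal_length_px=1000, baseline_m=0.10)
    print(f"disparity {d:5.1f} px -> depth {z:.2f} m")
# 20.00 m, 5.00 m, 1.25 m: nearer surfaces produce larger disparities.
```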

Industrial applications of event vision

Due to its inherent advantages, as well as progress in the field of peripherals (such as neuromorphic chips and lighting systems) and algorithms, we can expect the deployment of neuromorphic vision systems to continue – especially as systems become increasingly cost-effective.

Event vision can trace particles or monitor vibrations with low latency, low energy consumption, and relatively low amounts of data
We have mentioned some of the applications of event cameras here at IMVE before, from helping restore people’s vision to tracking and managing space debris. But in the near future perhaps the biggest impact will be at an industrial level.
From tracing particles and quality control to monitoring vibrations, all with low latency, low energy consumption, and relatively low amounts of data that favour edge computing, event vision promises to become a mainstay in many industrial processes. Lowering costs through scaled production and better sensor design is opening even more doors.
Smartphones are one field where event cameras may make an unexpected entrance, but Verre says this is just the tip of the iceberg. He is looking forward to a paradigm shift and is most excited about all the applications that will soon pop up for event cameras – some of which we probably cannot yet envision.
“I see these technologies and new tech sensing modalities as a new paradigm that will create a new standard in the market. And in serving many, many applications, so we will see more event-based cameras all around us. This is so exciting.”
Lead image credit: Vector Tradition/Shutterstock.com
Tags: event-based vision, neuromorphic vision
 
Reactions: 31 users
I posted this in a different thread in error, so I deleted it and am posting it here.

I see our friends at Prophesee are doing a presso at the upcoming tinyML.

It'd be great if one day, sooner rather than later preferably, it's revealed to have a little Akida sauce in it... or will have. :)


tinyML EMEA Innovation Forum - June 26-28, 2023​



Event sensors for embedded edge AI vision applications

Christoph POSCH, CTO, PROPHESEE

Abstract (English)

Event-based vision is an emerging paradigm for the acquisition and processing of visual information in numerous artificial vision applications across industrial, surveillance, IoT, AR/VR, automotive and other domains. The highly efficient acquisition of sparse data and robustness to uncontrolled lighting conditions are characteristics of the event sensing process that make event-based vision attractive for at-the-edge visual perception systems that must cope with limited resources and a high degree of autonomy.

However, the unconventional format of the event data, non-constant data rates, non-standard interfaces and, in general, the way dynamic visual information is encoded inside the data pose challenges to the usage and integration of event sensors in an embedded vision system.

Prophesee has recently developed the first of a new generation of event sensors designed with the explicit goal of improving the integrability and usability of event sensing technology in embedded at-the-edge vision systems.

Particular emphasis has been put on event data pre-processing and formatting, data interface compatibility and low-latency connectivity to various processing platforms, including low-power uCs and neuromorphic processor architectures.

Furthermore, the sensor has been optimized for ultra-low-power operation, featuring a hierarchy of low-power modes and application-specific modes of operation. On-chip power management and an embedded uC core further improve sensor flexibility and usability at the edge.
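To make the data-format challenge concrete, here is a sketch of packing and unpacking one event word. The 64-bit layout (timestamp, x, y, polarity fields) is hypothetical, invented for illustration; real sensors use vendor-specific encodings such as Prophesee's EVT formats:

```python
def encode_event(timestamp_us, x, y, polarity):
    """Pack into a hypothetical 64-bit word:
    [32-bit timestamp_us | 12-bit x | 12-bit y | 1-bit polarity]."""
    return ((timestamp_us & 0xFFFFFFFF) << 25) | ((x & 0xFFF) << 13) \
           | ((y & 0xFFF) << 1) | (1 if polarity > 0 else 0)

def decode_event(word):
    """Unpack the same hypothetical layout back into a tuple."""
    polarity = 1 if (word & 0x1) else -1
    y = (word >> 1) & 0xFFF
    x = (word >> 13) & 0xFFF
    timestamp_us = (word >> 25) & 0xFFFFFFFF
    return timestamp_us, x, y, polarity

word = encode_event(123_456, x=640, y=360, polarity=1)
print(decode_event(word))  # (123456, 640, 360, 1)
```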
 
Reactions: 20 users

MrRomper

Regular
@Diogenese, another patent from Toshiba referencing SNNs. It does not mention BrainChip, but am I getting closer?

Source USPTO
 

Attachments

  • 11625579.pdf
    1.6 MB · Views: 151
Reactions: 9 users

Tothemoon24

Top 20
Something is missing in this forum.

Where's Fact Finder?

@Fact Finder
@Fact Finder has been employed by Brainchip as acting Legal Counsel.

Obviously he’s no longer able to make comments or express opinions in relation to the said company, BrainChip Holdings.

He’s permitted to read what’s written on this forum & it’s my wish that if he chooses to read this post that he does NOT seek damages due to myself @Tothemoon24 speaking absolute crap as usual.
 
Reactions: 32 users

The Pope

Regular
Tothemoon24 said: "@Fact Finder has been employed by Brainchip as acting Legal Counsel…"
I also appreciate that FF’s last post was on April Fools’ Day as well. Lol
 
Reactions: 10 users
Tothemoon24 said: "@Fact Finder has been employed by Brainchip as acting Legal Counsel…"
That's a good choice by Brainchip, although it's also sort of stealing something precious from this forum 😀
 
Reactions: 6 users

Tothemoon24

Top 20

Sony investment will put AI chips inside Raspberry Pi boards​

Steve Dent
April 12, 2023, 6:35 pm

Sony's semiconductor division has announced that it's making a "strategic investment" in Raspberry Pi as a way to bring its AI tech to a wider market. The idea is to give Raspberry Pi users around the world a development platform for its Aitrios edge computing (on-chip) AI platform used for image sensing functions like facial recognition.
"We are very pleased to be partnering with Raspberry Pi Ltd. to bring our Aitrios platform — which supports the development of unique and diverse solutions utilizing our edge AI devices — to the Raspberry Pi user and developer community, and provide a unique development experience," said Sony Semiconductor Solutions president and CEO Terushi Shimizu.

The Raspberry Pi 4 and other devices from the company give users PC-like power in a small form factor. Originally designed as an educational platform to teach robotics, coding and more, it has become popular as a way for coders to prototype IoT (Internet of Things) and other devices.
The addition of Sony's Aitrios could make it even more useful. Unlike cloud AI, it runs directly on chips (edge computing) to reduce latency, and Sony has pitched the system for uses like surveillance, security and more. Examples cited on a dedicated website include inventory monitoring and retention, customer counting, license plate recognition and "detailed employee analysis." Sony says it preserves privacy by analyzing data strictly on-chip and only sending metadata to the cloud.
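A rough sketch of that edge pattern, in which inference runs locally and only compact metadata ever leaves the device. All function names and fields below are hypothetical; none of them come from the actual Aitrios SDK:

```python
import json

def run_local_inference(image):
    # Placeholder for on-chip model execution; returns detections only.
    return [{"label": "person", "confidence": 0.91}]

def to_cloud_payload(detections, device_id="edge-001"):
    # Raw pixels are discarded; only metadata is serialised for upload.
    return json.dumps({"device": device_id, "detections": detections})

frame = object()  # stands in for a captured image, which is never uploaded
print(to_cloud_payload(run_local_inference(frame)))
```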
Sony is already involved with Raspberry Pi as a "longstanding and valued strategic partner," the company said. It recently provided imaging chips with autofocus capability and helped Raspberry Pi get its UK manufacturing plant up to speed in the early days of the company.
 
Reactions: 21 users

charles2

Regular
Must be a good thing...

 
Reactions: 30 users

Dugnal

Member
Reactions: 5 users
Sounds neuromorphic to me. Emphasis mine.



Cambridge University scientists create robotic hand able to hold objects​

[Image: The 3D-printed hand was initially trained using plastic balls (PA Media)]
Scientists have designed a robotic hand that can grasp and hold objects using only the movement of its wrist.
The 3D-printed hand was created by a team at the University of Cambridge.
It was implanted with sensors that enabled it to "sense" what it was touching, and more than 1,200 tests were carried out using objects including a peach, a computer mouse and bubble wrap.
Researchers said the human hand was extremely complex and recreating its capabilities was a massive challenge.
Dr Thomas George-Thuruthel, formerly of the University of Cambridge and now a lecturer in robotics and AI at University College London, said: "The sensors, which are sort of like the robot's skin, measure the pressure being applied to the object.
"We can't say exactly what information the robot is getting, but it can theoretically estimate where the object has been grasped and with how much force."

Researchers said the technology was low-cost and energy-efficient, as it did not require the fingers to move independently.
[Image: This hand was only capable of passive, wrist-based movement; the individual fingers were not fully motorised (PA Media)]
The team said humans instinctively knew how much force to use when picking up an egg - but for a robot, this was a challenge.
If the robot applied too much force, the egg could break, and if there was not enough pressure, it could drop the egg.
The robotic hand was able to successfully grasp 11 of 14 objects it was tested on.
Fumiya Iida, professor of robotics at the University of Cambridge's department of engineering, said: "The big advantage of this design is the range of motion we can get without using any actuators.
"We want to simplify the hand as much as possible.

"We can get lots of good information and a high degree of control without any actuators, so that when we do add them, we'll get more complex behaviour in a more efficient package."


*Edit: full article here: https://www.cam.ac.uk/stories/robotic-hand



Funny how ARM is helping to create the robotic hand 😄
 
Reactions: 19 users

Glen

Regular
Reactions: 15 users

Sirod69

bavarian girl ;-)
Tim Llewellynn
CEO/Co-Founder of NVISO Human Behaviour AI | President, Bonseyes Community Association | Coordinator, Bonseyes AI Marketplace | IBM Beacon Award Winner #metaverse #edgeai #decentralizedai
33 min ago


And we haven't even arrived at #neuromorphic computing - there is so much performance improvement possible with Edge AI that applications in the next 5 years are going to be mind blowing. It will feel like Edge AI applications arrived from nowhere - but have effectively been over a decade in the making.
 
Reactions: 27 users

Sirod69

bavarian girl ;-)
Edge Impulse monthly newsletter has arrived via email. BrainChip prominently displayed.

Won't copy as I am too un-savvy to remove my personal data.

Someone else surely will.
I think you mean this?

Cutting-Edge Hardware​

 
Reactions: 19 users
D

Deleted member 118

Guest
 
Reactions: 3 users

IloveLamp

Top 20


[LinkedIn screenshot attached]
 
Reactions: 18 users

Quercuskid

Regular
Reactions: 12 users