BRN Discussion Ongoing

Diogenese

Top 20
Modern neuromorphic processor architectures...PLURAL...Hmmm?????



This Tiny Sensor Could Be in Your Next Headset

Prophesee

Neuromorphic computing company develops event-based vision sensor for edge AI apps.
Spencer Chin | Oct 16, 2023


As edge-based artificial intelligence (AI) applications become more common, there will be a greater need for sensors that can meet the power and environmental needs of edge hardware. Prophesee SA, which supplies advanced neuromorphic vision systems, has introduced an event-based vision sensor for integration into ultra-low-power edge AI vision devices. The GenX320 Metavision sensor, built on a tiny 3x4mm die, extends the company’s technology platform into growing intelligent edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, and always-on intelligent IoT devices.

According to Luca Verre, CEO and co-founder of Prophesee, the concept of event-based vision has been researched for years, but developing a viable commercial implementation in a sensor-like device has only happened relatively recently. “Prophesee has used a combination of expertise and innovative developments around neuromorphic computing, VLSI design, AI algorithm development, and CMOS image sensing,” said Verre in an e-mail interview with Design News. “Together, those skills and advancements, along with critical partnerships with companies like Sony, Intel, Bosch, Xiaomi, Qualcomm, and others have enabled us to optimize a design for the performance, power, size, and cost requirements of various markets.”

Prophesee’s vision sensor is a 320x320, 6.3μm pixel BSI stacked event-based vision sensor that offers a tiny 1/5-in. optical format. Verre said, “The explicit goal was to improve integrability and usability in embedded at-the-edge vision systems, which in addition to size and power improvements, means the design must address the challenge of event-based vision’s unconventional data format, nonconstant data rates, and non-standard interfaces to make it more usable for a wider range of applications. We have done that with multiple integrated event data pre-processing, filtering, and formatting functions to minimize external processing overhead.”

Verre added, “In addition, MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.”

Low-Power Operation

According to Verre, the GenX320 sensor has been optimized for low-power operation, featuring a hierarchy of power modes and application-specific modes of operation. On-chip power management further improves sensor flexibility and integrability. To meet aggressive size and cost requirements, the chip is fabricated using a CMOS stacked process with pixel-level Cu-Cu bonding interconnects achieving a 6.3μm pixel-pitch.
The sensor performs low-latency, microsecond-resolution timestamping of events with flexible data formatting. On-chip intelligent power management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep sleep and standby modes are also featured.

According to Prophesee, the sensor is designed to be easily integrated with standard SoCs with multiple combined event data pre-processing, filtering, and formatting functions to minimize external processing overhead. MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.

Prophesee’s Verre expects the sensor to find applications in AR/VR headsets. “We are solving an important issue in our ability to efficiently (i.e. low power/low heat) support foveated rendering in eye tracking for a more realistic, immersive experience. Meta has discussed publicly the use of event-based vision technology, and we are actively involved with our partner Zinn Labs in this area. XPERI has already developed a driver monitor system (DMS) proof of concept based on our previous generation sensor for gaze monitoring and we are working with them on a next-gen solution using GenX320 for both automotive and other potential uses, including micro expression monitoring. The market for gesture and motion detection is very large, and our partner Ultraleap has demonstrated a working prototype of a touch-free display using our solution.”

The sensor incorporates an on-chip histogram output compatible with multiple AI accelerators. The sensor is also natively compatible with Prophesee Metavision Intelligence, an open-source event-based vision software suite that is used by a community of over 10,000 users.

Prophesee will support the GenX320 with a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip-on-board (COB) GenX320 module, or a compact optical flex module. In addition, Prophesee will offer a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as an STM32 MCU, speeding time-to-market.

Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other facets of electronics from both a business/supply-chain and technology perspective. He can be reached at Spencer.Chin@informa.com.
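As an aside, to make the article's "on-chip histogram output" concrete: an event sensor emits sparse (timestamp, x, y, polarity) events rather than frames, and binning those per pixel over a short window produces a dense tensor a conventional accelerator can consume. A minimal sketch, assuming a simple tuple-based event format (my illustration, not Prophesee's actual Metavision API):

```python
import numpy as np

# Hypothetical sketch only: the 320x320 geometry follows the GenX320, but the
# tuple format and function are illustrative assumptions, not Prophesee's
# actual output format.

WIDTH, HEIGHT = 320, 320

def events_to_histogram(events, window_us=10_000, t_start=0):
    """events: iterable of (t_us, x, y, polarity) tuples.
    Returns a (2, HEIGHT, WIDTH) per-polarity count histogram for one window."""
    hist = np.zeros((2, HEIGHT, WIDTH), dtype=np.uint16)
    for t_us, x, y, pol in events:
        if t_start <= t_us < t_start + window_us:
            hist[1 if pol > 0 else 0, y, x] += 1
    return hist

# Three synthetic events inside the first 10 ms window:
demo = [(120, 10, 12, 1), (450, 10, 12, -1), (9_999, 200, 100, 1)]
print(events_to_histogram(demo).sum())  # -> 3
```

Per-window binning like this is what lets the "unconventional" event data feed standard CNN tooling downstream.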


Hi Bravo,

This reference to foveated eye tracking is interesting, particularly as Luminar, which is reported to be taking a significant part of Mercedes' lidar business in a couple of years, uses foveated lidar.

Foveation refers to the difference between central (foveal) vision and peripheral vision. In lidar, this means the laser spot density is increased at points of interest. I think Luminar does this by increasing the rate at which laser pulses are transmitted.

Prophesee’s Verre expects the sensor to find applications in AR/VR headsets. “We are solving an important issue in our ability to efficiently (i.e. low power/low heat) support foveated rendering in eye tracking for a more realistic, immersive experience. Meta has discussed publicly the use of event-based vision technology, and we are actively involved with our partner Zinn Labs in this area.

I don't know who, if anyone, has the controlling patents for foveated lidar, but Luminar does have some patents:

US2018284234A1 Foveated Imaging in a Lidar System

To identify the most important areas in front of a vehicle for avoiding collisions, a lidar system obtains a foveated imaging model. The foveated imaging model is generated by detecting the direction in which drivers are facing at various points in time for several scenarios based on road conditions or upcoming maneuvers. The lidar system identifies an upcoming maneuver for the vehicle or a road condition and applies the identified maneuver or road condition to the foveated imaging model to identify a region of a field of regard at which to increase the resolution. The lidar system then increases the resolution at the identified region by increasing the pulse rate for transmitting light pulses within the identified region, filtering pixels outside of the identified region, or in any other suitable manner.
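In code terms, my reading of that abstract boils down to something like this toy sketch (my illustration, not Luminar's implementation; the rates and region format are made-up assumptions):

```python
# Toy sketch: a base pulse rate across the field of regard, boosted inside
# the region the foveated model flags. All numbers are illustrative.

BASE_RATE_HZ = 100_000   # assumed baseline pulse repetition rate
FOVEA_MULTIPLIER = 4     # assumed boost inside the flagged region

def pulse_rate(azimuth_deg, elevation_deg, region):
    """region: (az_min, az_max, el_min, el_max) picked by the foveated model."""
    az_min, az_max, el_min, el_max = region
    if az_min <= azimuth_deg <= az_max and el_min <= elevation_deg <= el_max:
        return BASE_RATE_HZ * FOVEA_MULTIPLIER
    return BASE_RATE_HZ

# e.g. the model flags the ahead-left region before a left turn:
print(pulse_rate(-10.0, 0.0, region=(-30.0, 0.0, -5.0, 5.0)))  # -> 400000
```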

The Zinn patent

WO2023081297A1 EYE TRACKING SYSTEM FOR DETERMINING USER ACTIVITY 20211105

refers to a "differential camera", which I guess is a DVS (dynamic vision sensor), which is where Prophesee would come in.

Zinn uses a NN with a machine-learned model trained to identify various visual activities: reading, mobile phone use, social media use, ...


Another Zinn patent application

US2023195220A1 EYE TRACKING SYSTEM WITH OFF-AXIS LIGHT SOURCES 20201217 uses a NN to detect the pupil positions, uses these to estimate the focus distance, and adjusts the focal length of a vari-focus lens accordingly.
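The geometry behind that is simple vergence triangulation. A minimal sketch of the idea (my numbers and function names, not Zinn's; assumes a fixed interpupillary distance):

```python
import math

IPD_M = 0.063  # assumed interpupillary distance, metres

def focus_distance(left_gaze_deg, right_gaze_deg):
    """Inward gaze rotation per eye, in degrees, from the pupil positions.
    Returns the fixation distance in metres from simple vergence geometry."""
    half_vergence = math.radians(left_gaze_deg + right_gaze_deg) / 2
    if half_vergence <= 0:
        return float("inf")  # parallel gaze: focused at infinity
    return (IPD_M / 2) / math.tan(half_vergence)

def lens_power_dioptres(distance_m):
    """Target optical power for the vari-focus lens."""
    return 0.0 if math.isinf(distance_m) else 1.0 / distance_m

d = focus_distance(3.0, 3.0)  # both eyes rotated 3 degrees inward
print(round(d, 2), round(lens_power_dioptres(d), 2))  # -> 0.6 1.66
```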




I couldn't find anything to show Zinn roll their own NNs.
 
  • Like
  • Fire
  • Love
Reactions: 15 users

Diogenese

Top 20
I ‘fess up. I’ve googled twice in the last hour or so for explanations of Dodgy’s posts.

To add to the now-lost count of the number of times I’ve done so. 🫤
Richard the Turd.

(Pardon my Irish accent).
 
  • Haha
  • Like
Reactions: 9 users

wilzy123

Founding Member
“The price is more of an indication of the sophistication and unrealistic expectations of the market IMO”

I remember Ken Scarince, CFO, saying publicly at the 2020 AGM that the SP should be multiple times higher than it was at the time.

So the company holds some accountability here, like it or not.
OK
 
  • Haha
Reactions: 5 users
It's honestly a surprise shorts are still around 6%, tbh, when you consider BRN's current share price against its progress over time.

Quite funny. Looks like quite a few sheep on the short train.
6% is the available short shares on offer in the system, not how many shares actually have to be covered on market.

There could be as little as 1-1.5% of those, or less, actively sold on market that need to be covered.
 
  • Like
Reactions: 2 users

M_C

Founding Member


 
  • Like
  • Fire
  • Love
Reactions: 30 users
Well, all I know, Tech, is that I had a dream also. But a nightmare as well. And the funny thing is that I was awake the whole time, and it was like watching a car crash.

I have a favorite number, and I thought it was a lucky number: 4.

So imagine holding 444,444 shares @ 28c and watching it turn into $2.34. You see your super and think: with these added funds, I could likely retire very soon. Then, as the daytime nightmare happens, you watch and think, shall I sell? Shall I sell? Shall I .....sell....?????
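(For the record, that's 444,444 × $0.28 ≈ $124,444 going in, and 444,444 × $2.34 ≈ $1.04 million at the peak.)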

Now, by no means is this anything to do with management or the company in any way. But at the same time it does make you think, WHERE THE FUCK IS MY TIME MACHINE!!!!!

o_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_Oo_O🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣


Sorry if my language is poor. If it is, please replace one word with "Hell": where the hell. Thanks in advance.
With that sentiment, the bottom could be in.
 

Diogenese

Top 20




EP4181081A1 ENERGY EFFICIENT HIERARCHICAL SNN ARCHITECTURE FOR CLASSIFICATION AND SEGMENTATION OF HIGH-RESOLUTION IMAGES 20211116

State-of-the-art techniques rely on FPGA-based approaches when power efficiency is of concern. However, compared to SNN on neuromorphic hardware, ANN on FPGA requires higher power and longer design cycles to deploy neural networks on hardware accelerators. Embodiments of the present disclosure provide a method and system for an energy-efficient hierarchical multi-stage SNN architecture for classification and segmentation of high-resolution images. A patch-to-patch-class classification approach is used, where the image is divided into smaller patches and classified at the first stage into multiple labels based on percentage coverage of a parameter of interest, for example, cloud coverage in satellite images. The image portion corresponding to the partially covered patches is divided into further smaller patches and classified by a binary classifier at the second level of classification. Labels across multiple SNN classifier levels are aggregated to identify the segmentation map of the input image in accordance with the coverage parameter of interest.
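Structurally, that two-stage patch pipeline looks something like the sketch below (a plain-Python stand-in: the patent's classifiers are SNNs on neuromorphic hardware, which I mock here with simple coverage thresholds):

```python
import numpy as np

def stage1_label(patch):
    # Stand-in for the first-stage SNN classifier: label by coverage fraction.
    cover = patch.mean()
    if cover > 0.9:
        return "full"
    if cover < 0.1:
        return "clear"
    return "partial"

def stage2_binary(subpatch):
    # Stand-in for the second-stage binary SNN classifier.
    return "full" if subpatch.mean() >= 0.5 else "clear"

def hierarchical_segment(img, p1=64, p2=16):
    """img: 2D float array in [0, 1]. Returns a label per p2 x p2 block;
    only 'partial' first-stage patches are escalated to the second stage."""
    labels = {}
    for y in range(0, img.shape[0], p1):
        for x in range(0, img.shape[1], p1):
            lab = stage1_label(img[y:y + p1, x:x + p1])
            for sy in range(y, y + p1, p2):
                for sx in range(x, x + p1, p2):
                    if lab == "partial":
                        labels[(sy, sx)] = stage2_binary(img[sy:sy + p2, sx:sx + p2])
                    else:
                        labels[(sy, sx)] = lab
    return labels

mask = (np.random.rand(128, 128) > 0.5).astype(float)  # synthetic coverage mask
print(len(hierarchical_segment(mask)))  # -> 64 blocks of 16x16
```

The energy win comes from only escalating the ambiguous patches to the second stage.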

This is similar to Akida ViT:

Vision Transformers (ViTs) - Their Popularity And Unique Architecture - BrainChip


In a Vision Transformer, an image is first divided into patches, which are then flattened and fed into a multi-layer transformer network. The self-attention mechanism allows the model to attend to different parts of the image at different scales, enabling it to simultaneously capture global and local features. The transformer’s output is passed through a final classification layer to obtain the predicted class label.
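That description maps onto a minimal ViT skeleton like the following (a generic PyTorch sketch of the standard architecture, not BrainChip's Akida implementation; all dimensions are arbitrary illustrative choices):

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, img=224, patch=16, dim=192, depth=4, heads=3, classes=10):
        super().__init__()
        n_patches = (img // patch) ** 2
        # Strided conv = divide into patches, flatten, and linearly embed in one step.
        self.to_patches = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # multi-layer transformer
        self.head = nn.Linear(dim, classes)                 # final classification layer

    def forward(self, x):
        tokens = self.to_patches(x).flatten(2).transpose(1, 2)  # (B, n_patches, dim)
        tokens = self.encoder(tokens + self.pos)                # self-attention over patches
        return self.head(tokens.mean(dim=1))                    # predicted class logits

print(TinyViT()(torch.randn(1, 3, 224, 224)).shape)  # -> torch.Size([1, 10])
```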

 
  • Like
  • Fire
  • Love
Reactions: 37 users

Ian

Founding Member
Nothing we didn't already know, but a nice couple of BrainChip mentions.

 

Attachments

  • Screenshot_20231017_192047_YouTube.jpg
  • Screenshot_20231017_192218_YouTube.jpg
  • Like
  • Fire
  • Love
Reactions: 37 users

robsmark

Regular

@zeeb0t

You clearly hold your “Founding Members” in some regard, evidenced by the fact that it’s listed under their user name.

If this is the way you want them behaving towards genuine discussion, then you probably need to reassess your forum user agreement/policy.

At this point, this (his) behaviour is akin to bullying, and as a platform owner you'll find that running an environment some people pay for, while doing absolutely nothing to moderate the behaviour it encourages, can carry serious legal ramifications.
 
  • Like
  • Love
  • Wow
Reactions: 24 users

Sam

Nothing changes if nothing changes

Attachments

  • 0EE0227B-4138-4E1E-B1CE-8C6674673888.gif
  • Like
Reactions: 1 users
(Quoting Diogenese's post above in full: the Design News article “This Tiny Sensor Could Be in Your Next Headset”, Spencer Chin, Oct 16, 2023.)

This statement locks it in for me… IMO, in layman's terms, it relates to the incoming spiking data and its subsequent translation into conventional computer speak! 😉

“means the design must address the challenge of event-based vision’s unconventional data format, nonconstant data rates, and non-standard interfaces to make it more usable for a wider range of applications.”
 
  • Like
  • Love
  • Thinking
Reactions: 12 users

AusEire

Founding Member.
And no…. no ffing laughing emojis thanks, this is next-level trolling. And it needs to be stomped out.
Who told you he was connected to the area of mental health? I'm pretty sure that wilzy has been doxxed a number of times on this platform. I haven't seen you call it out. I have also seen veiled threats of violence made towards him; have you called that out?
 
  • Like
  • Fire
Reactions: 4 users

AusEire

Founding Member.
@zeeb0t

You clearly hold your “Founding Members” in some regard, evidenced by the fact that it’s listed under their user name.

If this is the way you want them behaving towards genuine discussion, then you probably need to reassess your forum user agreement/policy.

At this point, this (his) behaviour is akin to bullying, and as a platform owner you'll find that running an environment some people pay for, while doing absolutely nothing to moderate the behaviour it encourages, can carry serious legal ramifications.
There's a report button. It's moderated by dreadbot. The idea is that the forum should be able to moderate itself: the more people that report a comment, the more likely it is that the bot will remove it. I'm sure @zeeb0t can elaborate on how it works if he feels there's a need to.
 
  • Like
  • Fire
Reactions: 5 users

robsmark

Regular
There's a report button. It's moderated by dreadbot. The idea is that the forum should be able to moderate itself. The more people that report a comment the more likely it is that the bot will remove the comment. @zeeb0t can elaborate more on how it works if he feels there's a need to I'm sure.
I don’t have a report button. Where is it?
 
  • Haha
  • Love
Reactions: 3 users

Perhaps

Regular
My French isn't that good, other than repeatedly hearing the word "oui" in some dodgy videos; however, take a look at this:


However, one of the paragraphs, translated in Google, says the following :(

" Synchronous or asynchronous operation
Directly accelerating the processing of event data could involve neuromorphic chips such as those developed by Brainchip (Akida), Intel (Loihi) and many others, some of which are beginning their commercial careers. The spiking neural networks (SNN) that they execute are therefore particularly suited to asynchronous calculations.

“That would be ideal,” recognizes Luca Verre. But, although we collaborate with Brainchip, Synsense and Intel, their chips are not yet mainstream. » Unlike the systems of Qualcomm, Renesas, AMD, etc., optimized to apply AI algorithms more common today (convolution in particular) to images from standard sensors."
This should answer all the speculation. Prophesee builds sensor chips which can run with different systems. They had great results coupling their sensor chip with Akida, but the choice of system is up to the customer. Prophesee makes its sensor chips future-ready to run with every system, including neuromorphics. That doesn't mean there is Akida IP inside, no matter how many Rob Telson likes you can count.
 
  • Like
  • Love
  • Fire
Reactions: 12 users

Kozikan

Regular
This post was made by Arijit from TCS Research 5 months ago, seeking PhD and Masters candidates.
I'm unsure if it was posted on this forum at the time.

Two months after this post by Arijit, TCS announced a formal commercial partnership with BrainChip via Tata Elxsi, with a focus on healthcare and industrial (robotics) applications.

Just think about what is being said here ... with the knowledge that it is being done utilising BrainChip IP.

At TCS Research, we specialise in embedding intelligence at the edge through Neuromorphic Computing and Spiking Neural Networks.
Our systems targeted for evolving neuromorphic hardware offer extreme low-power consumption, online learning, and real-time inferencing, ideal for IoT, edge analytics, healthcare, robotics, space-tech & more.


explore new topics, advance ongoing projects

If we can't be bullish about this ... well .... then I am lost for words !

Hiya Quiltman, please don't be lost for words on Tata. You're a great champion of theirs, and rightly so. Thank you for your input.
IMO, I'm hopeful that Tata Elxsi will be one of those few future BrainChip ASX price-sensitive announcements that could actually rerate us significantly.
Not through the financials.
Fingers crossed it's sooner than most might expect.
Just my thoughts.
 
  • Like
  • Love
Reactions: 18 users

AusEire

Founding Member.
  • Haha
  • Like
Reactions: 12 users

Sam

Nothing changes if nothing changes
Who told you he was connected to the area of mental health? I'm pretty sure that wilzy has been doxxed a number of times on this platform. I haven't seen you call it out. I have also seen veiled threats of violence made towards him; have you called that out?
Yes, I have called it out, and I will continue to, because everyone has the right to post on this forum without being bullied or belittled.
 
  • Like
  • Fire
Reactions: 5 users