BRN Discussion Ongoing

mrgds

Regular
Big Tesla recall, faulty FSD crash risk, 362,000 cars! Wow
When the media stop using the word "RECALL", the general population will be better informed.
It's an "OVER-THE-AIR UPDATE FOR THE FSD BETA" ............... NOT A PHYSICAL RECALL OF 362,000 VEHICLES :rolleyes:
 
Last edited:
  • Like
  • Thinking
Reactions: 9 users

Mugen74

Regular
Cheers Mrgds, typical dodgy media.
 
  • Like
  • Thinking
Reactions: 6 users

TheDrooben

Pretty Pretty Pretty Pretty Good
Our partner Prophesee talking big......a mention for us as well


Smartphones are one field where event cameras may make an unexpected entrance, but Verre says this is just the tip of the iceberg. He is looking forward to a paradigm shift and is most excited about all the applications that will soon pop up for event cameras – some of which we probably cannot yet envision.

“I see these technologies and new tech sensing modalities as a new paradigm that will create a new standard in the market. And in serving many, many applications, so we will see more event-based cameras all around us. This is so exciting."
 
  • Like
  • Love
  • Fire
Reactions: 33 users

Murphy

Life is not a dress rehearsal!
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
It's POETS day for the graphbot.
 
  • Haha
  • Like
Reactions: 3 users

Slade

Top 20
Tracking How the Event Camera is Evolving


Event camera processing is advancing and enabling a new wave of neuromorphic technology.

Sony, Prophesee, iniVation, and CelePixel are already working to commercialize event (spike-based) cameras. Even more important, however, is the task of processing the data these cameras produce efficiently so that it can be used in real-world applications. While some are using relatively conventional digital technology for this, others are working on more neuromorphic, or brain-like, approaches.

Though more conventional techniques are easier to program and implement in the short term, the neuromorphic approach has more potential for extremely low-power operation.
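For readers unfamiliar with the format, an "event" from such a camera is just a sparse record of a brightness change at one pixel. Below is a minimal Python sketch of that idea with a toy single-pixel simulator; the `Event` fields and threshold value are illustrative assumptions, not any vendor's actual output format:

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp (e.g. microseconds)
    polarity: int  # +1 brightness increase, -1 decrease

def simulate_pixel(log_intensity, threshold=0.2):
    """Emit an event each time log intensity drifts by `threshold`
    from the last reference level -- a crude model of one DVS pixel.
    `log_intensity` is a list of (t, value) samples; purely illustrative."""
    events = []
    ref = log_intensity[0][1]
    for t, v in log_intensity[1:]:
        while abs(v - ref) >= threshold:
            pol = 1 if v > ref else -1
            events.append(Event(0, 0, t, pol))
            ref += pol * threshold
    return events
```

The key property this models: a static pixel produces no output at all, which is where the low-power advantage of the event-based approach comes from.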

By processing the incoming signal before having to convert from spikes to data, the load on digital processors can be minimized. In addition, spikes can be used as a common language with sensors in other modalities, such as sound, touch or inertia. This is because when things happen in the real world, the most obvious thing that unifies them is time: When a ball hits a wall, it makes a sound, causes an impact that can be felt, deforms and changes direction. All of these cluster temporally. Real-time, spike-based processing can therefore be extremely efficient for finding these correlations and extracting meaning from them.
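The temporal-correlation idea above can be sketched in a few lines: group events from different sensing modalities whenever their timestamps fall within a short window. The window size and tuple format here are illustrative assumptions, not a published algorithm:

```python
def temporal_clusters(events, window_us=5000):
    """Group (timestamp, modality) events that fall within `window_us`
    of the previous event -- a crude stand-in for spike-based
    coincidence detection across vision, sound and touch."""
    events = sorted(events)            # sort by timestamp
    clusters, current = [], [events[0]]
    for ev in events[1:]:
        if ev[0] - current[-1][0] <= window_us:
            current.append(ev)
        else:
            clusters.append(current)
            current = [ev]
    clusters.append(current)
    return clusters

# The ball-hits-wall example: sight, sound and impact cluster in time,
# while an unrelated event much later falls into its own cluster.
stream = [(1000, "vision"), (1200, "audio"), (1500, "touch"), (90000, "vision")]
```

Running `temporal_clusters(stream)` groups the first three events together and leaves the late one on its own, which is the sense in which time acts as the unifying signal across modalities.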

Last time, on Nov. 21, we looked at the advantage of the two-cameras-in-one approach (DAVIS cameras), which uses the same circuitry to capture both event images, including only changing pixels, and conventional intensity images. The problem is that these two types of images encode information in fundamentally different ways.

Common language

Researchers at Peking University in Shenzhen, China, recognized that to optimize that multi-modal interoperability all the signals should ideally be represented in the same way. Essentially, they wanted to create a DAVIS camera with two modes, but with both of them communicating using events. Their reasoning was both pragmatic—it makes sense from an engineering standpoint—and biologically motivated. The human vision system, they point out, includes both peripheral vision, which is sensitive to movement, and foveal vision for fine details. Both of these feed into the same human visual system.

The Chinese researchers recently described what they call retinomorphic sensing or super vision that provides event-based output. The output can provide both dynamic sensing like conventional event cameras and intensity sensing in the form of events. They can switch back and forth between the two modes in a way that allows them to capture the dynamics and the texture of an image in a single, compressed representation that humans and machines can easily process.

These representations include the high temporal resolution you would expect from an event camera, combined with the visual texture you would get from an ordinary image or photograph.

They have achieved this performance using a prototype that consists of two sensors: a conventional event camera (DVS) and a Vidar camera, a new event camera from the same group that can efficiently create conventional frames from spikes by aggregating over a time window. They then use a spiking neural network for more advanced processing, achieving object recognition and tracking.
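The frame-from-spikes idea can be sketched simply: if each pixel fires at a rate roughly proportional to its brightness, summing spikes over a time window recovers a conventional intensity frame. This is an illustrative simplification, not the published Vidar algorithm:

```python
import numpy as np

def spikes_to_frame(spike_tensor, t0, t1):
    """Aggregate a binary spike tensor of shape (T, H, W) over the
    window [t0, t1) into a normalised intensity frame. Brighter pixels
    fire more often, so their per-tick firing rate is higher."""
    window = spike_tensor[t0:t1]
    counts = window.sum(axis=0).astype(float)
    return counts / max(t1 - t0, 1)   # normalise to a [0, 1] firing rate
```

Note the trade-off this exposes: a longer window gives a cleaner intensity estimate but lower temporal resolution, which is why combining it with a DVS stream is attractive.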

The other kind of CNN

At Johns Hopkins University, Andreas Andreou and his colleagues have taken event cameras in an entirely different direction. Instead of focusing on making their cameras compatible with external post-processing, they have built the processing directly into the vision chip. They use an analog, spike-based cellular neural network (CNN) structure where nearest-neighbor pixels talk to each other. Cellular neural networks share an acronym with convolutional neural networks, but are not closely related.

In cellular CNNs, the input/output links between each pixel and its eight nearest are built directly in hardware and can be specified to perform symmetrical processing tasks (see figure). These can then be sequentially combined to produce sophisticated image-processing algorithms.

Two things make them particularly powerful. One is that the processing is fast because it is performed in the analog domain. The other is that the computations across all pixels are local. So while there is a sequence of operations to perform an elaborate task, this is a sequence of fast, low-power, parallel operations.
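The nearest-neighbour template operation can be sketched as a discrete-time, digital approximation (the actual chip operates in the analog domain, in continuous time). The Laplacian-like template below is a classic cellular-CNN edge operation, chosen here purely for illustration:

```python
import numpy as np

def cnn_step(state, template):
    """One update of a cellular neural network: each pixel's new value
    is computed from its 3x3 neighbourhood via a fixed `template` of
    coupling weights, mimicking the hardwired nearest-neighbour links."""
    h, w = state.shape
    padded = np.pad(state, 1, mode="edge")   # replicate border pixels
    out = np.empty_like(state, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * template)
    return out

# Edge-detecting template: uniform regions cancel to zero, edges survive.
edge = np.array([[0, -1, 0],
                 [-1, 4, -1],
                 [0, -1, 0]], dtype=float)
```

Chaining several such steps with different templates is how the sequential combinations mentioned above build up more elaborate image-processing pipelines.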

A nice feature of this work is that the chip has been implemented in three dimensions using Chartered 130nm CMOS and Terrazon interconnection technology. Unlike many 3D systems, in this case the two tiers are not designed to work separately (e.g. processing on one layer, memory on the other, and relatively sparse interconnects between them). Instead, each pixel and its processing infrastructure are built on both tiers operating as a single unit.

Andreou and his team were part of a consortium, led by Northrop Grumman, that secured a $2 million contract last year from the Defense Advanced Research Projects Agency (DARPA). While exactly what they are doing is not public, one can speculate that the technology they are developing will have some similarities to the work they’ve published.


Shown is the 3D structure of a cellular neural network cell (right) and layout (bottom left) of the Johns Hopkins University event camera with local processing.
In the dark

We know DARPA has strong interest in this kind of neuromorphic technology. Last summer the agency announced that its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program granted three contracts to develop very-low-power, low-latency search and tracking in the infrared. One of the three teams is led by Northrop-Grumman.

Whether or not the FENCE project and the contract announced by Johns Hopkins University are one and the same, it is clear that event imagers are becoming increasingly sophisticated.
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT, automotive markets and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

No one can tell me that Akida is not being used by Oculi.

It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6am on Monday or in our annual report, due out sometime next week.



 
  • Like
  • Fire
  • Love
Reactions: 32 users

Diogenese

Top 20
Hi folks. Does anyone else have the delayed price missing from TSE? I had it this morning, but not since lunchtime......


If you don't have dreams, you can't have dreams come true!
Click "BRN Quotes" for an alternative
 
  • Like
Reactions: 1 users

Diogenese

Top 20
@Tothemoon24 your post is exciting. Oculi's technology is the same technology developed at Johns Hopkins University as described in your post. Brainchip is currently engaged with Oculi. @chapman89's post today shows that Oculi has entered into a strategic agreement with GlobalFoundries (as we all know, Brainchip recently taped out the Akida 1500 on GlobalFoundries technology). Oculi's new chip will be used in smart devices and homes, industrial, IoT, automotive markets and wearables including AR/VR. Prophesee is an Oculi competitor. No wonder NDAs are so well guarded.

No one can tell me that Akida is not being used by Oculi.

It's happy days. Perhaps we will get an update on this next week, either in the podcast that comes out at 6am on Monday or in our annual report, due out sometime next week.



Hi Slade,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
 
  • Like
Reactions: 2 users

Slade

Top 20
Hi Slade,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
Hi @Diogenese
I am not a vegetarian.
I will leave the technical side to you
I am not saying that the Akida 1500 chip has anything to do with Oculi's new chip. But what I am saying is that there is a very high probability that Oculi is using Akida IP.
 
  • Haha
  • Like
Reactions: 10 users
Hi Slade,

On your hypothesis, you still need a processor, because Akida1500 doesn't have one.
Is there any connection through the Xilinx SoC?

We did some early work with them, didn't we?

 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

Colorado23

Regular
  • Like
  • Thinking
Reactions: 5 users

Diogenese

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 10 users
We used their COTS FPGA for a Studio accelerator 6 years ago.
Cheers.

Thought I'd seen it mentioned by LTHs previously.

That snip is from a 2021 slide.
 
  • Like
Reactions: 3 users

Slade

Top 20
Hypothetical question. Could taping out the Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology for Oculi to have the confidence to proceed with developing their own chip incorporating Akida IP through GF?
I feel there are too many coincidences (at least in my head) to ignore.
 
  • Like
  • Thinking
  • Fire
Reactions: 16 users

JB49

Regular


41:50 is where Anil mentions we've "directly worked with" Oculi
 
  • Like
  • Love
  • Fire
Reactions: 25 users

Diogenese

Top 20
Hypothetical question. Could taping out the Akida 1500 on GlobalFoundries be enough evidence that Akida works on GF technology and thus give Oculi the confidence to develop their own chip through GF that incorporates Akida IP?
I feel there are too many coincidences (at least in my head) to ignore.
You want coincidences?

Check out the NASA SBIR for the 22nm FD-SOI NN sans processor.
 
  • Like
  • Fire
  • Love
Reactions: 27 users

Slade

Top 20


41:50 is where Anil mentions we've "directly worked with" Oculi

Thank you @JB49
Just to add, it has also been confirmed in an email that Brainchip is engaged with Oculi (not Oculii).
 
Last edited:
  • Like
  • Fire
Reactions: 16 users
You want coincidences?

Check out the NASA SBIR for the 22nm FD-SOI NN sans processor.
You mean this one ;)


 
  • Like
  • Love
  • Fire
Reactions: 10 users

Diogenese

Top 20
You mean this one ;)


Thanks Fmf,

My short term memory is somewhat deficient.
 
  • Like
  • Haha
Reactions: 2 users