BRN Discussion Ongoing

Diogenese

Top 20
At what stage in the adoption of a neuromorphic SNN technology do non-disclosure agreements have a cost-benefit ratio of less than one? What are some critical milestones that need to be met before the use of the new technology becomes a positive selling point for adopters?

ChatGPT: "Non-disclosure agreements (NDAs) are typically used in the early stages of technology development to protect intellectual property, trade secrets, and other confidential information. As the technology moves closer to commercialization, the cost-benefit ratio of NDAs may decrease, as companies need to share information with potential partners and customers to generate interest and secure funding.
In the case of neuromorphic SNN technology, critical milestones that need to be met before the use of the new technology becomes a positive selling point for adopters include:
  1. Demonstrated performance: The technology needs to be shown to work reliably and effectively in a range of applications.
  2. Scalability: The technology needs to be scalable to meet the needs of large-scale applications.
  3. User-friendly interface: The technology needs to have a user-friendly interface that allows non-experts to easily use and integrate the technology into their systems.
  4. Cost-effectiveness: The technology needs to be cost-effective, both in terms of the hardware and software required to implement it and in terms of the time and effort required to learn and use the technology.
  5. Regulatory compliance: The technology needs to comply with relevant regulations, including those related to safety, privacy, and data protection.
Once these critical milestones are met, the benefits of disclosing information about the technology may outweigh the costs of using NDAs, as potential partners and customers may be more willing to invest in and adopt the technology."

Not a bad response, I thought!
Yes, but it left out the one about the customer's commercial requirements.
 
  • Like
  • Haha
Reactions: 10 users

Yak52

Regular
Hi Yak,

Welcome back (finished decontamination/quarantine/exorcism from the other place?).

What I gleaned from Tony's message and other information released by the company is that Akida 1500 is not the earlier-planned Akida 1500 with LSTM/transformer. It is a pared-down version of Akida 1000 without the ARM Cortex. This has a couple of advantages, including not having to pay the ARM licence fee and a smaller silicon footprint, so more chips per wafer.

Also, it is being produced in GlobalFoundries' 22 nm FD-SOI. Coincidentally, a new NASA SBIR was released calling for an NN design in 22 nm FD-SOI without the encumbrance of a processor.

The lack of a processor means it is designed to work as an accelerator alongside another CPU/GPU, which can be programmed to configure the Akida NN.
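
For anyone curious what that host-driven flow looks like in practice, here is a minimal sketch using BrainChip's MetaTF-style Python runtime. The call names (akida.devices, Model, map, predict) follow the published akida package as I understand it, and the model file name is made up, so treat this as an assumption-laden illustration rather than confirmed Akida 1500 behaviour.

```python
# Hypothetical host-side flow: the host CPU discovers, configures and
# drives an Akida device used as a pure NN accelerator (no on-chip
# Cortex to run this code). Call names follow BrainChip's published
# `akida` Python package but are assumptions, not confirmed
# Akida 1500 details.
import numpy as np
import akida

# Enumerate attached Akida hardware from the host.
devices = akida.devices()
if not devices:
    raise RuntimeError("No Akida device found; a host CPU must drive one")

# Load a pre-trained, quantized model (hypothetical .fbz from MetaTF).
model = akida.Model("vision_model.fbz")

# Map the network onto the device, i.e. configure the neural fabric.
model.map(devices[0])

# Inference: the host feeds tensors, the accelerator returns results.
frame = np.random.randint(0, 255, (1, 224, 224, 3), dtype=np.uint8)
outputs = model.predict(frame)
print(outputs.shape)
```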

Another peripherally relevant matter is that there was a recent announcement that NASA was looking favourably on SiFive/RISC-V processors, and they are our mates.

Gidday Dio,

Yes, finished following the crapper for now. Did very little posting, just annoying old Shitman mostly. My parting call was to let him know that I know exactly who he is, his business, and his connections to HC and TMH. Needless to say, he squirmed quickly. Now he will be looking over his shoulder constantly wondering what's next. Good!

THANK YOU Dio for adding that extra info to what I posted. It really helps show clearly that the product side of the business is progressing nicely.

Yak52 :cool:
 
  • Like
  • Fire
  • Haha
Reactions: 36 users

mcm

Regular
Gidday Dio,

Yes, finished following the crapper for now. Did very little posting, just annoying old Shitman mostly. My parting call was to let him know that I know exactly who he is, his business, and his connections to HC and TMH. Needless to say, he squirmed quickly. Now he will be looking over his shoulder constantly wondering what's next. Good!

THANK YOU Dio for adding that extra info to what I posted. It really helps show clearly that the product side of the business is progressing nicely.

Yak52 :cool:
Have to agree with this post at least on the crapper 😎:

Anyone selling BRN at current prices, given there is a podcast about to be released, has no idea what they are doing. It will be a long-awaited update on where we are at and where we're heading ... and an insight into Akida 1500. I predict Monday will see a sharp spike upwards in the SP.


This informal, “fireside chat”-style conversation features BrainChip founder and CTO Peter van der Made, Chief Marketing Officer Nandan Nayampally and Vice President of Ecosystems & Partnerships Rob Telson discussing the company’s progress as well as providing their perspectives on the AI market today and what the future holds.

Not financial advice ... my opinion only
 
  • Like
  • Fire
  • Love
Reactions: 26 users
Gidday Dio,

Yes, finished following the crapper for now. Did very little posting, just annoying old Shitman mostly. My parting call was to let him know that I know exactly who he is, his business, and his connections to HC and TMH. Needless to say, he squirmed quickly. Now he will be looking over his shoulder constantly wondering what's next. Good!

THANK YOU Dio for adding that extra info to what I posted. It really helps show clearly that the product side of the business is progressing nicely.

Yak52 :cool:
🤣 Yeah, I've long had a strong feeling the Shareman was part of HC's "conflict model", which evolved, in my opinion, into a "sentiment-manipulative model" for financial gain to the parent organisation.

Just my hypothesis on the matter...
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 23 users

Yak52

Regular
Have to agree with this post at least on the crapper 😎:

Anyone selling BRN at current prices, given there is a podcast about to be released, has no idea what they are doing. It will be a long-awaited update on where we are at and where we're heading ... and an insight into Akida 1500. I predict Monday will see a sharp spike upwards in the SP.


This informal, “fireside chat”-style conversation features BrainChip founder and CTO Peter van der Made, Chief Marketing Officer Nandan Nayampally and Vice President of Ecosystems & Partnerships Rob Telson discussing the company’s progress as well as providing their perspectives on the AI market today and what the future holds.

Not financial advice ... my opinion only
Good post, mcm.

I have to agree, and this also brings forth another question.

Does the PODCAST release news/info which MUST be released before the next report, supposedly out on Tues 22nd?
Ducks are lining up to suggest this might just be the case. To keep within ASX rules, of course.

Makes sense of this latest rundown (shorting) in the SP this week ... IF SOMETHING BIG IS COMING OUT.
PVDM, Nandan & Rob talking in the podcast. Hmmm.

Yak52 :cool:
 
  • Like
Reactions: 13 users

Yak52

Regular
🤣 Yeah, I've long had a strong feeling the Shareman was part of HC's "conflict model", which evolved, in my opinion, into a "sentiment-manipulative model" for financial gain to the parent organisation.

Just my hypothesis on the matter...

Drop me a private message with an email and I will discuss offline, DB, if you like.

Question for forum members:
Did anybody lodge a complaint with ASIC about trading/shorting activity in the last year or two and get knocked back by ASIC???
If you did, you are entitled to present the case before a special Parliamentary committee and receive FULL PARLIAMENTARY PRIVILEGE, which means you can name people etc. with full immunity from repercussions, legal or otherwise.

Yak52 :cool:
 
  • Like
  • Fire
  • Love
Reactions: 19 users

stuart888

Regular
  • Like
  • Fire
Reactions: 3 users

BaconLover

Founding Member
Some of you may have been paying attention to the increased number of derailments and train/freight accidents occurring across the globe:
a massive one in Ohio, one a couple of days ago in Adelaide, and a number of others.
Obviously the environmentalists, media and green activists are silent.

Anyway, I can't wait for Akida to become a standard across these key parts of the economy and human lives, so such accidents can be avoided with vibration analysis.
It saves humans, water, our food, and so much flora and fauna!
Every time I see these accidents I think "there's a problem and we need a solution".
The good thing is now we've got one.
Time for the marketing gurus to smash a six out of the park while we still have just enough overs left in the match.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

stuart888

Regular
Plenty of good vibes over in my camp. Yeah Brainchip, low power, nothing to do with text! All sorts of sensors.

 
  • Like
  • Fire
Reactions: 3 users

stuart888

Regular
Tractor Ride!

 
  • Like
Reactions: 1 users

stuart888

Regular
Cadence preaching the AI love.

 
  • Like
  • Fire
Reactions: 5 users

stuart888

Regular
Sorry to toss in some Stripe.

This guy is a wow factor. It helps me learn how BrainChip is growing, as they deal with similar challenges.

 

Mugen74

Regular
Big Tesla recall: faulty FSD crash risk, 362,000 cars! Wow.
 
  • Like
Reactions: 4 users

Pappagolla

Regular
  • Haha
  • Like
  • Love
Reactions: 20 users

Tothemoon24

Top 20
Tata Motors.
2023 Harrier & Safari.
New feature: 360-degree camera.


Tata Harrier and Safari with ADAS to launch in March 2023
By Utkarsh Deshmukh Published: February 6, 2023


Tata Motors, the third-largest carmaker in India, recently confirmed that the Red Editions of its two highly popular SUVs, the Harrier and Safari, will be launched in March of this year. The company unveiled these new edition models at the 2023 Auto Expo, where it also displayed a plethora of production and concept vehicles. At the time it was anticipated that Tata would be launching these SUVs in the second half of this year, although they will now be coming a lot sooner.

Like every other special edition of these SUVs that the company has launched in the last few years, there will be no mechanical changes this time either. However, unlike the other editions, these two SUVs will now get some much-anticipated features apart from minor aesthetic changes on the outside and major changes on the inside.


To begin with, the new Harrier and Safari have received an update for the 2023 model year that includes the addition of advanced driver assistance systems (ADAS). The automaker is now providing ADAS technology including autonomous emergency braking (AEB), forward collision alert, lane assist, and even traffic sign recognition. The Red Dark model also comes with six airbags in addition to these state-of-the-art collision-avoidance technologies.

One of the standout features of the new 2023 Red Edition automobiles is the inclusion of a substantially larger 10.25-inch touchscreen, which is once again paired with a nine-speaker JBL sound system. Additionally, the SUV siblings now get a fully digital instrument cluster in place of the previous semi-digital one, and they now include the eagerly awaited 360-degree camera.


The new “Carnelian” red seat upholstery with a quilted pattern, red leatherette grab handles, grey dashboard trim, and piano black accents on the steering wheel are additional upgrades on the interior of the Red Edition SUVs. Additionally, the front seats in the new Edition cars are now ventilated, and the driver’s seat has memory and is electrically adjustable.

In addition to ambient lighting that surrounds the panoramic sunroof, the three-row Safari now boasts a ventilated second row, a motorised front passenger seat with a "Boss" mode that allows the rear passenger to pull the seat forward, and motorised adjustment for all three rows.

The SUV's exterior, on the other hand, has seen few alterations; it now just features a slightly different shade of "Oberon Black" and a faint tinge of red on the front grille. With the exception of the red-painted brake calipers, the new versions come with the same 18-inch alloy wheel style.

The forthcoming Red Edition SUVs will utilize the same 2.0-litre turbocharged diesel engine, as previously mentioned, and have not undergone any mechanical alterations. The engine's peak output is 170 PS and its maximum torque is 350 Nm. In both the Harrier and Safari, it is available with a 6-speed manual or 6-speed automatic transmission.

Tata Harrier, Safari with ADAS launching soon.
 
  • Like
  • Thinking
Reactions: 10 users

Xray1

Regular
Great find. This may explain, in part, the issue of employee incentive shares.
I'm quite surprised that neither Peter nor Anil was listed as one of the inventors on that list!!!
 
  • Like
Reactions: 3 users

stuart888

Regular
Searching YouTube over the last 24 hours for "neuromorphic spiking" is interesting. Seems like a hot topic!!!

I cannot keep up.


 
  • Like
Reactions: 4 users

MrNick

Regular
  • Like
  • Haha
Reactions: 2 users

Taproot

Regular
Brainchip stays at home alone, is that what you mean?
But why?
It's all OK, Sirod.
Prophesee and Brainchip are going to achieve great things together.

"Patience is a form of wisdom. It demonstrates that we understand and accept the fact that sometimes things must unfold in their own time "



16:35 - 18:41
Solving motion blur.

19:42 - 20:07
Rob letting us know where the Prophesee/BrainChip collaboration is headed.

25:35 - 28:35
Jerry Maguire "You complete me" moment!

 
  • Like
  • Love
  • Haha
Reactions: 11 users

Tothemoon24

Top 20
Tracking How the Event Camera is Evolving


Event camera processing is advancing and enabling a new wave of neuromorphic technology.

Sony, Prophesee, iniVation, and CelePixel are already working to commercialize event (spike-based) cameras. Even more important, however, is the task of processing the data these cameras produce efficiently so that it can be used in real-world applications. While some are using relatively conventional digital technology for this, others are working on more neuromorphic, or brain-like, approaches.

Though more conventional techniques are easier to program and implement in the short term, the neuromorphic approach has more potential for extremely low-power operation.

By processing the incoming signal before having to convert from spikes to data, the load on digital processors can be minimized. In addition, spikes can be used as a common language with sensors in other modalities, such as sound, touch or inertia. This is because when things happen in the real world, the most obvious thing that unifies them is time: When a ball hits a wall, it makes a sound, causes an impact that can be felt, deforms and changes direction. All of these cluster temporally. Real-time, spike-based processing can therefore be extremely efficient for finding these correlations and extracting meaning from them.
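
As a toy illustration of that idea (my own sketch, not from the article): given raw event timestamps from two modalities, temporal coincidence alone is enough to pair up the events that belong to the same physical happening.

```python
# Toy sketch (not from the article): find cross-modal coincidences by
# pairing spike timestamps from two sensors that fall within a small
# time window of each other. Both input lists are assumed sorted.
def coincidences(vision_ts, audio_ts, window_us=2000):
    """Return (t_vision, t_audio) pairs closer than window_us microseconds."""
    pairs, j = [], 0
    for tv in vision_ts:
        # Advance the audio pointer past events too old to match.
        while j < len(audio_ts) and audio_ts[j] < tv - window_us:
            j += 1
        if j < len(audio_ts) and abs(audio_ts[j] - tv) <= window_us:
            pairs.append((tv, audio_ts[j]))
    return pairs

# A ball hitting a wall: the visual motion events and the impact sound
# cluster in time, so they pair up; the unrelated event does not.
print(coincidences([1000, 50000, 90000], [1500, 91000]))
# -> [(1000, 1500), (90000, 91000)]
```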

Last time, on Nov. 21, we looked at the advantage of the two-cameras-in-one approach (DAVIS cameras), which uses the same circuitry to capture both event images, including only changing pixels, and conventional intensity images. The problem is that these two types of images encode information in fundamentally different ways.

Common language

Researchers at Peking University in Shenzhen, China, recognized that, to optimize that multi-modal interoperability, all the signals should ideally be represented in the same way. Essentially, they wanted to create a DAVIS camera with two modes, but with both of them communicating using events. Their reasoning was both pragmatic—it makes sense from an engineering standpoint—and biologically motivated. The human vision system, they point out, includes both peripheral vision, which is sensitive to movement, and foveal vision for fine details. Both of these feed into the same human visual system.

The Chinese researchers recently described what they call retinomorphic sensing, or super vision, which provides event-based output. The output can provide both dynamic sensing, like conventional event cameras, and intensity sensing in the form of events. They can switch back and forth between the two modes in a way that allows them to capture the dynamics and the texture of an image in a single, compressed representation that humans and machines can easily process.

These representations include the high temporal resolution you would expect from an event camera, combined with the visual texture you would get from an ordinary image or photograph.

They have achieved this performance using a prototype that consists of two sensors: a conventional event camera (DVS) and a Vidar camera, a new event camera from the same group that can efficiently create conventional frames from spikes by aggregating over a time window. They then use a spiking neural network for more advanced processing, achieving object recognition and tracking.
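
A rough sketch of the aggregation idea as I read it (illustrative only, not the group's actual Vidar algorithm): because a pixel that receives more light spikes more often, counting each pixel's spikes over a time window recovers a conventional intensity frame.

```python
import numpy as np

# Rough sketch of Vidar-style frame reconstruction as described above:
# a pixel's firing rate over a window is roughly proportional to its
# intensity, so counting spikes per pixel yields a conventional frame.
# (Illustrative only; not the Peking University group's actual code.)
def spikes_to_frame(events, shape, t0, t1):
    """events: iterable of (x, y, t) spikes; returns a normalized frame."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t in events:
        if t0 <= t < t1:
            frame[y, x] += 1.0
    peak = frame.max()
    return frame / peak if peak > 0 else frame

events = [(0, 0, 5), (0, 0, 8), (1, 1, 6), (0, 0, 12)]
print(spikes_to_frame(events, (2, 2), t0=0, t1=10))
# pixel (0,0) fired twice in-window -> 1.0; pixel (1,1) once -> 0.5
```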

The other kind of CNN

At Johns Hopkins University, Andreas Andreou and his colleagues have taken event cameras in an entirely different direction. Instead of focusing on making their cameras compatible with external post-processing, they have built the processing directly into the vision chip. They use an analog, spike-based cellular neural network (CNN) structure where nearest-neighbor pixels talk to each other. Cellular neural networks share an acronym with convolutional neural networks, but are not closely related.

In cellular CNNs, the input/output links between each pixel and its eight nearest neighbors are built directly into the hardware and can be specified to perform symmetrical processing tasks (see figure). These can then be sequentially combined to produce sophisticated image-processing algorithms.

Two things make them particularly powerful. One is that the processing is fast because it is performed in the analog domain. The other is that the computations across all pixels are local. So while there is a sequence of operations to perform an elaborate task, this is a sequence of fast, low-power, parallel operations.
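
To make the nearest-neighbor template idea concrete, here is a digital toy model of a single cellular-CNN step. This is my own sketch: the JHU chip does this in parallel analog hardware, and a real cellular CNN also has continuous-time state dynamics that this single feedforward pass omits.

```python
import numpy as np

# Digital toy model of one cellular-CNN update: every pixel combines its
# own value and its eight nearest neighbors through a shared, symmetric
# 3x3 template, then saturates. On the JHU chip this is one parallel
# analog settling step; here we loop over pixels explicitly.
def cnn_step(state, template, bias=0.0):
    h, w = state.shape
    out = np.zeros_like(state)
    padded = np.pad(state, 1, mode="edge")   # replicate border pixels
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            # Weighted sum over the 3x3 neighborhood, saturated to [-1, 1].
            out[y, x] = np.clip(np.sum(patch * template) + bias, -1.0, 1.0)
    return out

# A symmetric edge-extraction template: each pixel is compared with the
# average of its neighbors, so uniform regions map to ~0.
EDGE = np.array([[-0.125, -0.125, -0.125],
                 [-0.125,  1.000, -0.125],
                 [-0.125, -0.125, -0.125]])

img = np.zeros((5, 5))
img[:, 2:] = 1.0            # a vertical step edge
print(cnn_step(img, EDGE))  # nonzero response only near the edge
```

Chaining several such steps with different templates is how the sequential image-processing pipelines described above would be built up.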

A nice feature of this work is that the chip has been implemented in three dimensions using Chartered 130 nm CMOS and Tezzaron interconnect technology. Unlike many 3D systems, in this case the two tiers are not designed to work separately (e.g. processing on one layer, memory on the other, and relatively sparse interconnects between them). Instead, each pixel and its processing infrastructure are built on both tiers operating as a single unit.

Andreou and his team were part of a consortium, led by Northrop Grumman, that secured a $2 million contract last year from the Defense Advanced Research Projects Agency (DARPA). While exactly what they are doing is not public, one can speculate that the technology they are developing will have some similarities to the work they've published.


Shown is the 3D structure of a cellular neural network cell (right) and the layout (bottom left) of the Johns Hopkins University event camera with local processing.
In the dark

We know DARPA has a strong interest in this kind of neuromorphic technology. Last summer the agency announced that its Fast Event-based Neuromorphic Camera and Electronics (FENCE) program had granted three contracts to develop very-low-power, low-latency search and tracking in the infrared. One of the three teams is led by Northrop Grumman.

Whether or not the FENCE project and the contract announced by Johns Hopkins University are one and the same, it is clear that event imagers are becoming increasingly sophisticated.
 
  • Like
  • Fire
  • Love
Reactions: 16 users