BRN Discussion Ongoing

Diogenese

Top 20
So speaking of shorts, I still can't quite get my head around the whole process. Are institutions lending shares to shorters? The shorters then sell those shares and hope that in the process of selling the borrowed shares they spook the market so the price drops; they then buy back in and pass the shares back to the institutions, having made a profit. If that is the case, and the institutions are actually scooping up shares during this process (as their holdings are increasing), then the price is probably not dropping as much as the shorters would like, or the institutions could be preventing them from buying back in. Sorry, that probably sounds very stupid. Am I missing something, or everything!!! 🤪
Sounds like the old bootstrap theory - if you pull up on your shoelaces hard enough, you can lift your feet off the ground.
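For anyone wanting the mechanics spelled out: the borrow, sell, buy back, return loop reduces to simple arithmetic. A minimal sketch with made-up numbers (the prices, share count, fee, and the `short_pnl` helper are all illustrative assumptions, not BRN figures):

```python
# Hypothetical short-sale round trip: borrow shares, sell them,
# buy them back (hopefully cheaper), return them to the lender.

def short_pnl(shares: int, sell_price: float, buyback_price: float,
              borrow_fee: float = 0.0) -> float:
    """Short seller's profit: sale proceeds minus buy-back cost and fees."""
    return shares * (sell_price - buyback_price) - borrow_fee

# Borrow 10,000 shares, sell at $0.40, buy back at $0.35, pay a $50 fee.
profit = short_pnl(10_000, 0.40, 0.35, borrow_fee=50.0)
print(round(profit, 2))  # 450.0
```

If the price rises instead of falling, `sell_price - buyback_price` goes negative and the short loses money, which is why institutions quietly accumulating during a short campaign works against the shorters.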
 
  • Haha
  • Like
Reactions: 5 users

Draed

Regular
My opinion re: games being played on this stock goes as follows. It's a good company with a great, innovative technology. But it takes time to get to market. It will happen, but it's just taking longer than hoped. We all know that.

But what has institutional investors pissed off is the high percentage of retail investors, and stubborn ones at that. They want as many shares as they can get into their own hands for the eventual lift-off, i.e. when all the testing and validation is done, plus design, production, marketing, and products on shelves. The institutions love this because NDAs are in place and nothing will happen till money starts appearing in the bank. So they sow the story that it's a dud, the tech isn't going anywhere, no sales are taking place.

And because there is a long cycle with a low chance of surprise announcements, they sell it down and maintain an aggressive short-selling campaign to spook retail into selling at a loss.

One day the price will pump, without an announcement, just like it has multiple times in the last year or so. Some retail investors will think, "Ahhh, another pump and dump. I'll sell for a tidy profit or break even and buy back in at a lower price."

But I'm sure that the one time I even think of attempting that, it will be an announcement like, "Ahhh, Tesla wants to incorporate Akida into every car sensor they have", or "a new licence with Samsung", etc.

This is pure speculation with no research to back it up. But hey, it's just how I think it will play out. Maybe not Tesla to start with, though 🤣.
 
  • Like
  • Fire
  • Love
Reactions: 32 users
Not saying it involves us, but wondering what's special, if anything, about GF 22nm and neuromorphic, other than the previous connections to NASA's preference for that node.

This Uni team presented their work at the recent GOMACTech in the US.

Program HERE

Screenshot_2023-06-07-21-26-21-09_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg


IMG_20230607_212541.jpg
 
  • Like
  • Love
  • Fire
Reactions: 35 users

Diogenese

Top 20
Hi Fmf,

The term "neuromorphic image sensors" is used to refer to event cameras/DVS like Prophesee.
That's right. Every day we don't have a licence announcement is a day closer to a licence announcement.
 
  • Like
  • Love
  • Thinking
Reactions: 31 users
Also good to see the USAF/Space Force program is still chasing neuromorphic/cortical solutions and is amalgamating the project into another for 2024.

Original paper HERE

IMG_20230607_214208.jpg
 
  • Like
  • Love
  • Fire
Reactions: 15 users
Cheers.

Yeah, I've got my head around the DAVIS and Prophesee solutions, and mused about the "simulations" on GF 22nm, given we provide some unis the ability to run simulations (think this is correct?) and we taped out on the GF 22nm FD-SOI.

I don't expect much to it, was just interesting I thought.
 
  • Like
  • Fire
Reactions: 7 users

Diogenese

Top 20
  • Haha
Reactions: 6 users
  • Haha
Reactions: 5 users
Also see Northrop playing with DVS.

They mention they've been working (trying, haha) with Johns Hopkins for a decade :oops:

Was trying to find any link anywhere between us or Prophesee etc. and Northrop or JHU.

Maybe they need to get with the Akida program :LOL:

Couldn't find a date of the article.


Neuromorphic Cameras Provide a Vision of the Future
From enhanced battlefield protection systems to maintaining aerial drone delivery fleets, neuromorphic cameras hold promise for the future.

By Scott Gourley
From enhanced battlefield protection systems to maintaining aerial drone delivery fleets, neuromorphic cameras have the potential to enhance many future defense, commercial and industrial tasks. When coupled with machine learning, this technology may soon set the stage for dramatic enhancements in how systems operate and how we perceive and understand surrounding environments.

Like the Human Eye

“Cameras we use today have an array of pixels: 1024 by 768,” explained Isidoros Doxas, an AI Systems Architect at Northrop Grumman. “And each pixel essentially measures the amount of light or number of photons falling on it. That number is called the flux. Now, if you display the same numbers on a screen, you will see the same image that fell on your camera.”
By contrast, neuromorphic cameras only report changes in flux. If the rate of photons falling on a pixel doesn’t change, they report nothing.
“If a constant 1,000 photons per second is falling on a pixel, it basically says, ‘I’m good, nothing happened.’ But, if at some point there are now 1,100 photons per second falling on the pixel, it will report that change in flux,” Doxas said.
“Surprisingly, this is exactly how the human eye works,” he added. “You may think that your eye reports the image that you see. But it doesn’t. All that stuff is in your head. All the eye reports are little blips saying ‘up’ or ‘down.’ The image we perceive is built by our brains.”
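The change-only behaviour described above can be sketched in a few lines of Python; `events_from_flux` and its threshold are my own illustrative names and values, not anything from Northrop Grumman:

```python
# Toy model of change-only reporting: a pixel stays silent while its flux
# is steady and emits an "up"/"down" event when the flux moves past a
# threshold (threshold and flux values are illustrative).

def events_from_flux(samples, threshold=50):
    """Return (index, polarity) events where flux changed by >= threshold."""
    events = []
    last_reported = samples[0]
    for i, flux in enumerate(samples[1:], start=1):
        delta = flux - last_reported
        if abs(delta) >= threshold:
            events.append((i, "up" if delta > 0 else "down"))
            last_reported = flux  # new reference point, like the eye's blips
    return events

# Steady 1,000 photons/s says nothing; the jump to 1,100 is reported.
flux = [1000, 1000, 1000, 1100, 1100, 1000]
print(events_from_flux(flux))  # [(3, 'up'), (5, 'down')]
```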


“You need a lot of energy to send a number from pixel to computer or from eye to brain. And you don’t want to spend all that energy. In fact, that’s why people started thinking about building neuromorphic cameras. They require much less energy because the pixels just report changes and not actual values.”
Isidoros Doxas
Northrop Grumman AI Systems Architect

Advantages of Neuromorphic Cameras

Doxas identified several advantages in neuromorphic imaging, beginning with reduced power requirements.
“You need a lot of energy to send a number from pixel to computer or from eye to brain,” he said. “And you don’t want to spend all that energy. In fact, that’s why people started thinking about building neuromorphic cameras. They require much less energy because the pixels just report changes and not actual values.”
He continued, “Another important advantage is speed. If you have a million pixels, and you have to send the computer a thousand frames per second, that’s one billion numbers per second. However, usually nothing changes in a scene from one millisecond to the next, so you don’t need to report that entire image.”

Doxas likened the process to compression methods for video entertainment, noting that a 4K-resolution movie represents 8 million pixels, times three colors, times 30 frames per second.
“That’s over a gigabyte per second,” he said. “Yet, you can watch that over an internet connection that’s only a few megabits per second. That’s because little changes from one frame to the next. They leverage that fact and compress frames in the same way. The difference here is that neuromorphic cameras do the compression.”
Decompression is accomplished by computers, where reporting speeds accelerating from one to tens of thousands of frames per second allow for millisecond reaction times. This paves the way for a huge range of different applications — from active combat systems detecting and defeating a bullet, to self-driving cars interpreting dangerous situations almost instantly.
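The bandwidth figures quoted here are easy to sanity-check (the byte-per-sample assumption and the exact 4K dimensions are mine; the article's own numbers are rounded):

```python
# Back-of-envelope check of the bandwidth figures quoted in the article.

# 1 MP sensor sending every pixel at 1,000 fps: one number per pixel per frame.
numbers_per_second = 1_000_000 * 1_000
print(numbers_per_second)  # 1000000000, i.e. one billion numbers per second

# 4K frame: 3840 x 2160 pixels, 3 colour samples each, 30 fps.
# At one byte per colour sample this is ~0.75 GB/s of raw video (the article's
# "over a gigabyte" holds at higher bit depths or frame rates), yet streaming
# services deliver it over a few Mbit/s by sending mostly the changes.
raw_bytes_per_second = 3840 * 2160 * 3 * 30
print(raw_bytes_per_second / 1e9)  # ~0.75 GB/s
```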
Doxas said that Northrop Grumman has been involved with the technology for more than a decade, highlighting a collaboration with Johns Hopkins University that resulted in the recent design of a readout integrated circuit as well as “the brains that go behind that circuitry.”
Future efforts will include increasing the number of pixels, further lowering power and increasing resolution.

Coupling with Machine Learning

Optimizing the new camera technology involves the application of machine learning methods that can work directly with the photon plus and minus signals. With self-driving cars, for example, machine learning can construct an image of a cat or dog with just a few pixel pluses or minuses, resulting in much quicker decisions compared with images built by convolutional neural network sensors.
“In the same vein, Northrop Grumman can use this system for non-invasive diagnostics for high-speed parts. This technology will dramatically change power requirements and time to decision across any number of applications,” Doxas concluded.
 
  • Like
  • Fire
Reactions: 13 users
Ok... looks like they're working with the Intel Movidius research chip, as expected I guess.

Though it's interesting they were/are in negotiations with a small-business current team member... that doesn't sound like Intel... maybe someone who uses Intel... or not :unsure:

Original mid-2022 paper HERE

IMG_20230607_221437.jpg
IMG_20230607_221519.jpg
 
  • Like
  • Fire
Reactions: 10 users
Wonder if Neil is a shareholder or just checking in occasionally.

He liked one of FNN's posts with Rob Telson last year.

Neil used to work for Northrop for 16 years, and also Raytheon.

Am I bored? :unsure::ROFLMAO:


Neil Wigner
Chief System Engineer at SAIC
SAIC · University of Southern California

Long Beach, California, United States

Screenshot_2023-06-07-22-47-26-76_4641ebc0df1485bf6b47ebd018b5ee76.jpg

 
  • Like
  • Fire
  • Love
Reactions: 15 users
I see Teksun are a design partner with u-blox.

Might be a handy friend someday :unsure:



Organization

We operate in more than 28 locations worldwide.

Revenue Full Year 2022: CHF 623.9 million

Our world is moving fast: soon your car will drive itself, and you will carry your doctor in your pocket. Your home will power the grid, and you will be connected like never before. At u-blox, we are setting the beat, delivering the core technology to locate and wirelessly connect people, machines, and everything else, making us the agile and reliable partner that will take you beyond expectations.

u-blox



Partners and Alliances​

In the complex world that is the IoT, companies need to work together to bring value to the customer; at u-blox we work with different types of partners to do this.
We categorise our partners into four segments: Design Partners, Solution Partners, Technology Partners and our industry Alliances.




Teksun Inc​

Teksun Inc is an ISO 9001:2015 certified IoT & AI Product Design & Development company, supporting ODM (Design Services) & OEM (Ready-made hardware, Software, and AI accelerators) partners across the globe.
We provide end-to-end product engineering services including:
  • Embedded Hardware
  • Electrical Design
  • Embedded Software & Firmware
  • Artificial Intelligence & Machine Learning
  • Product Design & Mechanical Engineering
  • Cloud Architecture, Web Development
  • Mobile Application Development
  • Product Certification
  • Electronics Manufacturing Services
To expedite time to market, Teksun has proprietary AI & ML algorithms, along with high-power GPS & NPU based single-board computers for different applications, under Teksun, Tejas, Teksun Astra, Teksun Telep & Teksun TEKT.
 
  • Like
  • Fire
Reactions: 15 users

jk6199

Regular
Just home really late from work.

Looked at the US, and BRCHF is up 4.39% on 500% of normal transactions???
 
  • Like
  • Fire
  • Thinking
Reactions: 22 users

jk6199

Regular
Sorry, 5 times the average amount of trades
 
  • Like
Reactions: 6 users

Diogenese

Top 20
Sorry, 5 times the average amount of trades
potarto/potato ...
 
  • Haha
  • Like
Reactions: 7 users

Sirod69

bavarian girl ;-)
PROPHESEE
1 hr •

Our CEO, Luca Verre, spoke today at Global Semiconductor Alliance’s flagship event, the 2023 Global Leadership Summit in Tokyo.

At this exclusive event, Luca addressed the world’s top #semiconductor leaders to speak on the power and progress of Prophesee’s event-based #Metavision® technology.

We are in the midst of a generational shift in terms of how information is collected, analyzed, and used. Over the past several years, event cameras that leverage #neuromorphic techniques to operate more like the human eye and brain have gained a strong foothold in #machinevision, #industrialautomation, #mobile, #automotive, and other consumer applications.

We look forward to fruitful discussions and revealing the invisible with the GSA community.
1686162567557.png
 
  • Like
  • Fire
Reactions: 25 users

Tothemoon24

Top 20
Today, we announced the completion of our Generation 1 Prototype – our first portable research-grade device for non-invasive blood glucose monitoring. Gen 1 is designed to be a sophisticated research lab that can fit in your pocket and will enable us to scale data collection tenfold, including testing across more diverse participant populations and scenarios.

This marks a significant achievement for the Know Labs team and our partners Igor Institute, bould design, Reza Kassayan, and Edge Impulse, as we continue to work toward developing the first FDA-cleared, non-invasive glucose monitoring device for the billions of people living with diabetes and pre-diabetes worldwide.

Find out how we collectively overcame incredible engineering complexities over several years and through hundreds of iterations to achieve this lev

 
  • Like
  • Fire
  • Love
Reactions: 31 users

charles2

Regular
Noteworthy US volume today.

BRCHF: 727k shares... nearly 10x normal volume, and mostly at the ask.

And to top it off, BCHPY (which usually trades < 350 shares/day) traded 7,800. So 7,800 x 40 = 312k BRCHF shares (equivalent).

In summary, 1,039,000 BRN.AX-equivalent shares traded today in the US.

Some entity/entities are discovering BrainChip and taking a position.
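The arithmetic in the post checks out; a quick sketch, taking the 40:1 ADR ratio from the post itself rather than verifying it independently:

```python
# Reproducing the volume arithmetic from the post above.
# The 40:1 ratio (one BCHPY ADR = 40 ordinary shares) is the figure
# the post itself uses, not independently verified here.
brchf_volume = 727_000   # BRCHF ordinary-share volume
bchpy_volume = 7_800     # BCHPY ADR volume
adr_ratio = 40

adr_equivalent = bchpy_volume * adr_ratio   # ordinary-share equivalent
total_equivalent = brchf_volume + adr_equivalent
print(f"{adr_equivalent:,} + {brchf_volume:,} = {total_equivalent:,}")
# 312,000 + 727,000 = 1,039,000
```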
 
  • Like
  • Fire
  • Wow
Reactions: 48 users

Tothemoon24

Top 20
6F301F07-F5FB-4694-99BE-75C86EDBEA70.jpeg
 
  • Like
  • Fire
Reactions: 13 users

IloveLamp

Top 20

Screenshot_20230608_073002_Google News.jpg
 
  • Like
  • Thinking
Reactions: 10 users