BRN Discussion Ongoing

zeeb0t

Administrator
Staff member
Another warning, that's 3 in 15 minutes @zeeb0t


It was two in the end, and you even acknowledged the message I sent you about one of the incorrect moderations, which I reversed. That also indicates to me that you definitely know where the private message function is, so please message me about this there.
 
  • Like
  • Haha
  • Fire
Reactions: 13 users

Frangipani

Top 20
The Human Brain Project (HBP) is a ten-year European Union-funded research initiative launched in 2013 that describes itself on Twitter as “A global collaborative effort for neuroscience, medicine and computing to understand the brain, its diseases, and its computational capabilities”. It is one of three EU FET (Future and Emerging Technologies) Flagship Projects, partnered with more than 150 universities, research institutions and hospitals, and will conclude this September.
The Neuromorphic Computing Platform developed in the HBP provides remote access to two complementary, large-scale neuromorphic computing systems (NCS) built in custom hardware at locations in Heidelberg (the BrainScaleS system) and Manchester (the SpiNNaker system).

A couple of days ago, the Human Brain Project’s website reported on a new SNN study, published in Nature Machine Intelligence, by two researchers from a Dutch HBP partner institution. No mention of Akida here, but it substantiates my claim about the VR/AR sector being a lucrative field for BrainChip.




MAY 8, 2023

Human Brain Project: Study presents large brain-like neural networks for AI​


In a new study in Nature Machine Intelligence, researchers Bojian Yin and Sander Bohté from the HBP partner Dutch National Research Institute for Mathematics and Computer Science (CWI) demonstrate a significant step towards artificial intelligence that can be used in local devices like smartphones and in VR-like applications, while protecting privacy. They show how brain-like neurons combined with novel learning methods enable training fast and energy-efficient spiking neural networks on a large scale. Potential applications range from wearable AI to speech recognition and Augmented Reality.

While modern artificial neural networks are the backbone of the current AI revolution, they are only loosely inspired by networks of real, biological neurons such as our brain. The brain however is a much larger network, much more energy-efficient, and can respond ultra-fast when triggered by external events. Spiking neural networks are special types of neural networks that more closely mimic the working of biological neurons: the neurons of our nervous system communicate by exchanging electrical pulses, and they do so only sparingly.
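To make the "exchanging electrical pulses, and they do so only sparingly" part concrete, here is a minimal leaky integrate-and-fire neuron sketched in plain Python. The function name, time constant, threshold and input drive are illustrative assumptions, not values taken from the study:

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential leaks towards rest while integrating the input;
    a binary spike is emitted only when the threshold is crossed, so most
    time steps produce no output at all. That sparsity is what neuromorphic
    hardware exploits to save energy.
    """
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v += (dt / tau) * (-v + i_t)   # leaky integration of the input
        if v >= v_thresh:              # fire only when the threshold is reached
            spikes[t] = 1.0
            v = v_reset                # reset the membrane after the spike
    return spikes

rng = np.random.default_rng(0)
out = lif_neuron(rng.uniform(0.0, 2.5, size=200))   # noisy input drive
print(f"{int(out.sum())} spikes in {out.size} time steps")
```

Even with a fairly strong drive, only a handful of the 200 time steps carry a spike, which is the sparse, event-driven activity the article is referring to.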
Implemented in chips, called neuromorphic hardware, such spiking neural networks hold the promise of bringing AI programmes closer to users – on their own devices. These local solutions are good for privacy, robustness and responsiveness. Applications range from speech recognition in toys and appliances, health care monitoring and drone navigation to local surveillance.
Just like standard artificial neural networks, spiking neural networks need to be trained to perform such tasks well. However, the way in which such networks communicate poses serious challenges. "The algorithms needed for this require a lot of computer memory, allowing us to only train small network models mostly for smaller tasks. This holds back many practical AI applications so far," says Sander Bohté of CWI's Machine Learning group. In the Human Brain Project, he works on architectures and learning methods for hierarchical cognitive processing.

Mimicking the learning brain
The learning aspect of these algorithms is a big challenge, and they cannot match the learning ability of our brain. The brain can easily learn immediately from new experiences, by changing connections, or even by making new ones. The brain also needs far fewer examples to learn something and it works more energy-efficiently. "We wanted to develop something closer to the way our brain learns," says Bojian Yin.
Yin explains how this works: if you make a mistake during a driving lesson, you learn from it immediately. You correct your behaviour right away and not an hour later. "You learn, as it were, while taking in the new information. We wanted to mimic that by giving each neuron of the neural network a bit of information that is constantly updated. That way, the network learns how the information changes and doesn't have to remember all the previous information. This is the big difference from current networks, which have to work with all the previous changes. The current way of learning requires enormous computing power and thus a lot of memory and energy."
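As a toy illustration of the difference Yin describes (and only that: it is not the authors' forward-propagation-through-time method), the sketch below fits a single weight of a leaky neuron online. Instead of storing the whole activation history the way backpropagation-through-time must, each step keeps one constantly refreshed per-neuron trace and corrects the weight immediately; every name and constant here is made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
x = rng.normal(size=T)
w_true, decay = 0.7, 0.9

def run(w):
    """Leaky neuron driven by x; used to generate the target signal."""
    v, out = 0.0, []
    for x_t in x:
        v = decay * v + w * x_t
        out.append(v)
    return np.array(out)

target = run(w_true)   # what the network should learn to reproduce

# Backprop-through-time style training would have to keep all T activations
# in memory before it can compute a single weight update. The online rule
# below instead keeps one per-neuron trace (dv/dw) that is refreshed each step.
w, lr = 0.0, 0.05
v, trace = 0.0, 0.0
for t, x_t in enumerate(x):
    v = decay * v + w * x_t
    trace = decay * trace + x_t    # the small, constantly updated "bit of information"
    err = v - target[t]
    w -= lr * err * trace          # learn from the mistake immediately, store nothing
print(f"recovered weight ≈ {w:.3f} (true value {w_true})")
```

Memory use stays constant regardless of sequence length, which is the property that lets this style of learning scale to the much larger networks described below.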

Six million neurons
The new online learning algorithm makes it possible to learn directly from the data, enabling much larger spiking neural networks. Together with researchers from TU Eindhoven and research partner Holst Centre, Bohté and Yin demonstrated this in a system designed for recognising and locating objects. Yin shows a video of a busy street in Amsterdam: the underlying spiking neural network, SPYv4, has been trained in such a way that it can distinguish cyclists, pedestrians and cars and indicate exactly where they are.
"Previously, we could train neural networks with up to 10,000 neurons; now, we can do the same quite easily for networks with more than six million neurons," says Bohté. "With this, we can train highly capable spiking neural networks like our ¬¬SPYv4."

Future
And where does it all lead? Now that such powerful AI solutions based on spiking neural networks are available, chips are being developed that can run these AI programmes at very low power; they will ultimately show up in many smart devices, like hearing aids and augmented or virtual reality glasses.

Original Publication:
Bojian Yin, Federico Corradi, and Sander M. Bohté: Accurate online training of dynamical spiking neural networks through forward propagation through time. Nature Machine Intelligence, 8 May 2023. DOI: 10.1038/s42256-023-00650-4


The researchers
[Photo of Sander Bohté, credit: Dirk Gillissen]
Sander Bohté works in the Human Brain Project’s research area “Adaptive networks for cognitive architectures: from advanced learning to neurorobotics and neuromorphic applications.”




[Photo of researcher Bojian Yin]
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 18 users

Deleted member 118

Guest
  • Like
Reactions: 1 users

Deadpool

Did someone say KFC
[quoting Frangipani's Human Brain Project post above]
Good read @Frangipani

Sander looks a bit like old Georgey boy:LOL:

[George Costanza comedy GIF]
 
  • Haha
  • Like
Reactions: 34 users

GStocks123

Regular
A good read on the STM32N6

 
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I could say the same thing about everyone's stock disclosure.

For example: the resident touchmenot has a stock disclosure of "holding BRN not letting go". Is this an intent for others to do the same? If so, then is it financial advice?

A stock disclosure is just that. What I intend to do with my stock. Nothing to do with how others see it.

I'm just saying it's okay to vote NO if someone wishes to, and they do not need to be intimidated by the pack here (not directed at you) who attack anyone who has another viewpoint.

But it's also a bit confusing, because it could appear that you are using your Stock Disclosure banner to advocate a position on the Voice to Parliament, otherwise why allude to voting "no" in the ref?

If I have misread this somehow, then my apologies. Just trying to understand what this is all about?

See "ref" crossed out below from your Stock Disclosure.

[Screenshot of the quoted Stock Disclosure with "ref" crossed out]
 
Last edited:
  • Like
  • Haha
  • Love
Reactions: 8 users

Sirod69

bavarian girl ;-)
Teksun Inc.

Your Solution to Safer Roads: Telep Driver Monitoring!

Utilize the Teksun Telep Driver Monitoring solution to enhance driver experience and protection. It monitors the state of alertness and finds the earliest signs of sleepiness. Stay ahead of the competition with Telep Driver Monitoring. Try it today.
 
  • Like
  • Fire
  • Love
Reactions: 37 users

Sirod69

bavarian girl ;-)

Expanded access to Arm Virtual Hardware for the entire IoT ecosystem​


......
A little over a year ago, we extended the capabilities of AVH to address new use cases and to enable a wider range of Arm processors and third-party hardware via Corellium’s hypervisor technology. This included adding hardware from partners NXP Semiconductors, STMicroelectronics and Raspberry Pi, as well as Arm models of Corstone-300, Corstone-310, and Cortex-M processors ranging from Cortex-M0 to Cortex-M33. Over the past year, hundreds of embedded and IoT developers across the Arm ecosystem have participated in a private beta with this powerful new AVH offering, incorporating it into their development workflows, CI/CD pipelines, IoT SaaS solutions, and development tools. Our private beta users have also provided invaluable feedback that has helped improve and enhance the AVH service.

Today we are pleased to announce that this service has transitioned from private beta to public beta and is now open to anyone with an Arm account to try out and use for commercial purposes. The public beta is available for a trial period of 30 days followed by a paid service based on usage per device-hour. Go to arm.com/virtual-hardware today to get started.
......
 
  • Like
  • Fire
  • Wow
Reactions: 19 users

Sirod69

bavarian girl ;-)
Are we inside?:unsure:

The Next Generation Smart Toothbrush with Neuton.ai

16 May 2023, 17:00

Want to see a cool demo and learn some of the most innovative Tiny ML practices? Come to this Arm Tech Talk for a real-time demonstration of a next-gen smart toothbrush! The talk will feature a toothbrush-tracking solution on a smart toothbrush leveraging Bosch’s BMI160 IMU and embedded on a PSoC 6 (Arm Cortex-M4) with an incredibly small total footprint of nearly 4 kb. This solution, produced with Neuton.AI’s neural networks, doesn't compromise functionality or quality while being best in class when it comes to energy consumption. We will step into the next generation of smart devices while demonstrating how you can build solutions with complex functionality while consuming a minimum amount of energy and memory.

 
  • Like
  • Thinking
  • Haha
Reactions: 12 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Smart Eye is making moves, and I know from listening to a previous podcast/webinar of theirs that they explore and test a lot of different sensors.

 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 29 users

overpup

Regular
[quoting Sirod69's post above on the Neuton.ai smart toothbrush Arm Tech Talk]

The BrainChip PCIe boards use Cortex-M4s.
 
  • Like
  • Fire
  • Thinking
Reactions: 24 users

Dallas

Regular
https://twitter.com/RenesasGlobal/status/1656660679909318656?t=w0yzhUyFey4Tvy0YZL5Ghw&s=19 🥰🚀🙏🤖🤣🤠
 
  • Like
  • Fire
Reactions: 10 users

Glen

Regular
Possibly Akida with a Cortex-M4.
 
  • Like
Reactions: 5 users

Glen

Regular
  • Like
  • Thinking
Reactions: 7 users

Jchandel

Regular
[quoting Sirod69's post above on the Neuton.ai smart toothbrush Arm Tech Talk]

Unfortunately not us; these guys use the Syntiant/Arduino NDP120.
 
  • Like
  • Sad
Reactions: 5 users

stockduck

Regular
Sorry if mentioned before... maybe worth reading:


Cadence Tools Paving the Way for the AI-at-the-Edge Transition​

10 May 2023 • 7 minute read
 
Last edited:
  • Like
Reactions: 3 users

IloveLamp

Top 20
  • Like
  • Thinking
Reactions: 7 users

stockduck

Regular
Cadence Tools Paving the Way for the AI-at-the-Edge Transition​

10 May 2023 • 7 minute read

".....
Specialty technology is increasingly important: RF, CIS, PMIC, NVM, ULP, Auto (key: RF=radios, CIS=image sensors, PMIC=power management, NVM=non-volatile memory, ULP=ultra low power, auto=automotive).

Here are a few process names:

RF: N6RF, N6RF+, N4PRF (coming). Many customer tapeouts last year.

ULP: N22, N12e, N6e (coming).

....

But TSMC also invests in new variations on those processes too. In fact, TSMC's investment in specialty technology has grown (over the last 5 years) at a CAGR of 40%. For the first time ever, it is investing in building mature technology capacity and will expand by 1.5X over the next three years (this counts the TSMC/Sony/Denso fab in Kumamoto, Japan, which is all mature technology).

  • ULP solution. 55ULP, 40ULP, 22ULL (0.6-1V) N12e (0.45-1V), N6e.
  • RF connectivity. N6RF offers faster performance with 49% power consumption reduction compared to 16FFC enhancement (benchmark half analog, half digital).
  • N4PRF most advanced RF CMOS technology. RF on top of logic for digital-intensive RF products. N4PRF vs N6RF logic density 1.77X, power reduction 45%
  • MCU is all about how to scale non-volatile memory. MRAM and RRAM in 28nm/22nm since 2019. 16/12nm since 2022, and 6nm in planning.
  • 40RRAM and 28/22RRAM in volume production
  • 28RRAM achieves >10 years 125° retention (automotive grade)
  • 12RRAM to be released in 1Q2024
  • 22MRAM in production
  • 16MRAM consumer qualified in 2022, planned for automotive 2023.
  • Achieve >1M endurance cycle, 20 years 230 degrees retention.
  • Power management. Smart power IC drives BCD scaling.
    • N90 BCD+ since 2021.
    • N55 BCD+ SoI (pathfinding for automotive)
    • N50BCD+ with RRAM
  • CMOS image sensor. Resolution 50MP and beyond. Pixel size 11-1.4um. ISP (processor) N40 going to 12FFC.
  • Roadmap for CIS has two logic layers with advanced packaging. Allows for AI in the logic.
...."


What does this mean.....?
 
Last edited:
  • Like
Reactions: 2 users

Labsy

Regular
🤔😉👍

 
  • Like
Reactions: 22 users
I'm so bloody slow sometimes; only now has it hit me that neuromorphic computing in a way solves the problem of scaling down the process node, by enabling more performance in different ways:
1) Now that power consumption and dissipated heat are so low, it must be possible to run the chips more aggressively.
2) As I see it, neuromorphic computing lends itself perfectly to expanding the amount of silicon used, like connecting multiple chiplets to support larger models and/or running multiple models that make use of each other. They can even be stacked so they don't take up any significant space.
3) It's a young technology that is already beating the old technology and has a long runway of innovation ahead of it, like the jump from Akida 1 to 2. I bet that there's a vast space of possibilities to be explored, like hardware support for n-dimensional models.

While Nvidia has hit the brick wall and others are struggling with Moore's law, we have just got started and are seemingly already way ahead of their Jetson.

So now I think neuromorphic computing is going to be indispensable for future performance gains, and it might branch out along the three lines above, with combinations and further branches appearing over time.
 
  • Like
  • Fire
  • Love
Reactions: 55 users