BRN Discussion Ongoing

Diogenese

Top 20
Below is the second STMicroelectronics video from their Edge AI Summit (published a couple of days ago), with a presentation by Deepu Talla from NVIDIA. At 17:45 you can see a screenshot of the TAO ecosystem. Deepu mentions in the video that Nvidia's design environment is also supported on the Edge Impulse platform.

I've circled Arm, STMicroelectronics and Edge Impulse.



This reminded me of what @chapman89 posted about Rob Telson's thoughts on Edge Impulse and TAO, and I thought a similar situation might prevail for STMicro's customers, who might then choose to incorporate BrainChip's IP.





Hi Bravo,

We know that, from day 1, Akida has been "processor agnostic".

We know that Akida is even more agnosticker where ARM is concerned, compatibility with all ARM processors having been proven.

ARM produces CPU IP. Nvidia produces GPUs and, in 2013, announced that it would license its GPU IP.

https://www.anandtech.com/show/7083/nvidia-to-license-kepler-and-future-gpu-ip-to-3rd-parties

NVIDIA to License Kepler and Future GPU IP to 3rd Parties

by Anand Lal Shimpi, June 18, 2013

Processor agnostic should mean that Akida works with Nvidia GPUs, so has there been anything published to show Akida is also agnosticker with Nvidia?

CPUs basically work by performing processes in series. GPUs get their processing power from parallel processing.
Akida is designed for asynchronous operation and has a basically parallel architecture, so I would guess that Akida can be adapted to operate with GPUs.

The main interactions between Akida 1 and an associated processor are that the processor is used to configure the NN nodes/layers and to allocate the weights. The processor plays no part in the operation of the NN in carrying out its classification/inferencing tasks. The results of the NN's operation are then sent to the processor to use in its specific tasks.

Akida 2, with its TENNs/ViT, does require some limited interaction between the NN and the processor to implement these advanced features. There must be some fiddling around to enable the NN and the processor to work together on them. Clearly that fiddling has been successfully implemented with ARM. It would be nice to have some affirmation that it has also been done with GPUs, and this is implicit in Rob's response to Chapman.
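
To make that Akida 1 division of labour concrete, here's a rough sketch of the host/NPU flow in Python. It loosely follows my recollection of BrainChip's MetaTF "akida" package (Model, devices(), map(), forward()); the model file name and input shape are placeholders, so treat it as illustrative pseudocode rather than a verified example.

```python
# Rough sketch of the Akida 1 host/NPU split described above: the host CPU
# configures the network and collects results, while inference runs on the
# Akida fabric. Names follow my recollection of BrainChip's MetaTF "akida"
# package and may not match the current API exactly; treat as pseudocode.
import numpy as np
import akida

# 1. Host: load a pre-trained, quantised model (an .fbz file produced by MetaTF).
model = akida.Model("kws_model.fbz")                # placeholder file name

# 2. Host: configure the NN, mapping the layers and weights onto the Akida device.
devices = akida.devices()
if devices:
    model.map(devices[0])

# 3. Device: the Akida fabric performs the classification on its own; the host
#    only supplies the input tensor and collects the output.
sample = np.zeros((1, 49, 10, 1), dtype=np.uint8)   # placeholder input shape
outputs = model.forward(sample)

# 4. Host: use the classification result in the application.
print("predicted class:", int(np.argmax(outputs)))
```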
 
Reactions: Like, Love, Fire (37 users)


Diogenese

Top 20
Hi @Bravo,

I’m trying to work out the context of the word “Proprietary” in the article you posted:

“Unlike other STM32 MCU, the STM32N6 includes a proprietary NPU and an ARM Cortex core.”

I’ve listed a few examples of the meaning of “Proprietary”, but I’m still not sure they make their sentence any clearer.

So I don’t know whether the NPU is theirs (i.e. unique to them) or a trade secret.

Fingers crossed it’s BrainChip’s NPU, and the trade secret is that they’re using it!

:)


What does proprietary mean for a product?

A Proprietary Product refers to a product produced under a legally protected proprietary process or a brand name. With the legally backed proprietary rights, others cannot replicate the process or the product.

Does proprietary mean secret?

Proprietary information, also known as a trade secret, is information a company wishes to keep confidential. Proprietary information can include secret formulas, processes, and methods used in production.


What is an example of proprietary in business?

Proprietary information is any information that deals with the activities, business or products of a company. More specifically, some things that commonly fall under this umbrella include trade secrets, financial data, product research and development, computer software, business processes and marketing strategies.
Hi @Bravo ,

Just to drive a stake through the heart of the garlic-allergic monster, this is a very recent application by STM for a Frankenstein in-memory compute chip.

STM use a capacitor to sum currents to perform the analog equivalent of digital MAC (multiply accumulate).


US2023386565A1 IN-MEMORY COMPUTATION CIRCUIT USING STATIC RANDOM ACCESS MEMORY (SRAM) ARRAY SEGMENTATION AND LOCAL COMPUTE TILE READ BASED ON WEIGHTED CURRENT 20230419





An in-memory computation circuit includes a memory array including sub-arrays of SRAM cells connected in rows by word lines and in columns by bit lines. A row controller circuit selectively actuates word lines across the sub-arrays for an in-memory compute operation. A computation tile circuit for each sub-array includes a column compute circuit for each bit line. Each column compute circuit includes a switched timing circuit that is actuated in response to weight data on the bit line for a duration of time set by an in-memory compute operation enable signal. A current digital-to-analog converter powered by the switched timing circuit operates to generate a drain current having a magnitude controlled by bits of feature data for the in-memory compute operation. The drain current is integrated to generate an output voltage.

[004] … A column processing circuit 20 senses the analog current signals on the pairs of complementary bit lines BLT and BLC (and/or on the read bit line BLR) for the M columns and generates a decision output for the in-memory compute operation from those analog current signals. The column processing circuit 20 can be implemented to support processing where the analog current signals on the columns are first processed individually and then followed by a recombination of multiple column outputs.

[0006] The row controller circuit 18 receives the feature data for the in-memory compute operation and in response thereto performs the function of selecting which ones of the word lines WL<0> to WL<N−1> are to be simultaneously accessed (or actuated) in parallel during an in-memory compute operation, and further functions to control application of pulsed signals to the word lines in accordance with that in-memory compute operation. FIG. 1 illustrates, by way of example only, the simultaneous actuation of all N word lines with the pulsed word line signals, it being understood that in-memory compute operations may instead utilize a simultaneous actuation of fewer than all rows of the SRAM array. The analog signals on a given pair of complementary bit lines BLT and BLC (or on the read bit line RBL in the 8T-type implementation) are dependent on the logic state of the bits of the computational weight stored in the memory cells 14 of the corresponding column and the width(s) of the pulsed word line signals applied to those memory cells 14 .

[0007] The implementation illustrated in FIG. 1 shows an example in the form of a pulse width modulation (PWM) for the applied word line signals for the in-memory compute operation dependent on the received feature data. The use of PWM or period pulse modulation (PTM) for the applied word line signals is a common technique used for the in-memory compute operation based on the linearity of the vector for the multiply-accumulation (MAC) operation. The pulsed word line signal format can be further evolved as an encoded pulse train to manage block sparsity of the feature data of the in-memory compute operation. It is accordingly recognized that an arbitrary set of encoding schemes for the applied word line signals can be used when simultaneously driving multiple word lines. Furthermore, in a simpler implementation, it will be understood that all applied word line signals in the simultaneous actuation may instead have a same pulse width.
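
To see what that weighted-current MAC is doing, here's a minimal numerical sketch in Python (not STM's circuit): each word line is pulsed for a time proportional to its feature value, each cell sources a current set by its stored weight, and the integrated charge gives an output voltage that tracks the dot product. All component values are made up for illustration, and the weights are simplified to ternary.

```python
# Minimal numerical model (not STM's circuit) of the weighted-current MAC idea:
# each word line is pulsed for a time proportional to its feature value (PWM),
# each cell sources a current set by its stored weight, and the column capacitor
# integrates that current, so the output voltage tracks sum(feature_i * weight_i).
import numpy as np

rng = np.random.default_rng(0)
features = rng.integers(0, 16, size=8)    # feature data, mapped to word-line pulse widths
weights = rng.integers(-1, 2, size=8)     # weights stored in one SRAM column (simplified to ternary)

dt = 1e-9                                 # 1 ns per feature unit (arbitrary)
i_unit = 1e-6                             # 1 uA per weight unit (arbitrary)
c_int = 1e-12                             # 1 pF integration capacitor (arbitrary)

charge = 0.0
for f, w in zip(features, weights):
    pulse_time = f * dt                   # PWM: pulse width encodes the feature value
    charge += w * i_unit * pulse_time     # current x time accumulates on the capacitor

v_out = charge / c_int                    # integrated output voltage
print("analog MAC estimate (V):", v_out)
print("digital MAC check      :", int(np.dot(features, weights)), "(same result up to a scale factor)")
```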
 
Reactions: Like, Fire, Sad (18 users)


Tothemoon24

Top 20

📣 Have you heard? Our latest GenX320 Metavision® sensor is making waves in the vision-sensing world!

📰 The smallest and most power-efficient event-based vision sensor to date, the GenX320, has captured the attention of industry enthusiasts and experts alike 🌟

🔗 Explore the buzz with these insightful articles:

👉 Imaging and Machine Vision Europe: Prophesee unveils event-based sensor for ultra-low power Edge AI Devices


👉 EE Times | Electronic Engineering Times: Prophesee Reinvents DVS Camera For AIoT Applications (Sally Ward-Foxton)


👉 BFMTV: Prophesee dévoile son nouveau capteur GenX320, capteur neuromorphique le plus petit et le plus économe [Prophesee unveils its new GenX320 sensor, the smallest and most power-efficient neuromorphic sensor]

https://lnkd.in/dTcGx2cS

👉 Fierce Electronics: Prophesee unveils latest sensor, aimed at consumer products, IoT (Dan O'Shea)

https://lnkd.in/d8DTVUAU

👉 EE Journal: Prophesee’s 5th generation sensors detect motion instead of images for industrial, robotic and consumer applications (Steve Leibson)

https://lnkd.in/g9XngTRn

👉 Vision Systems Design: Prophesee Launches Event-Based Vision Sensor (Linda Wilson)

https://lnkd.in/dNHaYrCH

👉 L'Usine Nouvelle: Prophesee part à la conquête de l’IoT et de la réalité immersive avec son nouveau capteur événementiel GenX320 (Frédéric Monflier) [Prophesee sets out to conquer IoT and immersive reality with its new GenX320 event-based sensor]

https://lnkd.in/dwKeVa7s

👉 eeNews Europe: Event-based sensor for ‘always-on’ video, low-power apps (Peter Clarke)

https://lnkd.in/dkfrpTGF

👉 The Ojo-Yoshida Report: Prophesee Emboldens Its Mass Consumer Outreach (Junko Yoshida)

https://lnkd.in/dmNwj4JV

👉 New Electronics: Prophesee launches event-based vision sensor for consumer Edge-AI devices

https://lnkd.in/drMJEub7


🙌 Thank you to these esteemed publications and journalists for covering our latest innovation!

🚀 Stay tuned for more exciting updates!
 
Reactions: Like, Fire, Love (31 users)


jtardif999

Regular
AI threat keeps me awake at night, says Arm boss


Rene Haas believes the rapidly developing technology ‘will change everything we do’ within a decade


December 11 2023, The Times


The head of one of Britain’s most important technology companies has spoken of his fears that humans could lose control of artificial intelligence.


Rene Haas, chief executive of Arm Holdings, the Cambridge-based microchip designer, said the threat kept him up at night. “The thing I worry about most is humans losing capability [over the machines],” he told Bloomberg. “You need some override, some backdoor, some way that the system can be shut down.”


Arm creates the blueprint for energy-efficient microchips and licences these designs to companies such as Apple, Nvidia and Qualcomm. Its processors run virtually every smartphone on the planet, as well as other devices such as digital TVs and drones.


Haas estimated that 70 per cent of the world’s population have come into contact with Arm-designed products in some way. He said AI would be transformational for the company, which is trying to lessen its reliance on the smartphone sector.


“I think it will find its way into everything that we do, and every aspect of how we work, live, play,” he said. “It’s going to change everything over the next five to ten years.”


The company, which was valued at $54.5 billion at its New York stock market listing in September, employs about 6,400 people globally, 3,500 of them in the UK. The shares have since risen from $51 to $67.23.


Arm’s owner, the Japanese tech conglomerate SoftBank, chose the Nasdaq exchange even though the company was listed in London until 2016. The decision was regarded as a blow to the British technology scene, although Arm emphasised its commitment to the UK.


Haas said that access to talent, particularly in the UK, was another concern. “We were born here, we intend to stay here,” he added. “Please make it very easy for us to attract world-class talent and attract engineers to come and work for Arm.”
 
Reactions: Like, Fire, Love (17 users)

TheDrooben

Pretty Pretty Pretty Pretty Good
AI threat keeps me awake at night, says Arm boss


Here is the interview, less than 8 mins long



Happy as Larry
 
Reactions: Like, Love, Fire (16 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!


41 mins

12 Dec 2023 Bloomberg Talks
Arm CEO Rene Haas speaks exclusively with Bloomberg's Tom Mackenzie at the company's global headquarters in Cambridge, UK. The pair spoke about how Arm will prove essential to the generative AI revolution, trying to lessen Arm's dependence on the slowing smartphone industry by getting its technology into new areas such as personal computers, servers and electric vehicles. Haas also spoke about Arm's business in China and the challenges of attracting talent in the UK.
 
Reactions: Like, Fire, Love (11 users)

Deena

Regular
Here is the interview, less than 8 mins long



Happy as Larry

Wow. This video is very encouraging. Well worth a watch and listen. Good one Drooben. Power efficiency is so important ... and Brainchip delivers.
Deena
 
Reactions: Like, Love, Fire (17 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!

Neuromorphic roadmap: are brain-like processors the future of computing?

Neuromorphic chips could reduce energy bills for AI developers as well as emit useful cybersecurity signals.
11 December 2023
Picturing the future of computing.


Rethinking chip design: brain-inspired asynchronous neuromorphic devices are gaining momentum as researchers report on progress.

• The future of computing might not look anything like computing as we know it.
• Neuromorphic chips would function much more like brains than the chips we have today.
• Neuromorphic chips and AI could be a combination that takes us much further – without the energy bills.

A flurry of new chips announced recently by Qualcomm, NVIDIA, and AMD has ramped up competition to build the ultimate PC processor. And while the next couple of years are shaping up to be good ones for consumers of laptops and other PC products, the future of computing could end up looking quite different to what we know right now.
Despite all of the advances in chipmaking, which have shrunk feature sizes and packed billions of transistors onto modern devices, the computing architecture remains a familiar one. General-purpose, all-electronic, digital PCs based on binary logic are, at their heart, so-called Von Neumann machines.

Von Neumann machines versus neuromorphic chips

A basic Von Neumann computing machine features a memory store to hold instructions and data, control and logic units, plus input and output devices.
Demonstrated more than half a century ago, the architecture has stood the test of time. However, bottlenecks have emerged – provoked by growing application sizes and exponential amounts of data.

Processing units need to fetch their instructions and data from memory. And while on-chip caches help reduce latency, there’s a disparity between how fast the CPU can run and the rate at which information can be supplied.
What’s more, having to bus data and instructions between the memory and the processor not only affects chip performance, it drains energy too.
Chip designers have loaded up processors with multiple cores, clustered CPUs, and engineered other workarounds to squeeze as much performance as they can from Von Neumann machines. But this complexity adds cost and requires cooling.
It’s often said that the best solutions are the simplest, and today’s chips based on Von Neumann principles are starting to look mighty complicated. There are resource constraints too, made worse by the boom in generative AI, and these could steer the future of computing away from its Von Neumann origins.
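
To put a number on that CPU-versus-memory disparity, here is a tiny roofline-style back-of-the-envelope in Python. The peak-compute and bandwidth figures are assumptions chosen purely for illustration, not the specs of any real chip.

```python
# Roofline-style estimate of when a chip is memory-bound rather than compute-bound.
# Peak compute and memory bandwidth below are illustrative assumptions only.
peak_flops = 1e12        # assume 1 TFLOP/s of peak compute
mem_bw = 50e9            # assume 50 GB/s of memory bandwidth

# Arithmetic intensity (FLOPs per byte moved) needed before compute, not memory,
# becomes the bottleneck.
breakeven = peak_flops / mem_bw
print(f"need > {breakeven:.0f} FLOPs per byte moved to keep the cores busy")

# A float32 dot product does ~2 FLOPs per 8 bytes loaded (0.25 FLOPs/byte),
# so it can only ever reach a small fraction of peak on this machine.
achievable = min(peak_flops, 0.25 * mem_bw)
print(f"a memory-bound dot product tops out near {achievable / 1e9:.1f} GFLOP/s "
      f"of the {peak_flops / 1e9:.0f} GFLOP/s peak")
```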

Neuromorphic chips and AI – a dream combination?

Large language models (LLMs) have wowed the business world and enterprise software developers are racing to integrate LLMs developed by OpenAI, Google, Meta, and other big names into their products. And competition for computing resources is fierce.
OpenAI had to pause new subscriptions to its paid-for ChatGPT service as it couldn’t keep up with demand. Google, for the first time, is reportedly spending more on compute than it is on people – as access to high-performance chips becomes imperative to revenue growth.


Writing in a Roadmap for Unconventional Computing with Nanotechnology (available on arXiv and submitted to Nano Futures), experts highlight the fact that the computational need for artificial intelligence is growing at a rate 50 times faster than Moore’s law for electronics.
LLMs feature billions of parameters – essentially a very long list of decimal numbers – which have to be encoded in binary so that processors can interpret whether artificial neurons fire or not in response to their software inputs.
So-called ‘neural engines’ can help accelerate AI performance by hard-coding common instructions, but running LLMs on conventional computing architecture is resource-intensive.
Researchers estimate that data processing and transmission worldwide could be responsible for anywhere between 5 and 15% of global energy consumption. And this forecast was made before ChatGPT existed.
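
As a crude sense of scale for those billions of binary-encoded parameters, here is a back-of-the-envelope sketch in Python. The model size, precision and energy-per-byte figures are assumptions picked for illustration, not numbers from the article.

```python
# Rough footprint of an LLM's weights and the cost of streaming them from DRAM.
# All figures below are illustrative assumptions, not measurements.
params = 7e9               # assume a 7-billion-parameter model
bytes_per_param = 2        # 16-bit (2-byte) weights
weight_bytes = params * bytes_per_param

pj_per_byte_dram = 100.0   # assumed off-chip DRAM access energy, picojoules per byte
energy_per_pass_joules = weight_bytes * pj_per_byte_dram * 1e-12

print(f"weights alone: {weight_bytes / 1e9:.1f} GB")
print(f"energy just to stream them once from DRAM: {energy_per_pass_joules:.2f} J")
# Generating each token touches most of these weights, which is why moving the
# computation into (or next to) the memory is so attractive.
```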
But what if developers could switch from modeling artificial neurons in software to building them directly in hardware instead? Our brains can perform all kinds of supercomputing magic using a few Watts of power (orders of magnitude less than computers) and that’s thanks to physical neural networks and their synaptic connections.


Rather than having to pay an energy penalty for shuffling computing instructions and data into a different location, calculations can be performed directly in memory. And developers are busy working on a variety of neuromorphic (brain-inspired) chip ideas to enable computing with small energy budgets, which brings a number of benefits.

“It provides hardware security as well, which is very important for artificial intelligence,” comments Jean Anne Incorvia – who holds the Fellow of Advanced Micro Devices (AMD) Chair in Computer Engineering at The University of Texas at Austin, US – in the roadmap paper. “Because of the low power requirement, these architectures can be embedded in edge devices that have minimal contact with the cloud and are therefore somewhat insulated from cloud‐borne attacks.”

Neuromorphic chips emit cybersecurity signals

What’s more, with neuromorphic computing devices consuming potentially tiny amounts of power, hardware attacks become much easier to detect due to the tell-tale increase in energy demand that would follow – something that would be noticeable through side-channel monitoring.
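
As a toy illustration of that side-channel point: a near-idle neuromorphic edge device has a small, stable power draw, so a sustained jump in consumption is an easy anomaly to flag. The baseline, tolerance and readings below are made up for the example.

```python
# Toy side-channel monitor: flag power samples well above the expected idle draw.
# Baseline, tolerance and readings are invented for illustration only.
def flag_power_anomalies(samples_mw, baseline_mw=2.0, tolerance_mw=0.5):
    """Return indices of samples that exceed the expected idle power draw."""
    return [i for i, p in enumerate(samples_mw) if p > baseline_mw + tolerance_mw]

readings_mw = [2.0, 2.1, 1.9, 2.0, 6.5, 6.8, 2.0]   # milliwatts; the spike mimics tampering
print("suspicious samples:", flag_power_anomalies(readings_mw))   # -> [4, 5]
```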
The future of computing could turn out to be one involving magnetic neural network crossbar arrays, redox memristors, 3D nanostructures, biomaterials and more, with designers of neuromorphic devices using brain functionality as a blueprint.
“Communication strength depends on the history of synapse activity, also known as plasticity,” writes Aida Todri‐Sanial – who leads the NanoComputing Research Lab at Eindhoven University of Technology (TU/e) in The Netherlands. “Short‐term plasticity facilitates computation, while long‐term plasticity is attributed to learning and memory.”


Neuromorphic computing is said to be much more forgiving of switching errors compared with Boolean logic. However, one issue holding back progress is the poor tolerance of device-to-device variations. Conventional chip makers have taken years to optimize their fabrication processes, so the future of computing may not happen overnight.
However, different ways of doing things may help side-step some hurdles. For example, researchers raise the prospect of being able to set model weights using an input waveform rather than having to read through billions of individual parameters.
Also, the more we learn about how the brain functions, the more designers of future computing devices can mimic those features in their architectures.

Giving a new meaning to sleep mode

“During awake activity, sensory signals are processed through subcortical layers in the cortex and the refined outputs reach the hippocampus,” explains Jennifer Hasler and her collaborators, reflecting on what’s known about how the brain works. “During the sleep cycle, these memory events are replayed to the neocortex where sensory signals cannot disrupt the playback.”
Today, closing your laptop – putting the device to sleep – is mostly about power-saving. But perhaps the future of computing will see chips that utilize sleep more like the brain. With sensory signals blocked from disrupting memory events, sleeping provides a chance to strengthen synapses, encode new concepts, and expand learning mechanisms.
And if these ideas sound far-fetched, it’s worth checking out the computing capabilities of slime mold powered by just a few oat flakes. The future of computing doesn’t have to resemble a modern data center, and thinking differently could dramatically lower those energy bills.

 
Reactions: Like, Love, Fire (37 users)

7für7

Top 20

Neuromorphic roadmap: are brain-like processors the future of computing?


“why” neuromorphic chips “could” be the future? 😂 could? COULD???? someone didn’t hear the alarm clock I guess? Ok let’s try this “why washing machines could make your clothes cleaner” 😂 what the heck?
 
Reactions: Haha, Fire, Like (6 users)

Bravo

If ARM was an arm, BRN would be its biceps💪!
Published: December 11, 2023 | Source: ResetEra | Author: Mark Campbell

Alleged PlayStation 5 Pro Specifications emerge – RDNA 3 graphics and an AI-focused NPU



 
Last edited:
Reactions: Like, Wow, Love (23 users)