BRN Discussion Ongoing

Bravo

If ARM was an arm, BRN would be its biceps💪!
Good evening,

Having read the company's 4C and Quarterly Report yesterday, I felt it was positive overall. The Board appears to have done a
thorough evaluation of its earlier statement about moving our entire listing offshore to the US (yes, that's correct, the USA).

And the decision/recommendation was a solid NO..... in my opinion it was, and is, way too early, and the Chairperson is
part of a team that should never revolve around one individual and his fickle ideas. Once again, that's my private view.

The revenue appeared to represent a 10-fold increase. No shareholder should assume anything at this point; what we
would like to see is back-to-back quarters showing another increase, say 2 million plus, slowly but continually increasing, quarter
on quarter, remembering that we are still starting from a very low base.

A concern to me was the failure to raise the 20 million AUD, with only 8.2 million AUD received at an average of 0.2059 per share. Surely this
wasn't the agreement? I must be going mad; I thought the agreement was more like 0.50 a share x 40 million shares issued,
and I thought Ken Scarince confirmed as much at the AGM?...... so clearly, I am wrong.

Nevertheless, we have approximately 17.3 million USD to 1 August 2025, plus revenue (if any).

The grind continues.

Goodnight, All......Tech :sleep:

Hi Tech,

I think it's important to acknowledge that many of us were raising red flags about listing in the US from the outset.

I wouldn’t give too much credit for the BOD reversing a decision that should never have progressed as far as it did.

I think of it as a bit of a misstep.

But I get that everyone will have a different opinion.

Cheers,

B
 

Diogenese

Top 20

Is this also Akida?
Hi Rach,

This is not Akida:

"The AI algorithm runs at remarkable speed. Ideal for demanding space applications"

GR740 has 4*Leon4FT processors.

https://www.frontgrade.com/products...s/leon-sparc-microprocessors-microcontrollers
 

TheDrooben

Pretty Pretty Pretty Pretty Good

(screenshot attached, 2025-08-01)


Happy as Larry
 


Tothemoon24

Top 20

Upgrade the Raspberry Pi for AI with a Neuromorphic Processor​


Chris Anastasi, Applications Engineer
Raspberry Pi developers, makers and hobbyists interested in creating Edge AI use cases need look no further than the BrainChip neuromorphic add-on cards. As interest grows in efficient, scalable AI solutions at the edge, developers are seeking hardware platforms that combine accessibility with advanced performance at a Maker price point. BrainChip is meeting that need by integrating its Akida neuromorphic technology into familiar, compact systems, making it easier than ever to prototype and deploy intelligent applications.

Accelerating Edge AI Development with BrainChip and Raspberry Pi​


BrainChip provides the AKD1000 PCIe card and the AKD1000 M.2 card, development boards that incorporate a neuromorphic processor. These can be bundled into development kits, including a small-form-factor PC and the Raspberry Pi 4, both with the AKD1000 PCIe card, and the Raspberry Pi 5 with the AKD1000 M.2 card. Paired with the Akida MetaTF tools available on our Developer Hub (developer.brainchip.com), these platforms empower customers to rapidly develop product prototypes, showcasing Akida's edge AI capabilities for their specific use cases. These development kits offer a quick evaluation of BrainChip Akida's neuromorphic technology, enabling conversion of CNNs to execute on an event-based neural network.
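The convert-and-deploy flow described above can be sketched in a few lines. This is a hedged outline: the `cnn2snn.convert` and `akida.devices` names follow BrainChip's MetaTF documentation, but treat the exact API as an assumption and check the Developer Hub before relying on it.

```python
# Sketch of the MetaTF workflow: convert a trained Keras CNN to an
# event-based Akida model and map it onto an AKD1000 card if one is present.
# Package and function names are taken from BrainChip's MetaTF docs and are
# assumptions here, not verified against a specific release.

def deploy_to_akida(keras_model):
    try:
        from cnn2snn import convert   # MetaTF CNN-to-SNN converter
        import akida                  # Akida runtime / device enumeration
    except ImportError:
        return None  # MetaTF not installed; nothing to deploy

    akida_model = convert(keras_model)   # CNN -> event-based network
    devices = akida.devices()            # enumerate attached AKD1000 cards
    if devices:
        akida_model.map(devices[0])      # run on hardware, not in simulation
    return akida_model
```

Without the MetaTF packages installed, `deploy_to_akida` simply returns `None`, which makes the sketch safe to keep in a prototype that also runs on machines without the card.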
Raspberry Pi 5 has taken the world of single-board computers by storm, offering substantial upgrades in processing power, RAM, and connectivity. This new model paves the way for more advanced applications in fields like IoT, AI, and edge computing. However, while the Pi 5 is powerful, there’s always room for more — and that’s where the AKD1000 Neuromorphic Processor M.2 Card comes in. This integration opens exciting possibilities for developers, particularly in the field of machine learning (ML) and artificial intelligence (AI).
image-13.png


Why Upgrade the Raspberry Pi 5 with the AKD1000 Neuromorphic Processor?​


The Raspberry Pi 5 comprises a 64-bit quad-core ARM Cortex-A76 processor, dual 4K HDMI outputs, and Gigabit Ethernet. These upgrades make it an ideal candidate for running more demanding tasks like AI processing, but its capabilities are still limited by traditional CPUs and GPUs. This is where the AKD1000 Neuromorphic Processor M.2 Card comes in. This accelerator card is designed to handle complex AI workloads efficiently by mimicking the brain's structure, using a neuromorphic architecture that delivers 1.5 TOPS of highly parallel compute at a power consumption of a few watts. By integrating the AKD1000 with the Raspberry Pi 5, developers can offload AI tasks to the processor, boosting performance while keeping the overall system energy efficient.
1._BC_m2_Card_1.png

Unlocking New Capabilities with AI and Edge Computing​


The AKD1000’s integration with the Raspberry Pi 5 creates a powerful combination for edge computing. Edge computing is essential for real-time, low-latency applications, such as video analytics, facial recognition, and sensor data processing. The ability to run AI models locally on the device, without needing to communicate with cloud servers, significantly reduces response time and bandwidth consumption.
Take IoT applications, for example: A Raspberry Pi 5 paired with the AKD1000 can be used to run machine learning models directly on-site, enabling smart devices to make real-time decisions without relying on external servers. This could revolutionize industries like healthcare (e.g., telemedicine and diagnostics), manufacturing (e.g., predictive maintenance), and even education (e.g., AI-powered robotics and learning tools).
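The bandwidth saving from on-device inference can be made concrete with a back-of-envelope model. All figures below are illustrative assumptions (frame size, rates), not measurements from any BrainChip material:

```python
# Rough daily-upload comparison: streaming raw camera frames to the cloud
# versus sending only locally computed detection results.

def daily_upload_mb(bytes_per_message, messages_per_second):
    """Megabytes uploaded per day at a given message size and rate."""
    return bytes_per_message * messages_per_second * 86_400 / 1e6

# Cloud inference: ship ~100 KB frames at 30 fps, all day.
raw_video = daily_upload_mb(bytes_per_message=100_000, messages_per_second=30)
# Edge inference: ship one ~200-byte detection result per second.
detections = daily_upload_mb(bytes_per_message=200, messages_per_second=1)

print(f"raw video: {raw_video:.0f} MB/day, detections: {detections:.2f} MB/day")
```

Under these assumptions the raw stream is on the order of hundreds of gigabytes per day, while the detection messages are tens of megabytes, which is the bandwidth argument for running the model on the device.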

The Technical Process: Simple Integration for Powerful Results​


Integrating the AKD1000 with the Raspberry Pi 5 is straightforward. A Raspberry Pi HAT (Hardware Attached on Top), a standardized add-on board that attaches to the Raspberry Pi, provides the M.2 slot used to connect the AKD1000, and with the right drivers and software the two components communicate seamlessly. This allows the Raspberry Pi to offload AI-intensive tasks to the AKD1000, so the system can handle more complex workloads without bogging down the Pi's primary processor.

What’s Next for Raspberry Pi and AI?​


The combination of the Raspberry Pi 5 and AKD1000 is just the beginning. As AI and edge computing continue to grow, the ability to integrate specialized processors like the AKD1000 will become even more critical. This modular approach allows developers to customize their systems for specific applications, creating highly flexible, powerful platforms.
This integration sets the stage for a new wave of AI-powered edge computing applications, from smart cities and healthcare to advanced robotics. Developers now have the opportunity to push the boundaries of what’s possible with the Raspberry Pi 5, taking full advantage of its affordability, versatility, and newfound processing power.

The Future of Edge AI is Here​


As an AE/FAE providing design-in assistance and technical support for edge AI applications, I've found that these development kits make my job, and our customers' jobs, faster, easier and, most importantly, fun!
For developers seeking scalable, efficient AI at the edge, this combination offers a compelling foundation to build, test, and deploy. Whether you’re working on robotics, IoT, or any other AI-driven project, this pairing offers a cost-effective, powerful solution that can handle the most demanding tasks. The future of edge AI is now within reach, and with the right tools and integration, developers can create the next generation of intelligent, real-time systems.
Come to BrainChip’s Developer Hub at developer.brainchip.com and use the discount code RaspberryPi25 to get a real deal on a neuromorphic processor for your very own Raspberry Pi.
 
Asking GPT.


BrainChip's Akida neuromorphic processor is expected to be integrated into Renesas's microcontrollers (MCUs) and microprocessors (MPUs), targeting a broad range of edge AI applications. These integrations are anticipated to enable on-device learning and inference for applications such as industrial automation, smart home devices, automotive systems, and consumer electronics.[1] The collaboration aims to bring AI capabilities directly to the edge, reducing latency and improving privacy by processing data locally.[2]
According to www.iAsk.Ai - Ask AI:
While specific product names for 2026 availability are not yet publicly detailed by Renesas or BrainChip, the licensing agreement suggests that Renesas will incorporate Akida IP into its future product roadmap, leading to new AI-enabled MCUs and MPUs.[3] The timeline for commercial availability of end products incorporating these integrated chips is often subject to development cycles and market demand. However, given the strategic nature of the partnership announced in late 2023, it is plausible that initial products or development kits featuring Akida-enabled Renesas chips could be available for sale or sampling in 2026, particularly for industrial and automotive sectors where design cycles are longer but early adoption is critical.[4]
 

Wags

Regular
Asking GPT.


BrainChip's Akida neuromorphic processor is expected to be integrated into Renesas's microcontrollers (MCUs) and microprocessors (MPUs), targeting a broad range of edge AI applications. These integrations are anticipated to enable on-device learning and inference for applications such as industrial automation, smart home devices, automotive systems, and consumer electronics.[1] The collaboration aims to bring AI capabilities directly to the edge, reducing latency and improving privacy by processing data locally.[2]
According to www.iAsk.Ai - Ask AI:
While specific product names for 2026 availability are not yet publicly detailed by Renesas or BrainChip, the licensing agreement suggests that Renesas will incorporate Akida IP into its future product roadmap, leading to new AI-enabled MCUs and MPUs.[3] The timeline for commercial availability of end products incorporating these integrated chips is often subject to development cycles and market demand. However, given the strategic nature of the partnership announced in late 2023, it is plausible that initial products or development kits featuring Akida-enabled Renesas chips could be available for sale or sampling in 2026, particularly for industrial and automotive sectors where design cycles are longer but early adoption is critical.[4]
Hi Smoothsailing 18.
I'm just not completely trusting of ChatGPT or others, though they are obviously very good tools in the right context. See the Renesas announcement from December 2020.
First Akida IP License Agreement
cheers
 
Pretty decent breakdown & summary of Akida by Electronics | Projects | Focus. Whether AI gen or human written, who knows these days 🤷‍♂️



BrainChip Akida : Architecture, Working, Advantages, Limitations & Its Applications​


A neuromorphic processor is a computer chip that mimics the structure and function of the human brain. It integrates artificial neurons and synapses to process data in a highly parallel, energy-efficient manner, which suits machine learning and artificial intelligence workloads. Inspired by the brain's neural networks, this approach attains higher performance with less power consumption, and it plays a key role in areas that need energy-efficient, real-time processing such as robotics, edge AI, and autonomous vehicles. This article elaborates on BrainChip Akida, how it works, and its applications.

What is BrainChip Akida?​

BrainChip Akida is a low-power, adaptable, and powerful neuromorphic processor. It is designed to mimic the neural architecture of the human brain by allowing on-chip learning, efficient data processing, and ultra-low power, particularly in edge AI applications like consumer electronics, industrial IoT, and connected cars. In addition, this processor aims to bring superior AI capabilities to a broad range of edge devices, changing how we communicate with technology.
Akida shines at event-based processing, focusing on important data changes instead of processing whole frames. This leads to improved speed and reduced power usage compared with conventional frame-based AI processing.
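"Focusing on changes instead of whole frames" can be shown in a few lines. This is a deliberately minimal sketch: real event-based hardware uses a richer event format, and the threshold and helper name here are our own illustration:

```python
# Minimal frame-difference event generation: emit an event only for pixels
# whose value changed by more than a threshold, instead of reprocessing
# every pixel of every frame.

def events_between(prev_frame, next_frame, threshold=10):
    """Return (pixel_index, delta) events for significantly changed pixels."""
    return [
        (i, b - a)
        for i, (a, b) in enumerate(zip(prev_frame, next_frame))
        if abs(b - a) > threshold
    ]

prev = [0, 0, 128, 255, 7]
nxt  = [0, 90, 128, 255, 12]      # only pixel 1 changed by more than 10
print(events_between(prev, nxt))  # [(1, 90)] -> one event instead of five pixels
```

Downstream layers then do work proportional to the number of events, not the number of pixels, which is where the speed and power saving comes from.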

BrainChip Akida Processor
BrainChip Akida Processor

How does BrainChip Akida Work?​

The BrainChip Akida processor uses an event-based, spiking neural network (SNN) architecture to perform AI computations. It processes only events, that is, significant changes in the data, which leads to substantial power and energy savings. Combined with its neuromorphic design, this event-based approach lets Akida handle AI tasks such as audio processing, sensor fusion, and image recognition efficiently at the edge.
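The basic unit of an SNN is a spiking neuron. The toy leaky integrate-and-fire model below shows the mechanism (accumulate input, leak over time, spike and reset on crossing a threshold); the parameters are illustrative and not Akida's actual neuron model:

```python
# Toy leaky integrate-and-fire (LIF) neuron: the kind of unit a spiking
# neural network is built from. Parameters are illustrative only.

def lif_spikes(inputs, threshold=1.0, leak=0.5):
    """Integrate input each step with leak; emit 1 and reset on threshold."""
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x   # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.6, 0.6, 0.0, 1.2]))  # [0, 0, 0, 1]
```

Note the neuron stays silent when input is weak or absent, which is the cell-level version of "no events are generated for zero values".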

BrainChip Akida Architecture​

BrainChip Akida is an ultra-low-power neuromorphic processor modeled on the brain's neural architecture. It accelerates complex AI at the edge using event-based processing, on-chip learning, and support for advanced neural networks such as CNNs, RNNs, and custom temporal event-based networks. The Akida processor is designed to accelerate CNNs (convolutional neural networks), DNNs (deep neural networks), RNNs (recurrent neural networks), and ViTs (Vision Transformers) directly in hardware.
Akida uses a processing approach based on events, where computations are executed only when new sensor input is obtained, thus reducing the number of operations. In addition, it can also allow event-based communication among processor nodes without the intervention of the CPU. Further, this architecture can also support on-chip learning by allowing models to adjust without connecting to the cloud.

BrainChip Akida Architecture
BrainChip Akida Architecture

Components​

The BrainChip Akida architecture comprises several components: data input interfaces, an on-chip processor, data processing and event generation, external memory interfaces, multichip expansion, and a flexible Akida neuron fabric. These components are discussed below.

Data Input Interfaces​

The data input interfaces of BrainChip Akida include PCI-Express, USB 3.0 endpoint, I2S, UART, I2C, and JTAG, which are discussed below.

PCI-Express​

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard that helps in connecting a variety of computer components to the motherboard, like storage devices and graphics cards. Thus, this interface ensures optimal functionality and performance of components in a computer.

USB 3.0 endpoint​

A USB 3.0 endpoint is an addressable location on the device that handles data transmission. Each endpoint is associated with a transfer type (control, bulk, interrupt, or isochronous) and a direction: IN for data supplied to the host from the device, and OUT for data sent to the device from the host. Endpoints are grouped into interfaces, which represent logical functions such as a keyboard or mouse.

UART, I2C and JTAG​

UART (Universal Asynchronous Receiver/Transmitter) is a communication protocol that allows serial data transmission between two devices, sending one bit at a time over a pair of transmit and receive wires. The protocol is asynchronous: it does not rely on a shared clock signal for synchronization, but instead uses start and stop bits to frame the data.
Inter-Integrated Circuit (I2C) is a two-wire serial communication protocol that connects multiple master and slave devices within embedded systems. It is known for its simplicity, using just two wires: SDA for data and SCL for the clock. I2C is commonly used for short-distance communication between memory chips, sensors, microcontrollers, and other peripheral devices.

JTAG (Joint Test Action Group) is a standard interface for debugging and testing electronic circuits. It gives access to specific points in a device's circuitry, such as memory modules and embedded processors, enabling tasks like testing connections, debugging code, and programming firmware.
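The start/stop framing described for UART is easy to see in code. A minimal sketch of one 8N1 frame (8 data bits, no parity, 1 stop bit; the helper name is ours, and LSB-first data order is the usual UART convention):

```python
# One 8N1 UART frame: the bit sequence actually placed on the wire for a
# single byte, with no shared clock between the two devices.

def uart_frame(byte):
    """Start bit (0), 8 data bits LSB-first, stop bit (1)."""
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    return [0] + data_bits + [1]

frame = uart_frame(0x41)  # ASCII 'A' = 0b01000001
print(frame)  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

The receiver detects the falling edge of the start bit, samples the eight data bits at the agreed baud rate, and checks for the stop bit, which is how the two ends stay synchronized without a clock wire.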

On-chip Processor​

This processor includes an M-class CPU, system management, and Akida configuration, which are explained below.
The M-class CPU in the BrainChip Akida neuromorphic processor is an Arm Cortex-M class processor used for primary setup and system management tasks, including loading the neural network computational graph and managing I/O functions.
System management informs the neuron fabric whether it should be in inference or training mode, and sets the thresholds within the neuron fabric.
The Akida configuration block, inspired by the brain's neural architecture, accelerates complex AI at the edge through on-chip learning, event-based processing, and support for advanced neural networks.

Data Processing & Event Generation​

The Akida processor is event-based, meaning it processes data in the form of events. An event marks an occurrence of interest, such as a change between frames of a picture or a change in color, and can be expressed as a short burst of energy. In Akida, that burst carries a value representing neural activation. Where zero values occur within the network, no events are generated.
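The saving from "zero values generate no events" can be counted directly: in an event-based layer, multiply-accumulate work is only triggered by nonzero activations. The sketch below is an illustrative operation count, not a model of Akida's actual datapath:

```python
# Operation count for one layer: frame-based hardware touches every
# activation, event-based hardware only the nonzero ones.

def macs_needed(activations, fan_out):
    """MACs when each *nonzero* activation drives fan_out synapses."""
    nonzero = sum(1 for a in activations if a != 0)
    return nonzero * fan_out

acts = [0, 0, 3, 0, 1, 0, 0, 0]            # 75% zeros: typical sparse activity
dense_cost = len(acts) * 4                 # frame-based: all 8 activations
event_cost = macs_needed(acts, fan_out=4)  # event-based: only the 2 nonzeros
print(dense_cost, event_cost)  # 32 8
```

With 75% sparsity the event-driven count is a quarter of the dense one, and real activation sparsity in trained networks is often higher still.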

External Memory Interfaces​

External memory interfaces of the BrainChip Akida include SPI Flash and LPDDR4, which are explained below.
SPI flash (Serial Peripheral Interface flash memory) is non-volatile memory commonly used in embedded systems to store code and data, communicating with a host microcontroller over SPI. It is small, low cost, and suitable for storing program code, data, and boot code in embedded systems.
LPDDR4 (Low-Power Double Data Rate 4) is a low-power memory used primarily in mobile devices. It is designed for small size and low power consumption, making it well suited to portable electronics, and it provides significant improvements in power efficiency and data rates compared with the earlier LPDDR3 generation.

Multichip Expansion​

Multichip expansion is the integration of multiple chips into a single package to form an MCM (multi-chip module). This approach increases functionality, reduces the size of electronic devices, and raises performance, making it particularly relevant to the ongoing miniaturization trend in the electronics industry.
In Akida, PCIe links allow data-center deployments and can be balanced through the multichip expansion port. This high-speed serial interface carries spikes between neural processing cores and can be expanded to 1024 devices for extremely large spiking neural networks.

Advantages​

The BrainChip Akida advantages include the following.
  • Akida provides real-time insights faster and more efficiently than conventional processors.
  • It powers edge applications with high efficiency and accuracy.
  • It is an ultra-low-power neuromorphic processor.
  • It uses event-based processing and on-chip learning, and supports advanced NNs such as RNNs, CNNs, and custom temporal event-based networks.
  • By fully integrating neural network (NN) control, neuronal mathematics, and parameter memory, Akida removes significant compute and I/O data-movement overhead, saving watts of otherwise avoidable power consumption.

Limitations​

BrainChip Akida's limitations lie in software support, integration challenges, and ecosystem maturity.
  • Software support is limited by incomplete compatibility with mainstream AI frameworks.
  • The ecosystem is still maturing, with a fairly young and narrow developer community.
  • Integration challenges can arise from hardware compatibility issues and the need for specialized expertise.

BrainChip Akida Applications​

The BrainChip Akida applications include the following.
  • The Akida neural processor can run today's most common neural networks, convolutional NNs, in event-based hardware, as well as next-generation SNNs.
  • It is applicable in neuromorphic computing and edge AI areas such as consumer electronics, connected vehicles, IoT sensors, and industrial automation.
  • The chip is designed for low power consumption and efficient sensor data processing at the edge, enabling always-on performance, improved safety, and quicker response times.
  • The technology also has the potential to transform healthcare, for example in the treatment of neurological disease, and possibly in augmenting cognitive abilities.
  • It allows ultra-low-power intelligence that extends battery life and minimizes device size, strengthening security and delivering always-on performance in smart cameras, wearables, and more.

FAQ’S​

1. What is BrainChip Akida?​

BrainChip Akida is a neuromorphic processor designed to mimic the brain’s neural architecture, enabling ultra-low power and high-performance AI processing at the edge.

2. What makes BrainChip Akida different from traditional AI chips?​

Akida uses a spiking neural network (SNN) and event-based processing, unlike traditional chips that rely on frame-based, power-intensive computation.

3. What is event-based processing in Akida?​

Event-based processing means the chip only processes data when there is a significant change or “event,” reducing energy usage and improving efficiency.

4. What is a Spiking Neural Network (SNN)?​

An SNN is a type of artificial neural network that mimics the brain’s method of processing information using spikes, enabling real-time and low-power inference.

5. Is BrainChip Akida suitable for edge AI applications?​

Yes, Akida is specifically designed for edge AI tasks, offering always-on performance in devices like wearables, smart cameras, industrial sensors, and autonomous vehicles.

6. Can BrainChip Akida learn on the chip itself?​

Yes, Akida supports on-chip learning, allowing it to adapt to new data in real-time without cloud connectivity.

7. Which neural networks are supported by Akida?​

Akida can accelerate convolutional neural networks (CNNs), recurrent neural networks (RNNs), vision transformers (ViTs), and custom temporal event-based networks.

8. What is the power consumption of BrainChip Akida?​

Akida operates at ultra-low power, often consuming milliwatts, making it ideal for battery-powered and low-resource devices.

9. What industries use BrainChip Akida?​

Industries include automotive, consumer electronics, industrial IoT, healthcare, defense, and security.

10. Does BrainChip Akida support multichip expansion?​

Yes, Akida supports multi-chip configurations and can scale up to 1024 devices using high-speed PCIe interfaces for larger spiking neural networks.

11. Is BrainChip Akida commercially available?​

Yes, BrainChip offers commercial Akida IP, development kits, and modules that are available for integration into third-party hardware solutions.

12. What programming tools or SDKs are available for Akida?​

BrainChip provides its MetaTF development environment and supports training models using TensorFlow/Keras, which can be converted to run on Akida.

13. Can I run traditional AI models on Akida?​

Yes, trained CNN or DNN models can be converted and optimized for Akida’s architecture using BrainChip’s software tools.

14. How does Akida compare to Intel Loihi or IBM TrueNorth?​

Akida is unique for its on-chip learning, commercial availability, and compatibility with modern edge AI applications, whereas others are more research-focused.

15. What is the neuron fabric in Akida?​

The neuron fabric is Akida’s core component that handles parallel event-based neural computations, inspired by biological neurons and synapses.

16. What types of sensors can Akida interface with?​

Akida can interface with microphones, cameras, IMUs, and other sensor types using I2C, UART, USB 3.0, and PCIe interfaces.

17. What memory interfaces does BrainChip Akida support?​

Akida supports LPDDR4 and SPI Flash for high-speed, low-power memory operations in edge devices.

18. How secure is BrainChip Akida?​

Akida includes built-in memory protection and does not rely on cloud connectivity for inference, offering improved security for sensitive edge applications.

19. What are some real-world applications of Akida?​

  • Keyword spotting in voice assistants
  • Predictive maintenance in factories
  • Object detection in smart cameras
  • Driver behavior analysis in vehicles
Thus, BrainChip Akida represents a new frontier in neuromorphic computing, bringing intelligent, low-power, real-time AI to edge devices. Its event-driven, on-chip learning capabilities make it a game changer for smart sensors, wearables, autonomous vehicles, and more. As the demand for efficient AI grows, Akida is set to lead the evolution of AI hardware at the edge. Here is a question for you: what is an example application of BrainChip Akida?
 


Bravo

If ARM was an arm, BRN would be its biceps💪!
Couldn't BrainChip’s Akida potentially be utilized as an accelerator card-style NPU?



AMD mulls dedicated NPUs for desktop PCs - like graphics cards, but for AI tasks - and this could be excellent news for PC gamers​

By Darren Allan published 7 hours ago
Easing the pressure on the supply of higher-end GPUs that AI power users might snap up


A masculine hand holding an AMD Radeon RX 9070

(Image credit: Future / John Loeffler)


  • AMD's head of client CPUs says it's looking into dedicated NPU accelerators
  • These would be the equivalent of a discrete GPU, but for AI tasks
  • Such boards would lessen demand on higher-end GPUs, as they'd no longer be bought for AI work, as they are in some cases

AMD is looking to a future where it might not just produce standalone graphics cards for desktop PCs, but similar boards which would be the equivalent of an AI accelerator - a discrete NPU, in other words.

CRN reports (via Wccftech) that AMD's Rahul Tikoo, head of its client CPU business, said that Team Red is “talking to customers” about “use cases” and “potential opportunities” for such a dedicated NPU accelerator card.
CRN points out that there are already moves along these lines afoot, such as an incoming Dell Pro Max Plus laptop, which is set to boast a pair of Qualcomm AI 100 PC inference cards. That's two discrete NPU boards with 16 AI cores and 32GB of memory apiece, for 32 AI cores and 64GB of RAM in total.

To put that in perspective, current integrated (on-chip) NPUs, such as those in Intel's Lunar Lake CPUs, or AMD's Ryzen AI chips, offer around 50 TOPS - ideal for Copilot+ PCs - whereas you're looking at up to 400 TOPS with the mentioned Qualcomm AI 100. These boards are for beefy workstation laptops and AI power users.

Tikoo observed: "It’s a very new set of use cases, so we're watching that space carefully, but we do have solutions if you want to get into that space - we will be able to."
The AMD exec wouldn't be drawn to provide a hint at a timeframe in which AMD might be planning to realize such discrete NPU ambitions, but said that "it's not hard to imagine we can get there pretty quickly" given the 'breadth' of Team Red's technologies.

An AMD Radeon RX 9070 XT in a test bench

(Image credit: Future / John Loeffler)

Analysis: potentially taking the pressure off high-end GPU demand​

So, does this mean it won't be too long before you might be looking at buying your desktop PC and mulling a discrete NPU alongside a GPU? Well, not really, this still isn't consumer territory as such - as noted, it's more about AI power users - but it will have an important impact on everyday PCs, at least for enthusiasts.

These standalone NPU cards will only be needed by individuals working on more heavyweight AI tasks with their PC. They will offer benefits for running large AI models or complex workloads locally rather than on the cloud, with far more responsive performance (dodging the delay factor that's inevitably brought into the mix when piping work online, into the cloud).
There are obvious privacy benefits from keeping work on-device, rather than heading cloud-wards, and these discrete NPUs will be designed to be more efficient than GPUs taking on these kinds of workloads - so there will be power savings to be had.
And it's here we come to the crux of the matter for consumers, at least enthusiast PC gamers looking at buying more expensive graphics cards. As we've seen in the past, sometimes individuals working with AI purchase top-end GPUs - like the RTX 5090 or 5080 - for their rigs. When dedicated NPUs come out from AMD (and others), they will offer a better choice than a higher-end GPU - which will take pressure off the market for graphics cards.
So, especially when a new range of GPUs comes out, and there's an inevitable rush to buy, there'll be less overall demand on higher-end models - which is good news for supply and pricing, for gamers who want a graphics card to, well, play PC games, and not hunker down to AI workloads.
Roll on the development of these standalone NPUs, then - it’s got to be a good thing for gamers in the end. Another thought for the much further away future is that eventually, these NPUs may be needed for AI routines within games, when complex AI-driven NPCs are brought into being. We've already taken some steps down this road, cloud-wise, although whether that's a good thing or not is a matter of opinion.
 

7für7

Top 20
Couldn't BrainChip’s Akida potentially be utilized as an accelerator card-style NPU?



MD mulls dedicated NPUs for desktop PCs - like graphics cards, but for AI tasks - and this could be excellent news for PC gamers​

By Darren Allan published 7 hours ago
Easing the pressure on the supply of higher-end GPUs that AI power users might snap up


A masculine hand holding an AMD Radeon RX 9070

(Image credit: Future / John Loeffler)


  • AMD's head of client CPUs says it's looking into dedicated NPU accelerators
  • These would be the equivalent of a discrete GPU, but for AI tasks
  • Such boards would lessen demand on higher-end GPUs, as they'd no longer be bought for AI work, as they are in some cases

AMD is looking to a future where it might not just produce standalone graphics cards for desktop PCs, but similar boards which would be the equivalent of an AI accelerator - a discrete NPU, in other words.

CRN reports (via Wccftech) that AMD's Rahul Tikoo, head of its client CPU business, said that Team Red is “talking to customers” about “use cases” and “potential opportunities” for such a dedicated NPU accelerator card.
CRN points out that there are already moves along these lines afoot, such as an incoming Dell Pro Max Plus laptop, which is set to boast a pair of Qualcomm AI 100 PC inference cards. That's two discrete NPU boards with 16 AI cores and 32GB of memory apiece, for 32 AI cores and 64GB of RAM in total.

To put that in perspective, current integrated (on-chip) NPUs, such as those in Intel's Lunar Lake CPUs, or AMD's Ryzen AI chips, offer around 50 TOPS - ideal for Copilot+ PCs - whereas you're looking at up to 400 TOPS with the mentioned Qualcomm AI 100. These boards are for beefy workstation laptops and AI power users.
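As a quick sanity check on those figures, here is the arithmetic behind the Dell configuration, using only the per-card numbers quoted in the article (these are the article's figures, not verified Qualcomm specs, and the "up to 400 TOPS" figure is read here as per card):

```python
# Back-of-envelope totals for the dual Qualcomm AI 100 card setup quoted
# above; per-card figures come from the article, not official spec sheets.
cards = 2
cores_per_card = 16
mem_gb_per_card = 32
integrated_npu_tops = 50   # typical on-chip NPU (Copilot+ class)
ai100_tops = 400           # "up to" figure cited for one AI 100 card

total_cores = cards * cores_per_card        # 32 AI cores
total_mem_gb = cards * mem_gb_per_card      # 64 GB
ratio = ai100_tops / integrated_npu_tops    # ~8x a single integrated NPU

print(total_cores, total_mem_gb, ratio)
```

So even a single such card lands roughly eight times above today's integrated NPUs on the headline TOPS number, which is why these boards target workstations rather than everyday Copilot+ laptops.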

Tikoo observed: "It’s a very new set of use cases, so we're watching that space carefully, but we do have solutions if you want to get into that space - we will be able to."
The AMD exec wouldn't be drawn to provide a hint at a timeframe in which AMD might be planning to realize such discrete NPU ambitions, but said that "it's not hard to imagine we can get there pretty quickly" given the 'breadth' of Team Red's technologies.

An AMD Radeon RX 9070 XT in a test bench

(Image credit: Future / John Loeffler)

Analysis: potentially taking the pressure off high-end GPU demand​

So, does this mean it won't be too long before you might be looking at buying your desktop PC and mulling a discrete NPU alongside a GPU? Well, not really, this still isn't consumer territory as such - as noted, it's more about AI power users - but it will have an important impact on everyday PCs, at least for enthusiasts.

These standalone NPU cards will only be needed by individuals working on more heavyweight AI tasks with their PC. They will offer benefits for running large AI models or complex workloads locally rather than on the cloud, with far more responsive performance (dodging the delay factor that's inevitably brought into the mix when piping work online, into the cloud).
There are obvious privacy benefits from keeping work on-device, rather than heading cloud-wards, and these discrete NPUs will be designed to be more efficient than GPUs taking on these kinds of workloads - so there will be power savings to be had.
And it's here we come to the crux of the matter for consumers, at least enthusiast PC gamers looking at buying more expensive graphics cards. As we've seen in the past, sometimes individuals working with AI purchase top-end GPUs - like the RTX 5090 or 5080 - for their rigs. When dedicated NPUs come out from AMD (and others), they will offer a better choice than a higher-end GPU - which will take pressure off the market for graphics cards.
So, especially when a new range of GPUs comes out, and there's an inevitable rush to buy, there'll be less overall demand on higher-end models - which is good news for supply and pricing, for gamers who want a graphics card to, well, play PC games, and not hunker down to AI workloads.
Roll on the development of these standalone NPUs, then - it’s got to be a good thing for gamers in the end. Another thought for the much further away future is that eventually, these NPUs may be needed for AI routines within games, when complex AI-driven NPCs are brought into being. We've already taken some steps down this road, cloud-wise, although whether that's a good thing or not is a matter of opinion.

Time will tell… but for sure not a price-sensitive announcement


loop perfect loops GIF
 
  • Haha
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

giphy-downsized.gif

Arm shares drop as outlook disappoints; company looks to invest to make own chips​

By Max A. Cherney and Arsheeya Bajwa
July 31, 2025 9:32 AM GMT+10 (updated July 31, 2025)



Malaysia PM announces $250 million deal with Arm Holdings for chip design blueprints

Rene Haas, CEO of chip tech provider Arm Holdings, holds a replica of a chip with his company's logo on it, during an event in which Malaysia's Prime Minister Anwar Ibrahim officially announces a $250 million deal with the company, in Kuala Lumpur, Malaysia March 5, 2025.

  • Arm's stock drops 8% after disappointing quarterly forecast
  • Arm to invest in developing finished chips, CEO says
  • CEO declined to share details on plans to develop finished chips
July 30 (Reuters) - Arm Holdings shares tumbled 8% in extended trading on Wednesday, after the chip tech provider issued quarterly forecasts that disappointed investors, in part because of its plans to invest a portion of its profit into building its own chips and other components.
The company forecast fiscal second-quarter profit slightly below estimates as global trade tensions threaten to hit demand for Arm in its mainstay smartphone market, failing to satisfy investors who have sent the stock surging in recent months.

The plan to invest more heavily in developing its own chips marks a departure from Arm's long-time business of supplying intellectual property to companies ranging from Nvidia (NVDA.O) to Amazon.com (AMZN.O), which already design their own chips.

Finished chips are the "physical embodiment" of a product Arm already sells called Compute Sub Systems (CSS), Arm CEO Rene Haas said.

"We are consciously deciding to invest more heavily - (in) the possibility of going beyond (designs) and building something, building chiplets or even possible solutions," Haas said in an interview with Reuters.


Chiplets are smaller, function-specific versions of a larger chip that designers can use as building blocks to form a complete processor. Solutions integrate hardware and software.

The decision to increase its investments in potential chips, chiplets and solutions may not result in a product if Arm decides to halt development or pause various projects, the company said.

If the company opts to make a full chip, it will eat into the company's profit and is no guarantee of success. Advanced AI chips cost upwards of $500 million for the silicon alone and potentially more for the server hardware and software necessary to support it.

To build up the necessary staff to make chiplets and other finished chips, Arm has been recruiting from its customers and competing against them for deals.
Haas declined to provide a timeframe in which the company's investments in the new strategy would translate into profit, or give specifics about potential new products that are part of the initiative. But he said that Arm would look at chiplets, "a physical chip, a board, a system, all of the above."

For years, the SoftBank Group-owned Arm has embarked on an ambitious campaign to expand its revenue and boost its profit through a combination of new, higher-margin products, such as the CSS tech, and higher royalties collected on each chip. Details of discussions among Arm executives about making its own chips emerged during a trial in December.


The decision to build its own chip could bring Arm into direct competition with its customers, such as Nvidia (NVDA.O), which rely on the company's intellectual property.

INVESTORS DISAPPOINTED​

Arm's chip technology powers nearly every smartphone in the world, and its tame forecast underscores uncertainty faced by global manufacturers and their suppliers resulting from U.S. President Donald Trump's tariff policies.
UK-based Arm forecast adjusted per-share profit between 29 cents and 37 cents for the fiscal second quarter, the midpoint of which is below analysts' average estimate of 36 cents per share, according to LSEG data.
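For what it's worth, the midpoint arithmetic the article is relying on works out as follows, using only the figures quoted above (all values in US cents):

```python
# Midpoint of Arm's adjusted EPS guidance range vs. the consensus
# estimate, as quoted in the article (US cents per share).
guide_low, guide_high = 29, 37
consensus = 36

midpoint = (guide_low + guide_high) / 2   # 33.0 cents
shortfall = consensus - midpoint          # 3.0 cents below consensus

print(midpoint, shortfall)
```

A 33-cent midpoint against a 36-cent consensus is the roughly 8% disappointment that the share-price drop reflects.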

"Results and outlook were light and below expectations," said Summit Insights analyst Kinngai Chan.
Arm has surged around 150% since its stock market debut in 2023, and its shares recently traded at over 80 times expected earnings, far higher than the P/E valuations of Nvidia, Advanced Micro Devices and other chipmakers focused on AI.
Smartphones remain Arm's biggest stronghold. Morningstar analysts expect Arm to continue as the dominant architecture provider in smartphone processors, where it has a 99% market share.

Uncertainty fueled by tariff volatility and ongoing macroeconomic challenges has tapered end-market demand, with global smartphone shipments increasing just 1% in the April-to-June period, according to International Data Corporation.
Arm expects current-quarter revenue between $1.01 billion and $1.11 billion, in line with estimates of $1.06 billion.
The company reported first-quarter sales of $1.05 billion, coming in just shy of estimates of $1.06 billion. Adjusted profit of 35 cents per share was in line with estimates.
"Smartphone royalties (call it “Android on a low‑carb diet”) remain soft, especially in China, but cloud‑server and AI accelerator design wins keep the (next generation Arm tech) royalty treadmill humming," Running Point Capital chief investment officer Michael Schulman said.


 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 12 users

Arm shares drop as outlook disappoints; company looks to invest to make own chips
I have always thought chiplets are the future, with AI allowing small form factors across all verticals.
As ARM starts to head this way, designing their own chips, BRN can bring models that enhance performance, so we should do very well out of this new direction IMO.
 
  • Like
  • Fire
  • Thinking
Reactions: 10 users

gilti

Regular
How is this crap legal
 

Attachments

  • brn3.JPG
    brn3.JPG
    43.2 KB · Views: 54
  • Like
  • Fire
  • Haha
Reactions: 16 users

Diogenese

Top 20
  • Like
  • Haha
  • Fire
Reactions: 14 users

7für7

Top 20
How is this crap legal

They are looking like “aaaaarghhh…I think I will sell one more share… it’s soooo fun”

Tired Monday GIF
 
  • Haha
  • Like
Reactions: 6 users
Top Bottom