FabricatedLunacy
Tick Tock Sean... End of 2025 is near. Nothing has been announced or happened, as per usual.
Strange. Happy Sunday to most.
The likelihood of BrainChip's Akida technology being integrated into mobile phones is high, with strong indications of imminent commercialization.
According to www.iAsk.Ai - Ask AI:
BrainChip's neuromorphic AI processors, particularly the Akida NSoC, are designed for ultra-low power consumption and on-device processing, making them ideal for resource-constrained edge devices like smartphones [1] [2] [3]. The company has explicitly stated its intention to expand into consumer electronics, including mobile and embedded devices.
I asked my question negatively in order to get a negative response; you can't really trust them that much. Strange.
I asked the same question:
It is unlikely that BrainChip's neuromorphic AI processors, such as the Akida Pico, will end up in mainstream smartphones as their primary AI processing unit, though they could be integrated for specific, highly power-constrained tasks. While BrainChip's technology is designed for power-constrained devices and the "extreme edge," which includes mobile phones and wearables, the current market for smartphone AI processing is dominated by more general-purpose AI accelerators integrated into powerful System-on-Chips (SoCs) [1] [2] [3].
According to www.iAsk.Ai - Ask AI:
BrainChip's Akida platform, including the recently launched AKD1500 and the miniaturized Akida Pico, is specifically engineered for ultra-low power, real-time AI processing at the edge, leveraging neuromorphic principles and event-based hardware acceleration [4] [5] [6]. This design minimizes compute and communication, making it highly efficient for applications where power consumption and thermal limits are critical, such as battery-powered wearables, smart sensors, and heat-constrained environments [5] [7]. The Akida Pico, for instance, consumes as little as 1 milliwatt of power, targeting devices with severe limitations on power and wireless communication capacities [7].
However, the "extreme edge" market, while encompassing mobile phones, also includes a broader range of smaller user devices and wearables [7]. While BrainChip's technology offers significant power savings for specific tasks like keyword detection and audio de-noising, potentially reducing power consumption to one-fifth of traditional models on conventional microprocessors [7], the overall AI capabilities required by modern smartphones are far more extensive. These include complex tasks like advanced image and video processing, natural language understanding, augmented reality, and sophisticated user interface interactions, which typically demand high computational throughput and flexibility that current neuromorphic chips may not fully provide as a primary processor [8] [9].
Intel's Mike Davies notes that while neuromorphic computing excels in processing signal streams in real-time with energy efficiency, there's a "limited amount of magic you can bring to a problem" at the very tiny neural network level [7]. This suggests that while BrainChip's chips are ideal for specific, power-sensitive AI functions, they might not replace the more powerful and versatile AI accelerators found in flagship smartphone SoCs that handle a wider array of demanding AI workloads [8] [9]. Therefore, while a BrainChip processor could potentially be integrated into a smartphone for specialized, ultra-low-power background tasks, it is unlikely to serve as the main AI engine for the device's broad AI functionalities [7] [10].
World's Most Authoritative Sources
[1] Neuromorphic Computing. IEEE Spectrum
[2] BrainChip Unveils Breakthrough AKD1500 Edge AI Co-Processor at Embedded World North America. BrainChip Investor Relations
[3] Akida. Open Neuromorphic
[4] IP. BrainChip
[5] Product. BrainChip
[6] BrainChip Edge AI AI Chips. Quantum Zeitgeist
[7] Neuromorphic Computing. IEEE Spectrum
[8] Mobile AI Processors. Qualcomm
[9] Apple Neural Engine. Apple Developer
[10] Chips. BrainChip
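For anyone wondering what the "event-based" processing mentioned in that answer buys you in practice, here is a rough conceptual sketch in plain NumPy. It is not Akida's actual silicon or MetaTF, just an illustration of the idea: a dense layer performs every multiply-accumulate regardless of input, while an event-driven layer only does work for the inputs that actually fire.

```python
# Conceptual sketch only (not BrainChip's implementation): why event-based
# processing saves work. A dense layer multiplies every input by every
# weight; an event-driven layer only accumulates weight rows for the inputs
# that actually produced an event (non-zero activations).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 1024, 256
weights = rng.normal(size=(n_in, n_out))

# A sparse activation vector: only ~2% of inputs produce an event.
x = np.zeros(n_in)
active = rng.choice(n_in, size=20, replace=False)
x[active] = rng.normal(size=active.size)

# Dense approach: n_in * n_out multiply-accumulates regardless of sparsity.
dense_out = x @ weights

# Event-driven approach: one weight-row accumulation per event.
event_out = np.zeros(n_out)
for i in np.flatnonzero(x):
    event_out += x[i] * weights[i]

assert np.allclose(dense_out, event_out)
print(f"dense MACs: {n_in * n_out}, event-driven MACs: {active.size * n_out}")
```

The result is identical either way; the saving is purely in how many multiply-accumulates are needed when most inputs are silent, which is why the approach suits always-on, battery-powered tasks like keyword spotting.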
Can BRN stop promising shit that never comes true? They are becoming very boring... broken promise after broken promise. Watch financials, unprecedented interest, big announcement before the end of the year.
Can we stop with the juvenile 'tick tock' comments? They're getting very boring.
I can imagine it will be hard for BrainChip to take business from Qualcomm due to their existing partnerships within the phone industry, as they do have a lot of bargaining power to keep their seat. I think BrainChip will be at the table and in phones at some point, but maybe not at Snapdragon's party; it may be a new company outside of their stranglehold.
Hi Guzzi,
There is a pervading misconception about Akida's capabilities.
This analysis misunderstands the capabilities of Akida, particularly Akida 2 and later.
For example, using Pico to demonstrate Akida's supposedly limited processing capability shows total ignorance not only of Akida's capabilities, but also of TENNs. Pico is specifically designed for ultra-low-power applications. It is the minimal configuration of Akida and uses a single NPU, whereas Akida can have hundreds of NPUs. Then, as with other processors, many Akidas can be ganged together for exceptionally heavy AI workloads.
Akida 1 is highly efficient for object classification and can be used as an AI accelerator/coprocessor. TENNs in Akida 2 adds temporal definition, enabling analysis of time-dependent signals such as video and speech.
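To show what "temporal definition" means in plain terms, here is a toy causal filter over a streaming signal. It is a generic illustration of time-dependent processing, not the TENNs algorithm itself, and the kernel values are made up.

```python
# Generic illustration (not TENNs itself): a small causal temporal kernel lets
# a network respond to *sequences* of samples, e.g. streaming audio, rather
# than treating each sample independently.
import numpy as np

def causal_temporal_conv(signal, kernel):
    """Convolve a 1-D stream with a kernel using only past samples."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), signal])  # no future leakage
    return np.array([padded[t:t + k] @ kernel[::-1] for t in range(len(signal))])

stream = np.sin(np.linspace(0, 6 * np.pi, 64))   # toy "audio" stream
change_detector = np.array([1.0, -1.0, 0.0])     # reacts to change over time
response = causal_temporal_conv(stream, change_detector)
print(response[:5])
```

A frame-by-frame classifier has no notion of "before" and "after"; a temporal kernel like this one reacts to how the signal evolves, which is the kind of capability needed for speech and video analysis.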
Among other things, Akida GenAI and Akida 3 will boost Akida's SLLM capabilities.
As to mobile phones, Qualcomm has a firm grip on that market, and they have their own in-house AI in Snapdragon.
https://www.qualcomm.com/news/relea...g-redefine-premium-performance-by-bringing-th
Qualcomm and Samsung Redefine Premium Performance by Bringing the Most Powerful Mobile Platform to the Galaxy S25 Series Globally
US2020073636A1 MULTIPLY-ACCUMULATE (MAC) OPERATIONS FOR CONVOLUTIONAL NEURAL NETWORKS
Priority: 20180831
[0032] The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104 , a DSP 106 , a connectivity block 110 , which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114 , image signal processors (ISPs) 116 , and/or navigation module 120 , which may include a global positioning system.
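For readers unfamiliar with the jargon in that patent title, a convolution layer really is just nested multiply-accumulate (MAC) operations. The sketch below shows the arithmetic only; it is purely illustrative and not anything from Qualcomm's claimed hardware design.

```python
# Minimal sketch of the multiply-accumulate (MAC) operations the patent title
# refers to: a 2-D convolution is nested MACs between an input window and a
# kernel. Illustrative only.
import numpy as np

def conv2d_mac(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[y + i, x + j] * kernel[i, j]  # one MAC
            out[y, x] = acc
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[1.0, 0.0], [0.0, -1.0]])
print(conv2d_mac(img, k))   # an NPU would execute these MACs in parallel
```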
Qualcomm has been doing a lot of work on analog:
US2024095492A1 MEMORY MANAGEMENT FOR MATHEMATICAL OPERATIONS IN COMPUTING SYSTEMS WITH HETEROGENEOUS MEMORY ARCHITECTURES 20220901
[0019] To improve the performance of operations using machine learning models, various techniques may locate computation near or with memory (e.g., co-located with memory). For example, compute-in-memory techniques may allow for data to be stored in SRAM and for analog computation to be performed in memory using modified SRAM cells. In another example, function-in-memory or processing-in-memory techniques may locate digital computation capacity near memory devices (e.g., DRAM, SRAM, MRAM, etc.) in which the weight data and data to be processed are located. In each of these techniques, however, many data transfer operations may still need to be performed to move data into and out of memory for computation (e.g., when computation is co-located with some, but not all, memory in a computing system).
US2022414443A1 COMPUTE IN MEMORY-BASED MACHINE LEARNING ACCELERATOR ARCHITECTURE 20210625
techniques for processing machine learning model data with a machine learning task accelerator, including: configuring one or more signal processing units (SPUs) of the machine learning task accelerator to process a machine learning model; providing model input data to the one or more configured SPUs; processing the model input data with the machine learning model using the one or more configured SPUs; and receiving output data from the one or more configured SPUs
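The compute-in-memory idea in the first excerpt boils down to keeping the weights stationary and moving only activations and results. The following toy model (a closure standing in for modified SRAM cells; all names are invented for illustration) shows the shape of that idea, not Qualcomm's actual architecture.

```python
# Conceptual model only: weights stay "in place" and expose just a MAC
# interface, so only activations and outputs move, instead of shipping the
# whole weight matrix to a separate compute unit for every operation.
import numpy as np

def make_in_memory_tile(weights):
    """Pretend each column of `weights` lives in a memory array that can
    accumulate along its bitlines; the tile exposes only a MAC interface."""
    stored = weights.copy()          # weights written once, never moved again
    def mac(activations):
        return activations @ stored  # analog accumulation, modelled digitally
    return mac

rng = np.random.default_rng(1)
tile = make_in_memory_tile(rng.normal(size=(128, 32)))

x = rng.normal(size=128)
y = tile(x)          # only x (128 values) and y (32 values) cross the tile boundary
print(y.shape)       # (32,)
```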
Akida has the capabilities to handle all the features listed in the first patent abstract. I don't have comparative performance figures, but my money would be on Akida to outperform Snapdragon AI.
Google AI (asked about the "I can imagine it will be hard for BrainChip to take business from Qualcomm..." post above):
While no direct partnership is announced as of December 2025, indirect enablers like Qualcomm's acquisition of Edge Impulse (which officially supports Akida model training and deployment) boost feasibility. Samsung uses both its Exynos and Qualcomm Snapdragon chips, making Akida integration viable via tools like the MetaTF framework.
Hi Manny,
We will get news soon that GenAI with LLMs will be available, likely via TENNs.
We will know soon enough.
This will be huge for our clients as AKIDA 1000, 1500, Gen 2 and PICO all get access to LLMs.
It's easy to imagine the benefits for clients because we all use LLMs on a regular basis via ChatGPT, Copilot, Gemini, etc.
LLMs will add reasoning and decision support to our already adaptable on-chip learning ability.
I asked 'Chat' to provide a brief summary of the benefits and business value of LLM access for each of our offerings. It's pretty much common sense.
PICO will go to places previously thought impossible. In a health implant with Pico/TENNs/LLM and Haila, the battery will last years; it is event-driven, so when there is an event it will explain the issue, with recommendations.
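A rough sketch of that event-driven pattern (the device, threshold and messages are all hypothetical, nothing from BrainChip or Haila): the implant idles until a reading crosses a threshold, and only then wakes the model to explain the event, which is what keeps average power so low.

```python
# Hypothetical sketch of an event-driven implant loop. All names and values
# are invented for illustration; the point is that inference only runs when
# a sensor event occurs, so the device spends most of its life idle.
import random
import time

ANOMALY_THRESHOLD = 0.8   # assumed value, purely illustrative

def read_sensor():
    return random.random()            # stand-in for a real biosignal sample

def run_tiny_model(sample):
    return f"irregular reading ({sample:.2f}); recommend clinician review"

def implant_loop(cycles=10):
    for _ in range(cycles):
        sample = read_sensor()
        if sample < ANOMALY_THRESHOLD:
            time.sleep(0.01)          # no event: stay in low-power idle
            continue
        alert = run_tiny_model(sample)  # event: wake, infer, explain
        print(alert)

implant_loop()
```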
Here’s a structured table showing how access to LLMs via TENNs benefits clients of Akida 1000, 1500, Gen 2, and PICO:
| Chip | Client Type / Use Case | LLM Benefits | Business Value |
|---|---|---|---|
| Akida 1000 | IoT device makers, industrial sensors | Local natural language processing for commands and alerts | Smarter devices without cloud dependency, reduced latency |
| Akida 1500 | Wearables, medical device manufacturers | Real‑time generative AI for health monitoring, private voice assistants | Enhanced patient privacy, longer battery life, cost‑efficient AI |
| Akida Gen 2 | Automotive, consumer electronics, industrial automation | More complex LLMs for predictive maintenance, in‑car assistants, smart appliances | Richer user experiences, scalable AI at the edge, reduced cloud costs |
| PICO | Ultra‑constrained IoT, miniaturized sensors | Ultra‑compressed LLMs for micro‑devices (e.g., smart patches, implantables) | AI in places previously impossible, extreme efficiency, new product categories |
Anyone know when our quarterly update is due?
It's due around 28 January 2026. The 4C and accompanying update and comments on the previous three-month period are generally lodged with the exchange about 24 to 31 days after the quarter finishes.
Well, dumb Albo: 1,500 immigrants a day into Australia, and we're at war with some of them. Talk about stupidity.
Not the place for these remarks.
Wow, anyone can just post a photo of Peter with some random people and make up crazy stories about having direct access to key people in the company.
DK, you are a liar.
How can I say such a thing? Well, as the organiser of the Perth-based members' social group, I have arranged many get-togethers in Perth. At these events some of the Perth-based staff (office since closed) attended, but we also had Peter VDM, Tony Dawe and Sean at numerous events, including board members and senior staff. I can also advise that Tech was present at some (I even have photos as evidence).
So yes, I am aware from direct conversations and observations of Tech's long-term engagement with Perth staff, including Peter.
I don't usually buy into arguments, but when truth is challenged, that's where I draw the line. Peter was very generous with his time, as was Sean, whenever there were meetings in Perth. I can also vouch for FF, with whom I have made contact through other, non-BRN matters. Whilst East Coast based, he is also known to Peter by his correct name.
So please don't state things as fact when they are not.
I have included a photo with Tech and Peter in it. Those that know will know.
As for Peter not caring about us: the fact that the only shares he has sold were gifted (to Dementia WA) is testament to his belief in the product, and his generous sharing of time with local holders tells a different story.
Please stick to facts, or you may find yourself on the tipping point (nod to FF) of being blocked by everyone.