Well, it looks like Grasshopper Mike is prepared to learn from the master. Since yesterday's post I've been doing a bit more research into Loihi 3 and discovered a very long and detailed blog on Loihi 3 by someone called Dr Shayan Erfanian (see link below).
The profile material I found presents Dr Erfanian as a technology strategist and software/cybersecurity entrepreneur rather than a primary neuromorphic researcher, which I think pushes the blog more toward forecasting than authoritative disclosure.
For what it's worth, Dr Erfanian says that we can expect to hear more about Loihi 3 at the Intel Developer Conference in Q2/3 2026.
In another excerpt he states: "the 2-3 year horizon following Loihi 3's launch (2028-2029) will witness substantial industry restructuring as the energy efficiency benefits of neuromorphic computing become widely acknowledged and integrated."
Dr Shayan Erfanian - Technology Mentor & Strategist
University professor and technology strategist. Teaching thousands of students and mentoring startup founders in AI, digital transformation, and business innovation. (shayanerfanian.com)
Excerpt 1
View attachment 95145
Excerpt 2
View attachment 95146
All of these ruminations about the emergence of Loihi 3 would seem to align with Intel's job ad, published 16 days ago, for an AI Software Architect – Neuromorphic Computing.
The job ad says "Now, we're entering an exciting new chapter: transforming these breakthroughs into real-world products that will power the coming era of physical AI systems beyond the reach of GPUs and mainstream AI accelerators." It also states a key responsibility is to "Integrate neuromorphic software into leading robotics, IoT, and sensing frameworks to enable broad ecosystem adoption."
Intel's Job Ad
AI Software Architect – Neuromorphic Computing
Job Details: Job Description: For nearly a decade, Intel's Neuromorphic Computing Lab, together with a global ecosystem of 250+ research groups, has explored architectures, algorithms, and software inspired by the brain's extraordinary efficiency, scalability, and adaptability. Our Loihi series of... (intel.wd1.myworkdayjobs.com)
The other thing that struck me yesterday about Mike Davies' LinkedIn post on Loihi 3 from 8 months ago was how similar it sounded to Akida 2 with regard to features such as LLMs (see below).
View attachment 95147
I realise that the addressable market is big enough for more than one player, but Intel is a behemoth of a player and therefore a pretty significant threat.
I can’t deny feeling quite disappointed that what once looked like a meaningful lead has now narrowed.
In hindsight, relying solely on an IP-only strategy appears to have seriously limited commercial traction. The pivot to physical chips, the AKD1500 (and now the AKD2500), feels, at least to me, like an acknowledgment that licensing alone wasn't able to convert at the pace required.
Having said that, I can understand the appeal of the IP-only idea because it's capital-light (no wafer commitments, inventory risk, or hardware support burden). But in practice, IP-only seems to work best when there’s already a mature ecosystem, strong reference implementations and customers ready to integrate with confidence. Neuromorphic as a new and disruptive technology would make pure IP much harder to monetise.
The key question now isn't whether the pivot was necessary but whether it came too late. The risk is that competitors with much deeper resources and broader ecosystems narrow the window even further.
In the coming months I'll be keeping a very close eye on any primary sources from Intel announcing Loihi 3 specs.
https://www.bing.com/videos/rivervi...om/channel/UC4BPiFQh-5JbyVpjZ68aXOg&FORM=VIRE
US2025371331A1 IMPLEMENTING N:M SPARSITY IN A DIGITAL COMPUTE-IN-MEMORY ACCELERATOR 20241114
To support flexible N:M sparsity patterns in a DCiM macro, the macro is subdivided into multiple sub-macros according to a partitioning factor P. Each sub-macro can support a 1:2 sparsity ratio. Leveraging the partitioned design, the sub-macros can be grouped together to support different N:M sparsity patterns. To determine the optimal N:M sparsity pattern for each layer of a neural network, an algorithm determines the value A of a sparsity ratio A/B based on the number of outliers in a layer, and the value B based on a locality measure representing the spatial distribution of those outliers. Moreover, the optimal N:M sparsity pattern aligned with the determined ratio A/B can be selected based on whether to prioritize latency, prioritize accuracy, or balance both.
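For anyone unfamiliar with what "N:M sparsity" actually means in practice, here's a minimal Python sketch: in every group of M consecutive weights, only the N largest-magnitude values are kept and the rest are zeroed. This is purely illustrative of the general N:M pruning idea; the function name and the magnitude-based selection rule are my own assumptions, not the patent's DCiM hardware implementation or its outlier-based A/B selection algorithm.

```python
import numpy as np

def prune_n_m(weights, n, m):
    """Enforce N:M sparsity: in each group of M consecutive weights,
    keep the N largest-magnitude values and zero the rest.
    Illustrative sketch only -- not the patent's DCiM design."""
    flat = weights.reshape(-1, m)                      # split into groups of M
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(flat), axis=1)[:, : m - n]
    mask = np.ones_like(flat, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)       # zero out the smallest
    return (flat * mask).reshape(weights.shape)

w = np.array([0.9, -0.1, 0.05, -0.8, 0.2, 0.3, -0.02, 0.7])
print(prune_n_m(w, n=2, m=4))
# -> [ 0.9  0.   0.  -0.8  0.   0.3  0.   0.7]  (2:4 sparsity)
```

A 2:4 pattern like the one above is what current GPU sparse tensor cores accelerate; the patent's contribution is letting the ratio vary per layer (via the A/B selection) rather than fixing it in hardware.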