BRN Discussion Ongoing

Hi Esky,

I agree, and speaking of competition...

View attachment 85704

DEEPX aims to deliver energy-efficient, low-cost AI chip solutions for edge computing​

News highlights · Wednesday 28 May 2025


Exclusive Interview with DEEPX CEO Lokwon Kim. Credit: DEEPX

At Computex in Taipei, Lokwon Kim, founder and CEO of the Korean semiconductor company DEEPX, shed light on his company's capabilities in designing high-performance AI chips that prioritize cost-effectiveness and power efficiency. DEEPX aims to complement, rather than compete with, industry giants like NVIDIA by focusing on the burgeoning "on-device" AI market.

Kim came to sign an MOU of strategic partnership with Taiwan-based AIC Inc., a storage and server solutions provider. DEEPX's advanced Neural Processing Units (NPUs) will be integrated with AIC's robust industrial-grade server platforms. This collaboration aims to deliver unprecedented computational power combined with significant energy efficiency and compact form factors tailored specifically for edge environments.

Kim formalizes strategic partnership with Taiwan-based AIC Inc. through MOU signing. Credit: DEEPX
Kim's journey into AI chip design began during his PhD program at UCLA in 2007, where he had the opportunity to work on a deep learning processor project at the IBM T.J. Watson Research Center, which he calls "the number one research organization in the world." His research contributed to the early development of NPUs, even before the term "deep learning" was widely used.
"At that time there's no word like deep learning. We just call artificial neural network. So I was really extraordinary fortunate to start earlier than others," Kim shared. This early start gave him a significant advantage.
After graduating from UCLA, Kim worked at Cisco on semiconductors for internet routers, which gave him crucial insights into the explosive growth of connected devices and the impending data deluge. He foresaw that by 2025, 70 billion devices would be connected to the internet, generating an unprecedented amount of data that humans alone could not process. Furthermore, he recognized that "40% of the internet data must be processed in a real time manner, not the data center we are waiting for." This realization fueled his vision for a low-power, high-performance, low-cost AI processing solution at the device level.
"We need a very low power, high-performance, low-cost solution to process that on device, not the data center," Kim explained that relying on the cloud makes no sense due to latency, cost, and security problems.
DEEPX CEO Lokwon Kim at ex-Apple. Credit: DEEPX
He was inspired by the late Steve Jobs when he started working for Apple in 2014. "At the entrance of the Apple campus, there is this quotation from Steve Jobs that says, 'If you do something and it turns out pretty good, then you should go do something else that's wonderful. Just figure out what's next.'" Kim decided it was time to answer that call when his internal startup idea was turned down by Apple. He set out to realize that dream by building a chip for the edge environment in his home country, South Korea, leveraging Samsung's foundry service to support his fabless venture.
Kim established DEEPX in 2018, and the company now has more than 100 employees in South Korea. It will open a branch office in Taipei in July 2025, staffed with an FA engineer and a salesperson.
"We Compensate the Giant, Not Compete"
Kim draws a parallel between the current AI chip landscape and the CPU market in the 1990s, where Intel dominated until ARM emerged with a more suitable solution for mobile devices thanks to its lower power consumption. He believes a similar dynamic is at play with AI processors: NVIDIA's GPUs, while powerful, are not ideal for small, power-constrained devices because of their "high power consumption and very hot, high cost." As Kim put it, "We cannot put GPU solution into our small electrical devices. That's the point. So I wanted to solve it."

DEEPX's strategy is to address this gap by providing AI chips specifically designed for on-device applications, thereby "compensating Nvidia's market." This approach prioritizes:
Real-time processing: "On device, zero latency. Guaranteed zero latency," Kim emphasized, highlighting the critical need for immediate responses in applications like self-driving cars and factory automation.
Privacy and security: Kim explained the risk of sending factory data to data centers for AI processing, stating, "If you send the used data to the cloud for AI processing, hackers may intercept and leak that data." On-device processing mitigates this risk.
Total Cost of Ownership (TCO): DEEPX offers a significantly more affordable long-term solution. Kim provided an example: "The cost of our chip is under $100 for 10 years."
Carbon emission reduction: Kim highlighted the environmental impact of current solutions: "Already, all the H100 GPUs in the world together consume more power than the total power consumption of France, which is one huge country." DEEPX's power-efficient chips offer a sustainable alternative.
DEEPX is already demonstrating its capabilities through collaborations with major companies. It is working with a large IT company in China on industrial monitoring and smart city projects, and partnering with a South Korean white-goods maker on functionalities like autonomous movement for robot vacuum cleaners and home security features such as detecting unauthorized individuals or falls by elderly residents. DEEPX has shared samples of its chips with over 300 global companies, achieving significant success in the pre-mass-production market. The chips are manufactured on Samsung's 5nm process.
DEEPX CEO Lokwon Kim at Computex Taipei 2025. Credit: DEEPX
DX-M2: Chip for Generative AI at the Edge
Looking ahead, DEEPX is developing the DX-M2, its next-generation chip for generative AI. The company aims to run generative AI models with over 1 billion parameters, such as Meta's LLaMA 4 and DeepSeek MoE, on-device within just 5 watts of power consumption, and is considering TSMC 3nm or Samsung 2nm for future products.
Kim believes this will address the current financial challenges of generative AI. "Generative AI is not profitable right now because OpenAI pays huge energy bills to maintain its operations," said Kim, arguing that DEEPX's solution would make generative AI accessible and affordable: with the chip costing under $50 and a module under $150, a purchase would effectively eliminate expensive data center charges for a decade.
Kim believes this on-device generative AI will be a "hugely popular product." He acknowledges the skepticism, recalling similar disbelief when DEEPX announced its first chip's low-power performance, but he remains confident in the company's ability to deliver: "Actually, when we announced our DX-M1, nobody believed... But we did it. We proved it. Now we will do it again," he affirmed.
Lokwon Kim, CEO of DEEPX, on a mission to break into tech's top 10. Credit: DEEPX
Aiming to be Top 10 Players in Tech
DEEPX's long-term vision, as outlined by Kim, follows the advice of Jensen Huang, whom he admires and considers a "rockstar." Huang's suggested phases for chip companies are:
1. Low cost and high usability: DEEPX has achieved this by creating chips that are highly useful and inexpensive.
2. Patent protection: DEEPX boasts over 300 patents for NPU technology, exceeding those of Intel, ARM, Qualcomm, and NVIDIA.
3. Ecosystem development: The next phase involves building an ecosystem with software frameworks and applications to increase the solution's value and profitability.
Kim's ultimate goal for DEEPX is to become "one of the major families in the world, within the top 10 players in the world" within the next 5 to 10 years, contributing to "the process of transformation process of human civilization, which is going to the super-intelligent."
This ambition, fueled by a decade of pioneering research and a clear strategic vision, positions DEEPX not just as a chip maker, but as a key enabler of a more intelligent, efficient, and sustainable technological future. As the demand for on-device AI continues to skyrocket, DEEPX's commitment to cost-effective, power-efficient, and high-performance solutions promises to democratize AI, making its transformative power accessible to countless devices and industries worldwide. To stay updated on the latest from DEEPX, follow the official DEEPX LinkedIn page.
The way this guy thinks is a bit of a worry (in that he's good)..

But their tech doesn't seem "that flash" on the surface..

Their DX-M1 (and future DX-M2) are on a 3nm process and draw around 5 W?..
That seems like a lot, and the roughly $50 cost per chip seems high too.. (possibly because of the process?)

AKIDA 2.0 IP in 3nm would romp all over anything they currently offer, or intend to offer, in both performance and energy usage...

But the "guy" worries me..
 
  • Like
Reactions: 7 users

Labsy

Regular
The way this guy thinks is a bit of a worry (in that he's good)..

But their tech doesn't seem "that flash" on the surface..

Their DX-M1 (and future DX-M2) are on a 3nm process and draw around 5 W?..
That seems like a lot, and the roughly $50 cost per chip seems high too.. (possibly because of the process?)

AKIDA 2.0 IP in 3nm would romp all over anything they currently offer, or intend to offer, in both performance and energy usage...

But the "guy" worries me..
I wouldn't worry. There is a whole ecosystem of people who compare power/efficiency/functionality/cost... Trust our technology. Eventually the cream will float to the surface. If he's smart, he'll integrate a RISC-V architecture with our IP. Everything else is shit, as evidenced by the actions of the European Space Agency. And they're pretty smart.
 
  • Like
  • Fire
  • Love
Reactions: 16 users
I wouldn't worry. There is a whole ecosystem of people who compare power/efficiency/functionality/cost... Trust our technology. Eventually the cream will float to the surface. If he's smart, he'll integrate a RISC-V architecture with our IP. Everything else is shit, as evidenced by the actions of the European Space Agency. And they're pretty smart.
Hey I'm not "worried" worried, if you know what I mean..
I'm not going to lose any sleep over it. 😛
 
  • Like
  • Love
  • Haha
Reactions: 8 users

manny100

Top 20
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​

 
  • Like
  • Fire
Reactions: 15 users

Rach2512

Regular
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​



Thanks for sharing @manny100. See the list of featured companies in the paid report, which costs nearly $5k.

Screenshot_20250529_155927_Samsung Internet.jpg
Screenshot_20250529_160142_Samsung Internet.jpg
 
  • Like
  • Love
  • Fire
Reactions: 15 users

itsol4605

Regular
  • Like
  • Fire
  • Love
Reactions: 12 users

Unlocking AI’s Next Wave: How Self-Improving Systems, Neuromorphic Chips, and Scientific AI are Redefining 2025​


How did I even guess there wouldn't be a mention of BRN, just mentions of others, unless I missed something. So why don't you just go and F off to the hot crapper where you belong.

1748508946521.gif
 
  • Like
  • Love
  • Fire
Reactions: 35 users
Takashi Sato, Professor in the Graduate School of Informatics at Kyoto University (see @Fullmoonfever ’s post from August 2024 👆🏻), is also co-author of another (though similarly titled) paper describing research done with Akida: “Zero-Aware Regularization for Energy-Efficient Inference on Akida Neuromorphic Processor”, which happens to be presented at ISCAS (International Symposium on Circuits and Systems) 2025 in London tomorrow.

His two co-authors are fellow researchers from Kyoto University's Graduate School of Informatics, but different ones from last year: PhD student Takehiro Habara and Associate Professor Hiromitsu Awano (who might actually be the son of Sato's August 2024 co-author Hikaru Awano: https://repository.kulib.kyoto-u.ac.jp/dspace/bitstream/2433/215689/2/djohk00613.pdf, cf. Acknowledgment page VI: “Last but not least, I am truly grateful to my parents, Hikaru Awano and Akiko Awano for their support of my long student life.”)

View attachment 85398


View attachment 85399



View attachment 85401

View attachment 85400

First author Takehiro Habara is a PhD student at the “Kyoto University School of Platforms”, an interesting interdisciplinary PhD program. The Graduate School of Informatics he is affiliated with is one of the collaborating Graduate Schools.

He says about himself he is “researching low-power AI and creating handheld AI devices” and aims to build “a platform that enables advanced AI inference with low power consumption”, “an AI system that can be used anywhere and by anyone”.


View attachment 85404


View attachment 85405

Translation courtesy of Google Lens:

View attachment 85406


View attachment 85407
View attachment 85409
Further to @Frangipani's post above and my previous post she kindly linked, I managed to find the abstract of the presso.

No full presso as yet but positive results for their study using Akida.

Information for Paper ID 1590
Paper Information:
Paper Title: Zero-Aware Regularization for Energy-Efficient Inference on Akida Neuromorphic Processor
Student Contest: Yes
Affiliation Type: Academia
Keywords: Edge AI, Energy efficiency, Neuromorphic Chips, Regularization, Spiking Neural Networks
Abstract: Spiking Neural Networks (SNNs) and their hardware accelerators have emerged as promising systems for advanced cognitive processing with low power consumption. Although the development of SNN hardware accelerators is particularly active, research on the intelligent use of these accelerators remains limited. This study focuses on the SNN accelerator Akida, a commercially available neuromorphic processor, and presents a novel training method designed to reduce inference energy by leveraging the unique architecture of the hardware. Specifically, we apply sparse constraints on neuron activations and synaptic connection weights, aiming to minimize the number of firing neurons by considering Akida's batch spike processing feature. Our proposed method was applied to a network consisting of three convolutional layers and two fully connected layers. In the MNIST image classification task, the activations became 76.1% sparser, and the weights became 22.1% sparser, resulting in a 13.8% reduction in energy consumption per image.
Track ID: 8.2
Track Name: Spiking Neural Networks and Systems
Final Decision: Accept as Poster
Session Name: Neural Learning Systems: Circuits & Systems III (Poster)
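
For intuition, the "sparse constraints on neuron activations and synaptic connection weights" in the abstract are, in spirit, extra penalty terms added to the training loss. A rough PyTorch sketch of that flavour of penalty (my own guess for illustration, not the paper's actual code; the names and coefficients are made up):

import torch

# Toy sketch: L1-style penalties push activations and weights toward zero,
# so fewer neurons fire at inference time on hardware like Akida.
# act_coeff and weight_coeff are illustrative guesses, not values from the paper.
def zero_aware_loss(task_loss, activations, weights,
                    act_coeff=1e-4, weight_coeff=1e-5):
    act_penalty = sum(a.abs().mean() for a in activations)
    weight_penalty = sum(w.abs().mean() for w in weights)
    return task_loss + act_coeff * act_penalty + weight_coeff * weight_penalty

# e.g. loss = zero_aware_loss(criterion(out, y), cached_activations, list(model.parameters()))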


 
  • Like
  • Fire
  • Love
Reactions: 25 users

MDhere

Top 20
Anastasi Nvidia Huawei Spray Tan

Looks like Anastasi was standing downwind of Donald’s morning application.



China's HUGE AI Chip Breakthrough: NVIDIA is out?

Bravo, are you up to a run?
 
  • Haha
  • Like
Reactions: 5 users

Samus

Top 20
The forum has been infiltrated by malicious bots over at the AVZ threads!
Entire threads have been wiped, and long-term, paid-up members are being targeted and having their posts wiped by nefarious actors!

Does anyone personally know @zeeb0t to get in touch with them??

It's fucking crazy!

Screen shot this:
🤯🤯🤯
auto admin is overloaded and fucked!
1000013310.jpg


My post will likely be deleted as all my posts have been since Sunday.
It sometimes takes the fucker a little while to see them on new threads.
 

Frangipani

Top 20
Arijit Mukherjee, Akida-experienced Principal Scientist from TCS Research, will be leading two workshop ‘industry sessions’ during next month’s week-long Summer School & Advanced Training Programme SENSE (Smart Electronics and Next-Generation Systems Engineering) organised by the Defence Institute of Advanced Technology (DIAT) in Pune, India: “Intro talk: Smart at the Edge” as well as “Beyond TinyML - Neuromorphic Computing: Brains, Machines, and the Story of Spikes”.

While we know that TCS Research is not exclusively friends with us, when it comes to neuromorphic computing, I trust BrainChip will get a very favourable mention during those workshop sessions. 😊



EAFFC650-1A90-408C-AF2F-163F10C4B2B7.jpeg


5832738D-DB17-4C8D-B7B2-7CF6E0E4DEA6.jpeg
13C50B9A-4314-49E2-8C3D-AF79EF6CF6FE.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 21 users

Samus

Top 20
The forum has been infiltrated by malicious bots over at the AVZ threads!
Entire threads have been wiped, and long-term, paid-up members are being targeted and having their posts wiped by nefarious actors!

Does anyone personally know @zeeb0t to get in touch with them??

It's fucking crazy!

Screen shot this:
🤯🤯🤯
auto admin is overloaded and fucked!
View attachment 85747

My post will likely be deleted as all my posts have been since Sunday.
It sometimes takes the fucker a little while to see them on new threads.
Aren't you guys tech heads??

What in the actual fuck is going on with these forums????

Not concerned that some troll with bots can fuck the entire thing? For paid-up members.

Many of us cancelled membership btw - no confirmation that payments have been cancelled.

The place is fucked, owned by fuck knows who and moderated by nobody.
With direct debit payments going to fuck knows where.

Support is uncontactable.
 
  • Like
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Growth Opportunities in Neuromorphic Computing 2025-2030 |
Google or ask AI whether BRN is a leader in Neuromorphic AI at the Edge.
Then read the link.

"Growth Opportunities in Neuromorphic Computing 2025-2030 | Neuromorphic Technology Poised for Hyper-Growth as Market Surges Over 45x by 2030​

Strategic Investments and R&D Fuel the Next Wave of Growth in Neuromorphic Computing"​


45 x 20 cents equals $9.00!

Well, if that’s the case Manny, we’ll all be partying like it’s 1999 in our 2030 bodies. 👯💃🕺
 
  • Like
  • Haha
  • Fire
Reactions: 19 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Aren't you guys tech heads??

What in the actual fuck is going on with these forums????

Not concerned that some troll with bots can fuck the entire thing? For paid-up members.

Many of us cancelled membership btw - no confirmation that payments have been cancelled.

The place is fucked, owned by fuck knows who and moderated by nobody.
With direct debit payments going to fuck knows where.

Support is uncontactable.

Try replying to the Admin post on this thread on Tuesday at 5.01 pm.
 
  • Like
Reactions: 2 users

Rach2512

Regular
  • Like
  • Fire
  • Love
Reactions: 5 users
Hi Bravo, thanks for the video. If it's wearables, then we are streets ahead of anything else on the market. It makes sense for Nanose to use AKIDA for handheld devices, as AKIDA is a no-brainer for wearables when they move into that area.
So far, they say dozens of diseases can be detected..
I can hear your friends at hot crapper calling you.
 
Last edited:
  • Haha
  • Thinking
  • Love
Reactions: 5 users

manny100

Top 20
I can hear your friends at hot crapper calling you.
I have them on ignore right now.
I had my fun with them, but it's worn off now.
The agenda-driven downrampers never post anything of substance.
 
  • Love
  • Like
  • Fire
Reactions: 7 users

manny100

Top 20
I can hear your friends at hot crapper calling you.
I have them on ignore. The fun of teasing them until they go into mindless rage rants has worn off.
Angry ants have no credibility.
 
  • Like
  • Haha
  • Love
Reactions: 4 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

Why Grok 3’s 1.8 Trillion Parameters Are Pointless Without Neuromorphic Chips: A 2025 Blueprint​

R. Thompson (PhD)

5 min read · Apr 19, 2025

What If We Didn’t Need GPUs to Power the Future of AI?​


2025: The Year AI Hit a Wall 🧠⚡🔥

The generative AI wave, once unstoppable, is now gridlocked by a resource bottleneck. GPU prices have surged. Hardware supply chains are fragile. Electricity consumption is skyrocketing. AI’s relentless progress is now threatened by infrastructure failure.
• TSMC’s January earthquake crippled global GPU production
• Nvidia H100s are priced at $30,000–$40,000, as much as 1,000% above cost
• Training Grok 3 demands 10²⁴ FLOPs and 100,000 GPUs
• Inference costs for top-tier models now hit $1,000/query
• Data centers draw more power than small nations
This isn’t just a temporary setback. It is a foundational reckoning with how we’ve built and scaled machine learning. As the global AI industry races to meet demand, it now confronts its own unsustainable fuel source: the GPU.

GROK 3: AI’s Biggest Brain with an Unquenchable Thirst​

Launched by xAI in February 2025, Grok 3 represents one of the most ambitious neural architectures ever built.
• A 1.8 trillion-parameter model, dwarfing predecessors
• Trained on Colossus — a 100,000-GPU supercomputer
• Achieves 15–20% performance gains over GPT-4o in reasoning tasks
• Integrates advanced tooling like Think Mode, DeepSearch, and self-correction modules
Yet, Grok 3’s superhuman intelligence is tethered to an aging hardware paradigm. Each inference request draws extraordinary amounts of energy and memory bandwidth. What if that limitation wasn’t necessary?
“Grok 3 is brilliant — but it’s burning the planet. Neuromorphic chips could be the brain transplant it desperately needs.” — Dr. Elena Voss, Stanford

Neuromorphic Chips: Thinking Like the Brain​

Neuromorphic hardware brings a radically different philosophy to computing. Inspired by the brain, it forgoes synchronous operations and instead embraces sparse, event-driven logic.
• Spiking neural networks (SNNs) encode data as temporal spike trains
• Processing is triggered by events, not clock cycles
• Memory and compute are colocated — eliminating latency from data movement
• Power usage is significantly reduced, often by orders of magnitude
This shift makes neuromorphic systems well-suited to inference at the edge, especially in environments constrained by energy or space.
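
To make "event-driven, not clock-driven" concrete, here is a minimal snnTorch sketch that rate-codes an input into a spike train and steps a leaky integrate-and-fire neuron over it (the library calls are real snnTorch APIs; the shapes and constants are purely illustrative):

import torch
import snntorch as snn
from snntorch import spikegen

x = torch.rand(4)                          # four input intensities in [0, 1]
spikes = spikegen.rate(x, num_steps=50)    # (50, 4) binary spike train; firing rate tracks intensity

lif = snn.Leaky(beta=0.9, threshold=1.0)   # leaky integrate-and-fire neuron
mem = lif.init_leaky()                     # initial membrane potential
for t in range(50):
    # The membrane integrates incoming spikes and fires only on threshold crossings.
    spk, mem = lif(spikes[t], mem)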
Leading architectures include:
• Intel Loihi 2 — Hala Point scales to 1.15 billion neurons
• IBM NorthPole — tailored for ultra-low-power deployment
• BrainChip Akida — commercially deployed in compact vision/audio systems
What was once academic curiosity is now enterprise-grade silicon.

The Synergy: Grok 3 + Neuromorphic Hardware​

Transforming Grok 3 into a brain-compatible model requires strategic rewiring.
• Use GPUs for training and neuromorphic systems for inference
• Convert traditional ANN layers into SNNs using rate coding techniques
• Redesign the transformer’s attention layers to operate using spikes instead of matrices
Below is a simplified code example to demonstrate ANN-to-SNN conversion:
import torch
import snntorch as snn
from snntorch import spikegen

def ann_to_snn_attention(ann_weights, input_tokens, timesteps=100):
    # Reuse the ANN's attention projection matrices (the V projection is left out of this sketch).
    query, key = ann_weights['Q'], ann_weights['K']
    # Rate-code the input tokens into a binary spike train: (timesteps, tokens, d_model).
    spike_inputs = spikegen.rate(input_tokens, num_steps=timesteps)
    lif_neurons = snn.Leaky(beta=0.9, threshold=1.0)
    mem = lif_neurons.init_leaky()  # initial membrane potential
    spike_outputs = []
    for t in range(timesteps):
        spike_query = spike_inputs[t] @ query
        spike_key = spike_inputs[t] @ key
        # Scaled dot-product attention scores, computed on spiking activity.
        attention_scores = (spike_query @ spike_key.T) / key.shape[-1] ** 0.5
        spike_out, mem = lif_neurons(attention_scores, mem)
        spike_outputs.append(spike_out)
    # The mean firing rate over time approximates the ANN attention map.
    return torch.stack(spike_outputs).mean(dim=0)
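
A toy invocation of the sketch above might look like this (the shapes are made up purely for illustration):

d_model = 8
tokens = torch.rand(4, d_model)   # 4 tokens, values in [0, 1] so rate coding is valid
weights = {k: torch.rand(d_model, d_model) for k in ('Q', 'K', 'V')}
attn_map = ann_to_snn_attention(weights, tokens, timesteps=50)
print(attn_map.shape)             # torch.Size([4, 4]): time-averaged spiking attention map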

Projected Performance Metrics​


If scaled and refined, neuromorphic Grok could outperform conventional setups in energy efficiency, speed, and inference cost — particularly in large-scale, low-latency settings.

Real-World Use Case: AI-Powered Clinics in Sub-Saharan Africa​

Imagine Grok 3 Mini — a distilled 50B parameter version — deployed on neuromorphic hardware in community hospitals:
• Low-cost edge inference for X-ray scans and lab diagnostics
• Solar-compatible deployment with minimal power draw
• Offline reasoning through embedded DeepSearch-like retrieval modules
• Massive cost reduction: from $1,000 per inference to under $10

Now layer in mobile deployment: neuromorphic devices in ambulances, refugee clinics, or rural schools. This model changes access to intelligence in places where GPUs will never go.

Overcoming the Common Hurdles​

Knowledge Gap
Most AI engineers lack familiarity with SNNs. Open frameworks like snnTorch and Intel’s Lava are bridging the gap.
Accuracy Trade-offs
ANN-to-SNN transformation often leads to accuracy drops. Solutions like surrogate gradients and spike-based pretraining are emerging.
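As a concrete illustration, snnTorch exposes surrogate gradients directly; a minimal sketch of swapping the non-differentiable spike step for a smooth stand-in during backprop (the slope value is an arbitrary choice):

import snntorch as snn
from snntorch import surrogate

# The spike (Heaviside) step has zero gradient almost everywhere, so backprop stalls.
# A fast-sigmoid surrogate stands in for it on the backward pass, letting a
# converted SNN be fine-tuned to recover accuracy lost in ANN-to-SNN conversion.
lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid(slope=25))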
Hardware Accessibility
Chips like Akida and Loihi are limited today, but joint development programs are underway to commercialize production-grade boards.

Rewiring the Future of AI​

We often assume GPUs are the only viable way forward. But that assumption is crumbling. As AI scales toward trillion-parameter architectures, we must ask:
• Must we rely on energy-hungry matrix multiplications?
• Can intelligence evolve outside the cloud?
xAI and others can:
• Build custom neuromorphic accelerators for key Grok 3 modules
• Train SNN-first models on sparse reasoning tasks
• Embrace hybrid architectures that balance power and scalability
Neuromorphic Grok 3 won’t just save energy. It could redefine where and how intelligence lives.

A Crossroads for Computing​

The 2025 GPU crisis marks a civilizational inflection point. Clinging to the von Neumann architecture risks turning AI into a gated technology, hoarded by a handful of cloud monopolies.
Neuromorphic systems could democratize access:
• Empowering small labs to deploy world-class inference
• Enabling cities to host edge-AI environments for traffic, health, and environment
• Supporting educational tools that run offline, even without a data center
This reimagining doesn’t need to wait. Neuromorphic hardware is here. The challenge is will.

 
  • Like
  • Fire
  • Love
Reactions: 35 users