[…] Most of you will be familiar with the term Brain-Computer-Interface (BCI) - sometimes also called Brain-Machine-Interface (BMI) or Mind-Machine-Interface (MMI) - but may not be fully aware of what it actually means.
In a December 2015 publication, Christoph Guger (that’s, by the way, where the G in g.tec comes from - it stands for Guger Technologies) and two of his co-authors described a BCI as follows:
“A BCI is a device that reads voluntary changes in brain activity, then translates these signals into a message or command in real-time (…) Most BCIs rely on the electroencephalogram (EEG). These signals (also called “brainwaves”) can be detected with electrodes on the surface of the head. Thus, these “noninvasive” sensors can detect brain activity with very little preparation. Some BCIs are “invasive”, meaning that they require neurosurgery to implant sensors. These BCIs can provide a much more detailed picture of brain activity, which can facilitate prosthetic applications or surgery for epilepsy and tumor removal.”
The implants used in clinical trials by Neuralink (founded in 2016 by Elon Musk and a team of eight scientists and engineers) are the most well-known examples of invasive BCIs. And while we BRN shareholders tend to roll our eyes when our company’s silicon gets confused with Musk’s “brain chips”, there is no doubt that BrainChip’s technology is also being evaluated in this field of BCIs.
In 2020, g.tec medical engineering introduced the BCI & Neurotechnology Spring School, a free ten-day virtual event - now held annually - which has become the world’s largest neurotech event, orchestrated from a small town in Austria called Schiedlberg. Participants can access 140 hours of cutting-edge education and even earn 14 ECTS* credits and an official exam certificate at no cost.
*ECTS = European Credit Transfer and Accumulation System
I noticed that one of last year’s 82,000 (!) participants was Temi Mohandespour, who used to work as a research scientist at BrainChip’s now closed Perth office from March 2021 until January 2025. She has since moved to Berlin and now works for Data4life, a non-profit organisation whose mission is to digitalise health data for research (
www.data4life.care/en/).
https://www.linkedin.com/posts/temi-mohandespour_here-is-a-big-thank-you-to-gtec-medical-activity-7193097495894208513-9euk?
View attachment 92732
Several of her colleagues at BrainChip, including our CTO, liked her above “thank you” post.
While I wasn’t able to find out anything concrete about what Temi Mohandespour may have been working on relating to BCIs during her last nine months at BrainChip post-Spring School, I happened to discover the LinkedIn profile of someone else who worked on not just one, but two BCI projects utilising Akida - although not as an employee of BrainChip:
https://www.linkedin.com/in/hammouamri-ilyass/
View attachment 92729
Ilyass Hammouamri, who recently defended his PhD thesis at the Université de Toulouse (
https://doctorat.univ-toulouse.fr/as/ed/cv.pl?mat=140961&site=EDT)
and whose PhD supervisor was Timothée Masquelier (one of the four co-inventors of the JAST patent that BrainChip first licensed and later acquired),
was a part-time research engineer at Neurobus between September 2024 and April 2025.
It was during that time - still under Gregor Lenz as CTO - that he “developed a Proof of Concept solution for motor imagery classification from a Dry EEG Headset using a BrainChip Akida neuromorphic chip for robotic arm control”.
“Motor imagery (MI) is a mental process in which a subject vividly imagines performing a movement without any actual physical execution. MI is widely used in BCI systems to enable control of external devices, such as a cursor on a screen or a robotic arm, through brain activity.”
https://docs.medusabci.com/kernel/1.4/tutorials.php (by the Biomedical Engineering Group at the University of Valladolid, Spain)
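For those curious what “motor imagery classification” involves under the hood, here is a purely illustrative toy sketch of my own (not based on any Neurobus, BrainChip or MEDUSA code, and all numbers are made up): imagining a left- vs right-hand movement attenuates the 8-12 Hz mu rhythm over the opposite motor cortex, so a minimal classifier can simply compare mu-band power between the C3 (left hemisphere) and C4 (right hemisphere) electrode sites.

```python
import math
import random

random.seed(42)

FS = 250  # samples per second, a typical rate for dry EEG headsets

def band_power(signal, fs, f_lo, f_hi):
    """Crude power in the [f_lo, f_hi] Hz band via a direct DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def fake_trial(imagined_left):
    """Synthetic 1-second two-channel (C3, C4) trial. Motor imagery
    attenuates the ~10 Hz mu rhythm over the contralateral hemisphere."""
    mu_c3 = 1.0 if imagined_left else 0.2  # right-hand imagery suppresses C3
    mu_c4 = 0.2 if imagined_left else 1.0  # left-hand imagery suppresses C4
    trial = []
    for gain in (mu_c3, mu_c4):
        sig = [gain * math.sin(2 * math.pi * 10 * t / FS) + random.gauss(0, 0.3)
               for t in range(FS)]
        trial.append(sig)
    return trial

def classify(trial):
    """The hemisphere with the *weaker* mu rhythm is the active one."""
    p_c3 = band_power(trial[0], FS, 8, 12)
    p_c4 = band_power(trial[1], FS, 8, 12)
    return "left" if p_c4 < p_c3 else "right"

hits = sum(classify(fake_trial(lab)) == ("left" if lab else "right")
           for lab in [True, False] * 10)
print(f"accuracy: {hits}/20")
```

A real system would of course use trained classifiers on noisy multi-channel recordings - and in the Akida case, a spiking network running the inference on-chip - but the band-power comparison above is the textbook starting point for mu-rhythm-based motor imagery BCIs.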
I wonder whether this project may have been the continuation of the BMI* project that Neurobus’s first employee, Ljubica Cimeša, had developed in collaboration with Airbus, which also used EEG signals for robotic control:
*The terms Brain-Computer-Interface (BCI) and Brain-Machine-Interface (BMI) are often used interchangeably.
https://www.linkedin.com/in/cimesa-ljubica/
View attachment 92736
View attachment 92737
But his part-time contract job with Neurobus was not the first time Ilyass Hammouamri had been involved in BCI research using Akida: During his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier‘s NeuroAI lab from September 2021 to February 2025, he “worked on a joint project between different labs and BrainChip: Decoding speech from ECoG brain signals”.
Which means there must have been at least one more lab involved in that project, possibly more.
ECoG stands for electrocorticography. In contrast to EEG, it involves recording electrical activity directly from the surface of the brain and thus requires a craniotomy.
en.wikipedia.org
View attachment 92731
Here is a good illustration I found online, which happens to be from a video by g.tec medical engineering:
View attachment 92730
I have no idea whether or not any of g.tec medical engineering’s products (such as wearable EEG headsets, biosignal amplifiers) were actually used for either of the two BCI projects that Ilyass Hammouamri was involved in.
What I can tell you, though, is that they list Airbus under “Happy Customers” alongside quite a few other interesting names (
https://www.gtec.at/).
Stumbled across more info today about the “joint project between different labs and BrainChip” (“Decoding speech from ECoG brain signals”) that Ilyass Hammouamri
was involved in during his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier’s NeuroAI lab from September 2021 to February 2025:
The ANR (Agence Nationale de la Recherche / French National Research Agency) BRAIN-Net project started in December 2020 and ran for four years, which means it ended about a year ago.
It was coordinated by Blaise Yvert from the Grenoble Institute of Neuroscience, whose goal is to “restore speech to people who are paralyzed and who have lost their vocal abilities. Along with his team at the Grenoble Institute of Neuroscience, he is developing a system capable of decoding the brain signals associated with speech, so that it can be produced by an external device. This is referred to as a brain-computer interface.” (quoted from the article on Blaise Yvert below)
The Bioelectronics research group of the IMS Laboratory recently published two articles in the prestigious science journals Nature Electronics and Nature Communications: 🔹 « A ferroelectric–memristor memory for both training and inference », in Nature Electronics. This paper reports a unified...
www.linkedin.com
View attachment 93718
View attachment 93719
FYI: The linked article co-authored by researchers from France and Japan and published in Nature Communications -
https://www.nature.com/articles/s41467-025-64231-2 - does not mention BrainChip or Akida.
Large-scale neural recordings using high-density electrode arrays are key to understanding brain dynamics and designing brain-computer interfaces for rehabilitation. These devices produce large data flows that raise new challenges to extract relevant information in real time with limited power...
anr.fr
View attachment 93720
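To get an intuition for why spiking neural networks are attractive for this kind of real-time, power-constrained processing of brain signals, here is a minimal toy sketch of my own (not from the BRAIN-Net project and not BrainChip code): delta-modulation encoding turns a densely sampled signal into sparse events, and a leaky integrate-and-fire (LIF) neuron then only has to do work when events arrive.

```python
import math

def delta_encode(samples, threshold=0.1):
    """Delta modulation: emit a +1/-1 spike only when the signal has moved
    by more than `threshold` since the last emitted level. Flat stretches
    produce no events at all - that sparsity is where the power savings
    of event-driven hardware come from."""
    spikes = []  # list of (time step, polarity)
    ref = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        while x - ref > threshold:
            ref += threshold
            spikes.append((i, +1))
        while ref - x > threshold:
            ref -= threshold
            spikes.append((i, -1))
    return spikes

def lif_neuron(spikes, n_steps, weight=0.5, leak=0.9, v_thresh=1.0):
    """Leaky integrate-and-fire: the membrane potential decays each step,
    jumps on incoming spikes, and fires (then resets) above threshold."""
    drive = {}
    for i, p in spikes:
        drive[i] = drive.get(i, 0) + p
    v = 0.0
    fired_at = []
    for t in range(n_steps):
        v = leak * v + weight * drive.get(t, 0.0)
        if v >= v_thresh:
            fired_at.append(t)
            v = 0.0
    return fired_at

# A 5 Hz sine sampled at 100 Hz: events cluster on the steep edges.
sig = [math.sin(2 * math.pi * 5 * t / 100) for t in range(100)]
spikes = delta_encode(sig)
fired = lif_neuron(spikes, len(sig))
print(f"{len(spikes)} input events, neuron fired {len(fired)} times")
```

A perfectly flat input generates zero events, so downstream computation scales with signal activity rather than with the raw sample rate - which, as I understand it, is the core efficiency argument for processing high-density neural recordings on neuromorphic hardware like Akida.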
Here is an interesting six-month old article about the research conducted by BRAIN-Net project coordinator Blaise Yvert, Inserm* Research Director and head of the
Neurotechnologies and network dynamics team at Grenoble Institute of Neuroscience:
*INSERM (Institut National de la Santé et de la Recherche Médicale) is the French National Institute of Health and Medical Research.
At the Grenoble Institute of Neuroscience, Blaise Yvert is developing a device capable of decoding the brain signals associated with speech, so that speech can be produced by an external device. This is referred to as a brain-machine interface.
www.inserm.fr
- Blaise Yvert: Getting the Brain to Talk
- PUBLISHED ON: 10/06/2025
- READING TIME: 5 MIN
- NEWS
Blaise Yvert has one goal – restore speech to people who are paralyzed and who have lost their vocal abilities. Along with his team at the Grenoble Institute of Neuroscience, he is developing a system capable of decoding the brain signals associated with speech, so that it can be produced by an external device. This is referred to as a brain-computer interface.
Blaise Yvert is an Inserm Research Director and head of the Neurotechnologies and network dynamics team at Grenoble Institute of Neuroscience (unit 1216 Inserm/Grenoble-Alpes University) in Grenoble.
Will Yvert’s research restore speech to those who have lost it? This is what the Inserm Research Director leading the Neurotechnologies and Network Dynamics team at the Grenoble Institute of Neuroscience is hoping. For the past ten years, he has been working on the development of a brain-computer interface to decode the brain signals of speech and reproduce the words of people who are unable to utter them. His project was recently selected as part of the Impact Santé program, funded by the France 2030 investment plan and coordinated by Inserm. Brain Implant, the scientific consortium he has formed for this project, has received three million euros to develop a new brain implant that will improve the accuracy of speech reconstruction based on brain activity.
From engineer to researcher
This desire dates back to his engineering studies at École centrale de Lyon and Cornell University in the US. « I was drawn to research and wanted to develop health technologies, especially for people with disabilities. I knew several people with disabilities when I was younger and it’s a cause I hold dear », explains Yvert.
Once he graduated in 1993, the young engineer was hired by an Inserm human electrophysiology research unit in Lyon. « The team sought to mathematically locate the brain regions responsible for the signals recorded on the surface of the head. This was something I was particularly interested in », he recalls. During two postdocs, one in Finland and the other in Germany, the researcher used this approach to identify the auditory areas. But he realized that, even for very simple sounds, the cortex activation pattern is too complex to be finely understood with non-invasive recordings. « So I thought: let’s develop more sophisticated systems, for a more precise look at what happens in the neural networks. »
Towards a new technology
With this goal in mind, and after obtaining a research fellowship at Inserm, Yvert joined in 2003 a research unit in Bordeaux that focuses on neural networks in the developing spinal cord. There, he initiated a partnership with the French Alternative Energies and Atomic Energy Commission (CEA) in Grenoble and the ESIEE engineering school in Paris, which has academic laboratories, to develop microelectrode networks to enable detailed exploration of neural tissue activity in vitro. An initial prototype was finalized three years later. Through multiple collaborations, he continued to improve this technology, particularly with new materials to increase the performance of electrodes (platinum, diamond and, more recently, graphene).
Then, Yvert wanted to put his research to work for patients. With this project in mind, he spent a year at Brown University in the US, in a research unit that led the way in implantable brain-computer interfaces in humans. Back in France, he joined the Grenoble Institute of Neuroscience and began his project on decoding brain signals of speech. In particular, he collaborated with the Clinatec institute created by the CEA, « a unique environment for creating new rehabilitation strategies for people with paralysis », he believes.
The interface to which Yvert devoted his work is aimed particularly at people with “locked-in syndrome” (LIS). Although they cannot move or speak due to complete paralysis, their cognitive faculties are intact. “The cortical activities produced when they want to say something are always present, so if we can decode them with our implants, we can reproduce what they want to say”, hopes the researcher. An initial clinical trial is expected to start in 2025, “if the regulatory procedures go well”, he warns. This trial will include people with LIS who will be equipped with an implant developed by Clinatec, positioned on the surface of the brain. “This device provides signals that are highly stable over the long term, with wireless transmission through the skin”, he explains.
Pursue and accelerate development
At the same time, the scientist does not forget the fundamental aspect, which has always been a source of motivation in his work. « For example, we’re exploring the brain activity of a new animal model that is very vocal – the pig. This model allows us to test new, more efficient types of implants for potential future use in humans. It will also be possible to see whether there are similarities between the data collected in animals and humans ».
In order to finely decode brain activity, he believes that the devices will still need to be improved, by increasing the number of electrodes, and by innovating in materials and integrated electronics. This is the goal of the Brain Implant project. « We want to create a technological building block that would serve both basic research and to develop brain-computer interfaces for clinical use in different indications: to restore speech or other motor functions », he explains.
These developments and their challenges for people and society are inevitably accompanied by ethical questions around which Yvert has set up processes of reflection, conducted in collaboration with philosophers and patient organizations.
And as if all of this were not enough,
the researcher has also led, since early 2025, the Grenoble Initiative in Medical Devices (LabEx GIMeD), a research partnership on medical devices. « The aim is to bring together multidisciplinary units that develop health technologies, including teams specialized in the humanities and social sciences, to reflect on the implications of these technologies. New projects are expected to emerge from this ecosystem », he outlines for the future.
Looking back, Yvert notes that risk-taking during his career has been successful. “Going from non-invasive brain recording in humans to the technological development of in vitro systems took me out of my comfort zone. But in the end, this leap was essential in preparing for the development of an interface that, I hope, will one day be able to provide real services to patients”, he concluded.
Back in November, I shared my discovery about BCI research in France that had utilised Akida. The LinkedIn profile of a researcher named Ilyass Hammouamri, which I had stumbled across, revealed two completely different projects he had been involved in.
During his time at CNRS (Centre national de la recherche scientifique) CerCo (Centre de Recherche Cerveau et Cognition) in Toulouse, where he was a PhD candidate in Timothée Masquelier‘s NeuroAI lab from September 2021 to February 2025, Ilyass Hammouamri “worked on a joint project between different labs and BrainChip: Decoding speech from ECoG brain signals”.
A few weeks later, I found out more about this project, namely that BrainChip was one of the consortium partners of “BRAIN-Net: Spiking Neural Networks for Real-Time Processing of Brain Signals” (scheduled project duration: December 2020 - November 2024), which was coordinated by Blaise Yvert from Brain Tech Lab at the Grenoble Institute of Neuroscience. Besides the above-mentioned CerCo, other consortium partners included INSERM and IMS Laboratory.
The second Akida BCI research project Ilyass Hammouamri had been involved in took place while he was working for BrainChip partner Neurobus as a part-time research engineer between September 2024 and April 2025. According to his LinkedIn profile, he “developed a Proof of Concept solution for motor imagery classification from a Dry EEG Headset using a BrainChip Akida neuromorphic chip for robotic arm control”.
It doesn’t come as a surprise, then, that BrainChip has now teamed up with a company developing medical solutions based on advanced BCI technology.
We found out about this earlier this month, when
@ChrisBRN spotted two new logos that had appeared on the BrainChip Partners webpage overnight - one of them is the logo of Korea-based BCI technology company Gbrain (
https://www.gbrainlife.com/).
Earlier today, James Shields liked one of Gbrain’s latest LinkedIn posts:
Gbrain In-Depth: Pioneering the Future of BCI 🧠✨ We are proud to share a special three-part feature series that takes a deep dive into Gbrain’s innovative BCI (Brain-Computer Interface) technology and our strategic vision for the global market. Often referred to as the "Neuralink of Korea,"...
www.linkedin.com
Although we haven’t yet heard anything official from either company, the fact that BrainChip lists Gbrain under OEM Integration Partners (“OEM integration partners produce board and box level product solutions based on Akida silicon implementations that are suitable for end markets.”) suggests to me that our company is interested in the “contracted clinical-grade electrode manufacturing” services Gbrain offers alongside developing its own products, such as Phin Array™, an ECoG cortical electrode, and Phin Stim™, a next-generation wireless cortical implant for neurostimulation initially targeting patients with Parkinson’s disease. Both products are still in clinical trials, though, and Gbrain’s promotional material carries a disclaimer that Phin Array and Phin Stim are currently “intended for investigational use only and have not been cleared by the FDA for the treatment of neurological disorders”.
Here is Gbrain’s company profile on the MEDICA website, the International Trade Fair for Medical Technology and Healthcare:
Visit Gbrain Inc. from Incheon at MEDICA 2025 in Düsseldorf in Hall 15 / E48
www.medica-tradefair.com
And here is a photo Gbrain shared on LinkedIn that shows a poster at their CES 2026 booth:
🚀 CES 2026 Day 1 – Booth Officially Open! Gbrain has officially kicked off CES 2026 with our booth open on Day 1! 📍 LVCC North Hall, Booth #9013 (Incheon IFEZ) We’re excited to showcase our latest innovations and vision in Digital Health on a global stage once again. We’re also incredibly...
www.linkedin.com
CES® is the most powerful tech event in the world — the proving ground for breakthrough technologies and global innovators. This is where brands get business done, meet new partners and where the industry’s sharpest minds take the stage to unveil their latest releases and boldest breakthroughs...
www.ces.tech
Phin Stim™ for Parkinson’s Disease
Gbrain
Phin Stim™ is a next-generation, fully implantable wireless neurostimulation system for treating Parkinson’s disease. It offers a safer, less invasive alternative when medications fail or patients are reluctant to undergo deep brain procedures. Unlike traditional deep brain stimulation (DBS), Phin Stim™ uses ultra-thin, flexible electrodes to stimulate the motor cortex through a minimally invasive surgical approach. The system is easy to install, remove, or replace, and provides precise surface stimulation. Phin Stim™ continuously monitors brain signals and delivers AI-powered adaptive stimulation to reduce tremors and slow movements in real time. It uses wireless power and data transmission for safe, daily use. Beyond symptom relief, it supports long-term neuroplasticity, helping the brain rewire itself to restore motor function. By combining bioelectronics, intelligent software, and digital therapeutics, Phin Stim™ delivers smarter, more responsive care—offering hope to patients seeking effective alternatives to traditional brain surgery.
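The “monitors brain signals and delivers adaptive stimulation in real time” claim describes a closed-loop (adaptive) neurostimulation scheme. Purely to illustrate the general idea - this is my own toy sketch, not Gbrain’s algorithm, and the biomarker, thresholds and signal are all invented - the simplest form is: estimate a symptom biomarker over a sliding window and switch stimulation on only while it exceeds a threshold.

```python
import math
import random

random.seed(1)

def biomarker(window):
    """Mean squared amplitude of a window - a crude stand-in for the
    elevated oscillatory power associated with motor symptoms."""
    return sum(x * x for x in window) / len(window)

def adaptive_controller(signal, window_len=25, threshold=0.5):
    """Toy closed-loop rule: slide a window over the recorded signal and
    enable stimulation only while the biomarker exceeds the threshold."""
    stim_on = []
    for start in range(0, len(signal) - window_len + 1, window_len):
        window = signal[start:start + window_len]
        stim_on.append(biomarker(window) > threshold)
    return stim_on

# Synthetic recording: quiet baseline, then a burst of high-amplitude
# 20 Hz oscillation (the "symptom"), then quiet baseline again.
quiet = [random.gauss(0, 0.2) for _ in range(100)]
burst = [1.5 * math.sin(2 * math.pi * 20 * t / 250) + random.gauss(0, 0.2)
         for t in range(100)]
recording = quiet + burst + quiet

schedule = adaptive_controller(recording)
print("stimulation schedule:", schedule)
```

The appeal over always-on stimulation is that the device only intervenes when the biomarker says it should, which reduces side effects and power draw - and an always-on, low-power inference engine is exactly the niche neuromorphic silicon is pitched at.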
Short CES 2026 interview with Gbrain Chief Device Officer Sung Q Lee:
On their website, Gbrain also provide some information on future projects:
- Next Generation Electrode Array: “Syringe-injectable surface multi-modal sensor array”
- Next Generation Wireless System: “Miniaturized wireless body-coupled communication”
- New Materials for Neural Implants: “Graphene-coated electrode channels”
And if you’ve come this far, you might as well also read the following article:
I spoke with neuroscientists from an Incheon, Korea-based startup that's looking to minimize the symptoms of epilepsy or Parkinson's disease.
www.cnet.com
At CES 2026, Gbrain's Phin Stim Signals a New Era for Implantable Brain Therapy
Macy Meyer
Jan. 8, 2026 1:43 p.m. PT
3 min read
Phin Stim is designed to help treat neurological conditions by gently stimulating the brain with precise electrical signals. Macy Meyer/CNET
CES has a unique rhythm. Fast footsteps on carpeted aisles. Neon slogans. Screens flashing promises about the future being smarter, faster, louder. Covering startups on the floor means learning to filter aggressively, to keep moving even when something looks interesting, because there's always another booth waiting.
And then, sometimes, something interrupts that rhythm.
In the middle of the noise, I found myself in the corner of the Las Vegas Convention Center at a booth for Gbrain, a Korean neurotechnology startup specializing in advanced brain-computer interface medical solutions and implantable brain-stimulation devices. No spectacle. No buzzwords shouted from a screen. Just precise hardware, clinical diagrams and conversations that felt unusually grounded for a show known for hype and an oversaturation of AI-nonsense.
It wasn't trying to be the future of everything. It was trying to fix something specific, and that's what made it stand out.
How the Phin Stim works on the human brain
The brain communicates through tiny electrical impulses. When those signals become irregular -- as they can in conditions like epilepsy or Parkinson's disease -- the results can be severe. Phin Stim works by monitoring brain activity and delivering targeted stimulation to help guide those signals back into healthier patterns.
Think of it less like controlling the brain and more like correcting interference on a signal line.
One of the key innovations is Gbrain's ultrathin, flexible electrodes, which sit on the surface of the brain rather than pressing into it like other brain implants. Because they're soft and adaptable, they conform to the brain's natural shape, improving signal quality while reducing irritation. It's the difference between wearing a rigid helmet and something that actually moves with you.
The long-term goal is a fully implantable system: something that can work continuously inside the body, monitoring brain activity and responding when intervention is needed, without bulky external hardware.
The future of Gbrain's work and innovation
I spoke with Euiyoung Kim, a manager at Gbrain, who holds degrees in neuroscience, about the future of Phin Stim and Gbrain's innovations.
Gbrain is showcasing two versions of its flagship system, Phin Stim, at CES. The first is undergoing clinical trials and the second is a prototype; both are currently under review by a regulatory body in Korea, according to Kim. The earlier model was a CES 2025 Innovation Awards Honoree, while the updated version earned the same recognition for CES 2026. The newer Phin Stim is smaller, cleaner and more integrated -- less like a prototype and more like a medical device inching toward real-world use.
Gbrain is showcasing two versions of its flagship system Phin Stim at CES 2026.
Macy Meyer/CNET
"[The goal of the devices] is more towards minimizing the symptoms," Kim said. "It would be great if we could further get it to research where we discover the actual core causes of these diseases, but they currently focus more on making people's lives less hard, bringing everyday life back to patients."
What struck me most was how little Gbrain leaned into sci-fi narratives or overpromises. There were no grand claims about mind reading, enhancement or futuristic spectacle. This was neuroscience presented as medicine, not mythology. The focus was squarely on patients whose conditions don't respond well to medication alone and on giving clinicians more precise tools to help them. That restraint felt rare and refreshing on a show floor where ambition can outpace responsibility.
Rather than chasing attention, Gbrain seems focused on the unglamorous fundamentals: manufacturing standards, clinical validation, regulatory pathways and the intense work required to turn technology into treatment.
After hours of walking the CES floor, Gbrain was one of the booths I kept thinking about. In an industry obsessed with speed, Gbrain is moving at the pace medicine demands. And in a space crowded with promises about what technology might do someday, this was a reminder that some of the most meaningful innovation is focused on what technology can do now -- for people who actually need it.