Brilliant,
Our illustrious leader, Sean Hehir, participated in a video session with the Industrial Technology Research Institute (ITRI).
Intriguingly, an article published online about 16 hours ago describes how ITRI was at CES, showing off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance.
I wonder if Sean got to polish up his badminton skills? More importantly, I wonder if we’re involved in this AI athlete training system in some way, shape or form?
AI taught me to be a (slightly) better badminton player at CES
The Industrial Technology Research Institute was in Las Vegas to show off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance. The sport that served as our demonstration? Badminton.
US badminton Olympian Howard Shu plays with ITRI’s AI Badminton Trainer at CES 2025. [Photo: ITRI]
By Chris Morris · 3 minute read
I am not what you would call a finely tuned athletic machine. I am, if anything, an outdated lawnmower engine held together by duct tape and rust. So when I was offered the chance to let AI help make me a better athlete at CES, I jumped at it.
The offer came from the Industrial Technology Research Institute (ITRI), a non-profit that uses applied research to drive industrial development. They were showing off a new AI-driven trainer that uses image analysis and data inference to improve an athlete’s performance. The sport that was the focus of this particular demonstration? Badminton.
The upside, I thought, would be that no one at CES would be especially good at badminton, so it wouldn’t be as humbling as it would be if the system were tracking, say, my golf swing. Then Howard Shu strolled by.
Shu is a member of the U.S. Olympic badminton team and has been playing the sport for 26 years. He’s tall, in remarkable shape, and knows how to make a shuttlecock do pretty much whatever he wants. He is, in other words, the antithesis of me. We’ll get back to him in a moment, though.
[Photo: ITRI]
To get a sense of my abilities, the training tool used a series of cameras to track my stance, swing, and other movements over five volleys. The data was fed into a generative AI system, which instantly offered recommendations. At the same time, information like the speed of my volleys, the height by which they cleared the net, and where they landed on the court was captured and factored in as well.
I thought I was doing okay, honestly. I whacked the shuttlecock at 52 miles per hour, cleared the net by two feet, and made my invisible opponent chase it around the court. Then I walked over to see what the AI had to say.
“Ah, my friend!” it wrote. “It looks like we’ve got a bit of a situation here.”
Maybe I hadn’t done quite as well as I thought I had.
The AI (which called itself Bill) noted that I was standing too close to the shuttlecock, which limited my ability to reach for the shot. Also, I needed to work on my weight transfer and balance. My footwork was not exactly ideal either.
And while it had my attention, Bill noted that my grip on the racket “might not be ideal for controlling the shuttlecock effectively or generating power” with my shots. And my follow-through was “abrupt.”
Basically, the machine told me, I suck.
That’s when Shu took a turn. His speeds were closer to 80 mph—and he tightly grouped the shots. (He later told me he felt the system’s speed detection needed some calibrating as he normally hits faster than that.)
I gave the system one more try, with Shu suggesting I stand in a different spot on the court—and while my shots weren’t as powerful, they were much more tightly grouped. I won’t be threatening Shu’s spot on the Olympic team anytime soon, but I could be more of a beast at the local recreation center.
[Photo: ITRI]
This training tool is already used by the Taiwan Olympic badminton team, an ITRI representative told me. Shu said it was the first time he had had an opportunity to try it—adding that he expects a growing number of athletes will begin to incorporate AI into their training.
“It’s able to pick up things you’re not able to pick up with the naked eye,” he said. “I can tell you my smash is fast, but I’m not going to be able to tell you the exact speed. You’re able to dial in exact numbers and get data driven results. As high performing athletes, we’re always trying to find that 1% advantage.”
Bill, I should note, remained unimpressed with my performance on the court.
Shu might be looking for a 1% advantage. I’d settle for the AI being a bit less judgmental.
Looking back at the chart from last year’s CES, the SP took a few weeks to rise to $0.50 after it was over. Let’s hope it’s the same this year, as things are looking so much better 12 months on.
Please do not just rely on other posters’ interpretations and/or supposed “transcripts” (and that includes mine) to find out what Steve Brightfield said in the interview with Don Baine (aka The Gadget Professor); instead, listen carefully to the interview yourself.
In brief: DYOR.
The way I see it, a few of the comments relating to that interview are testimony to some BRN shareholders reading too much into our CMO’s words. They hear what they would love to hear, not necessarily what was actually said.
For example: Did Steve Brightfield really let slip that we are involved with Apple, or say that five years from now he sees BrainChip’s offerings “embedded in every product”?
Well no, that is not what I gathered from this interview.
Oh, and LOL! What work of fiction is THAT?!
View attachment 75798
This “transcript” by @curlednoodles is NOT what Don Baine and Steve Brightfield actually said. It seems to be some kind of AI-generated version that resembles the real interview content-wise, but apart from some accurate phrases here and there, it is by no means a literal transcript! Do yourself a favour and listen to the original video interview instead.
On top of that, the snippets are out of sequence, and those three snippets should have been separated by ellipses (…) to make clear that other things were said in between. It is not the coherent dialogue it appears to be, and it is missing important context.
Here are some excerpts of what Steve Brightfield ACTUALLY said (please feel free to comment on my transcript, in case you spot something that is not accurate):
From 12:52 min
“Most of the AI chips you hear about, they’re doing one, tens of watts or hundreds of watts or more. This [holding up the newly announced AKD1000 M.2 form factor] is a 1 watt device, and we actually have versions [now holds up an unidentified chip, which I took to be just a single AKD1000 or AKD1500 chip, but I could be wrong] that are fractions of a watt here [puts down the chip] and we announced this fall Akida Pico, which is microwatts, so we can use a hearing aid battery and have this run for days and days doing detection algorithms [?]. So it is really applicable to put on wearables, put it in eyeglasses, put it in earbuds, put it in smart watches.”
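(A quick back-of-envelope check on the “hearing aid battery … days and days” claim. All figures below are my own assumptions, not anything from the interview: a size-312 zinc-air cell holds roughly 170 mAh at 1.45 V, and I take “microwatts” to mean an average draw on the order of 100 µW.)

```python
# Back-of-envelope battery life; every figure is an assumption for
# illustration, not a BrainChip spec.
battery_mwh = 170 * 1.45         # size-312 zinc-air cell: ~170 mAh at 1.45 V
draw_mw = 0.1                    # assume ~100 microwatts (0.1 mW) average draw
hours = battery_mwh / draw_mw    # about 2,465 hours
print(f"{hours:.0f} h, or roughly {hours / 24:.0f} days")
```

Even allowing generous overhead on top of that, “days and days” on a hearing aid battery looks plausible at microwatt-class power.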
Don Baine, the interviewer, interrupts him and mentions he himself is “grossly hearing-impaired” and is wearing hearing aids but thinks they are horrible, adding that “I would love to see your technology in better [ones?] than those”.
To this, Steve Brightfield replies:
"One of the things we demonstrate in our suite is an audio denoising algorithm. So if you have a noisy environment like the show here, you can pass your audio through the Machine Learning algorithm, and it cleans it up and it sounds wonderful. And this is the thing you're gonna start seeing with the deregulation of hearing aids by the FDA. You know, Apple Pro earbuds have put some of this technology in theirs. And we're seeing, you know - I talked to some manufacturers that have glasses, where they put microphones in the front and then AI algorithms in the arm of the glasses, and it doesn’t look like you’re wearing hearing aids, but you’re getting this much improved audio quality. Because most people have minor hearing loss and they don't want the stigma of wearing hearing aids, but this kind of removes it - oh, it's an earbud, oh, it's glasses I am wearing, and now I can hear and engage."
DB: “Sure, I could see that, that's great. So does this technology exist? Is that something someone could purchase in a hearing aid, for example?”
SB: “Actually, I’ve seen manufacturers out on the floor doing this. We are working with some manufacturers to put more advanced algorithms in the current ones in there, yes.”
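(For anyone wondering what “pass your audio through the Machine Learning algorithm” boils down to in practice: most denoisers, learned or classical, frame the audio, move to the frequency domain, apply a per-bin gain mask, and resynthesise. The sketch below is my own toy illustration using classical spectral subtraction, NOT BrainChip’s algorithm, which has not been published; a learned denoiser would predict the gain mask with a small neural network instead.)

```python
import numpy as np

# Toy spectral-subtraction denoiser: purely illustrative, not BrainChip's
# algorithm. Learned denoisers follow the same frame -> spectrum -> gain
# mask -> resynthesis pipeline, but predict the mask with a neural network.

def denoise(signal, noise_sample, frame=512):
    # Estimate the noise magnitude spectrum from a noise-only snippet.
    noise_mag = np.abs(np.fft.rfft(noise_sample[:frame]))
    out = np.zeros_like(signal)
    for start in range(0, len(signal) - frame + 1, frame):
        spec = np.fft.rfft(signal[start:start + frame])
        mag = np.abs(spec)
        # Attenuate each frequency bin that doesn't rise above the noise floor.
        gain = np.maximum(mag - noise_mag, 0.0) / (mag + 1e-12)
        out[start:start + frame] = np.fft.irfft(gain * spec, n=frame)
    return out

# Example: a 440 Hz tone buried in white noise at a 16 kHz sample rate.
t = np.arange(16000) / 16000
clean = np.sin(2 * np.pi * 440 * t)
noise = 0.5 * np.random.randn(16000)
recovered = denoise(clean + noise, noise)
```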
As for the alleged Apple “name drop”:
Steve Brightfield talks not only about BrainChip’s audio denoising algorithm demo at CES 2025, but also about the growing importance of audio denoising algorithms in general in the context of the FDA’s deregulation of hearing aids, and THEN says “Apple Pro earbuds have put some of this technology in theirs [stressing that last word]”.
I take it he meant audio denoising algorithms in general when he referred to “this technology”, not to BrainChip’s audio denoising algorithm specifically.
Also, the fact that our CMO said he had talked to some manufacturers of smart glasses does not necessarily mean he had been in business negotiations with them or that they are already customers behind an NDA. He may very well have just walked the CES 2025 floor, checked out the respective manufacturers’ booths and products, and chatted with company representatives while handing out promotional material and business cards to get them interested in BrainChip’s offerings that could improve their current products.
As for hearing aids, my interpretation of this little exchange is that hearing aids with our technology are not yet available for purchase, but that manufacturers are working on it, and once such hearing aids become available, our CMO foresees them surpassing those currently on the market with less advanced audio denoising algorithms.
After talking in more detail about the VVDN Edge AI Box and the newly announced M.2 form factor, Steve Brightfield moved on to the topic of neural networks and explained that there have so far been three waves of NN algorithms: the first wave of AI was based on Convolutional Neural Networks (CNNs); the second wave was based on transformer-based neural networks, which he said have amazing capabilities in generating images and text (he gave ChatGPT as an example) but are very compute-intensive; and the very recent third wave, which BrainChip is working on, consists of State-Space Models, popularised by Mamba.
He mentions that BrainChip calls its own version of a State-Space Model TENNs and explains it a little, calling the real-life solution it enables “a personal assistant that can actually go in an earbud. We are not talking to the cloud here with a supercomputer. We have taken basically a ChatGPT algorithm and compressed it into a footprint that will fit on a board like this [briefly picks up the M.2 form factor]. And then you’re not sending your data to the cloud, you don’t need a modem or connectivity, and everything you say is private, it’s just being talked to this local device here. So, there’s privacy, security and low latency for this.”
DB: “Are there devices that are out now that incorporate that, not necessarily, you know, hearing aid-types of devices?”
SB: “Not the State-Space Models I’m talking about. All you’ll see today is transformer-based models that take a lot of computing. So probably the smallest devices you are seeing this on right now are $1,000 smartphones.”
I understand the word “this” to refer to the just-mentioned transformer-based models, meaning tiny devices containing State-Space Models such as TENNs (a “personal assistant that can go into an earbud”) are not yet on the market.
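(To make the compute contrast concrete, here is my own illustration, not anything from the interview: a transformer re-reads its entire token history at every step, while a State-Space Model folds everything it has seen into a fixed-size state that is updated at constant cost per sample. That fixed footprint is what makes earbud-class deployment thinkable. A minimal diagonal SSM, with all sizes and values invented for illustration:)

```python
import numpy as np

# Minimal diagonal state-space model: purely illustrative, not TENNs or
# Mamba, whose parameterisations are far more sophisticated.
d = 64                            # state size, chosen arbitrarily
A = np.full(d, 0.95)              # diagonal state transition (decay)
B = 0.1 * np.random.randn(d)      # input projection
C = 0.1 * np.random.randn(d)      # output readout

def ssm_step(state, u):
    """Consume one input sample u in O(d) time and memory."""
    state = A * state + B * u     # fold the new sample into the state
    return state, C @ state       # emit an output from the state

state = np.zeros(d)
for u in np.random.randn(16000):  # e.g. one second of 16 kHz audio
    state, y = ssm_step(state, u) # the footprint never grows with history
```

Contrast this with a transformer, whose key/value cache grows with every token processed; that growth is the compute and memory bill Steve is referring to.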
Towards the end of the interview, they talk about the advantages of neuromorphic computing that companies will benefit from, such as independence from the cloud, which translates into more privacy and security. It was in this context that Don Baine asked: “Where do you see this five years from now?”
So when Steve Brightfield answered “I see this embedded in every product and making our lives easier”, I believe he was referring to the benefits of on-device Edge AI and neuromorphic computing in general, not specifically to BrainChip’s offerings. Such a statement wouldn’t make sense anyway: to the best of my knowledge, no company in the world has a 100% monopoly on anything.
Something informative I took away from the interview, which I can’t recall having been mentioned yet, was that each of the VVDN Edge AI Boxes apparently contains two of the newly announced AKD1000 M.2 form factor devices. Could their manufacturing have anything to do with the long delay in the Edge AI Boxes’ production and shipping? (People who had pre-ordered and fully prepaid theirs last February did not receive them until early December.)
I don’t understand how some people act like this is a contest where the goal is to provide the best news and win something in the end, like free stocks or a new washing machine. Everyone here is doing their best to contribute voluntarily. Imagine that some people sacrifice their free time to continuously search for news because they genuinely enjoy informing others. And then to criticize them because someone thinks they have a monopoly on posting news is just arrogant.

I did listen to the interview myself, thank you very much. This is not the first time you’ve made completely false assertions about me, which reflects far more on your character than on mine.
For the record, I posted my own comments on the interview after listening to it a second time, before sharing what Curlednoodles had to say from HC. I made no claims regarding the accuracy of his summary. And I certainly did not encourage people to rely on that information exclusively or not listen to the actual interview themselves.
If you have concerns about Curlednoodles' post, then I suggest you address those concerns directly with him on the Crapper, where you can engage in your forensic critiquing and pathological nitpicking to your heart’s content.
As a side note, Curlednoodles explicitly labeled his post as "some interesting snippets from the interview." He didn't claim it was a precise, chronological transcript. But it's a free world, so if you want to go chew his balls off, I can't stop you. With any luck, you'll like it over there so much that you'll decide to stay for an extended holiday.
I wanna get them in a ring somewhere and watch ’em duke it out.
Bravo, don’t let yourself get upset over and over again by him. This is all about exchanging information, writing some nonsense, and having fun together. You’re one of the best here … after me, of course.
Brilliant,
Great to have our CMO, a well-respected industry professional who’s happy to share company progress and also how we compare to other, less worthy competition.
This, in my view, is a perfect example of how to keep shareholders updated on progress without any NDA infringements.
Just remember the words: “there are partners I cannot name right now.”

Nvidia, MediaTek to expand partnership; reportedly include chip for Nintendo's Switch 2
Monica Chen, Hsinchu; Rodney Chan, DIGITIMES Asia, Friday 10 January 2025
Credit: DIGITIMES
According to industry sources, Nvidia and MediaTek are broadening their collaboration to include core chips for Nintendo's Switch 2 gaming consoles, following their partnership in developing AI supercomputer chips.