BRN Discussion Ongoing

Frangipani

Top 20
First BRN press release I am aware of that was written by someone from Bospar Communications, the Public Relations & Marketing Agency that was recently hired:


Launch of BrainChip Developer Hub Accelerates Event-Based AI Innovation on Akida™ Platform with Release of MetaTF 2.13

NEWS PROVIDED BY
Brainchip
June 19, 2025, 06:21 GMT



BrainChip announces new Developer Hub and MetaTF toolkit, enabling seamless development and deployment of machine learning models on its Akida™ platform.

LAGUNA HILLS, CA, UNITED STATES, June 19, 2025 /EINPresswire.com/ --

BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based brain-inspired AI, today announced the release of MetaTF 2.13 on its newly launched Developer Hub, a comprehensive portal designed to accelerate AI development on the Akida™ platform. The BrainChip Developer Hub serves as a centralized resource for developers building intelligent edge applications, providing access to tools, pre-trained models, technical documentation, and the company’s MetaTF toolkit. MetaTF 2.13 features seamless conversion, quantization, and deployment of machine learning models on Akida. It is compatible with leading frameworks including Keras and ONNX with support for Jupyter Notebooks, enabling rapid prototyping and optimization.

“We created the Developer Hub to streamline the experience for edge AI developers and give them the tools to move from concept to deployment quickly,” said Sean Hehir, CEO of BrainChip. “With our Akida processor, highly intuitive software stack, and world-class models, we’re delivering solutions that are both high-performing and energy-efficient.”

As part of this launch, BrainChip introduced two high-efficiency models optimized for edge performance. The eye-tracking model is ideal for smart glasses and wearable devices, delivering over 99% accuracy. Built on BrainChip’s proprietary Temporal Event-based Neural Networks (TENNs), it offers real-time gaze detection while dramatically reducing power consumption by processing only motion-relevant data.

The gesture recognition model is designed for embedded applications in consumer electronics, robotics, and IoT and achieves 97% accuracy. By leveraging Akida’s event-based processing and high-speed vision sensors from event-based cameras, it enables ultra-low latency gesture interfaces without sacrificing precision.

These models demonstrate the power of Akida’s event-based architecture across a wide array of real-world applications including autonomous vehicles, industrial automation, AR/VR and spatial computing, smart environments and IoT, and security and surveillance.

BrainChip’s new Developer Hub and AI models underscore the company’s commitment to making edge AI more accessible and scalable. With Akida, developers can build responsive, privacy-aware applications that operate at ultra-low power—ideal for battery-constrained and latency-sensitive environments.

Developers can access the models and tools today by visiting: https://developer.brainchip.com

About BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY)
BrainChip is the global leader in Edge AI on-chip processing and learning. The company’s first-to-market, fully digital, event-based AI processor, Akida™, uses neuromorphic principles to mimic the human brain. By analyzing only the essential sensor inputs at the point of acquisition, Akida delivers data processing with unmatched efficiency, precision, and energy savings. Integrated into SoCs on any digital process technology, Akida Neural Processor IP has demonstrated significant advantages across today's workloads and networks. It provides a platform for developers to build, fine-tune, and run their models using standard AI tools such as TensorFlow and Keras.

BrainChip’s Temporal Event-based Neural Networks (TENNs) build on state space models (SSMs) by introducing a time-sensitive, event-driven processing framework that enhances efficiency and makes them ideal for real-time, streaming Edge applications. By enabling efficient computation with optimized models and hardware execution, BrainChip makes real-time streaming Edge AI universally deployable across industries such as aerospace, autonomous vehicles, robotics, mobile, consumer electronics, and wearable technology. BrainChip is leading the way toward a future where ultra-low power, on-chip AI near the sensor not only transforms products but also benefits the planet. Learn more at www.brainchip.com.

Follow BrainChip on Twitter: @BrainChip_inc
Follow BrainChip on LinkedIn: BrainChip LinkedIn

Madeline Coe
Bospar Communications
+1 224-433-9056

maddie@bospar.com

Legal Disclaimer:
EIN Presswire provides this news content "as is" without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
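For anyone curious what the MetaTF workflow described in the release actually looks like in practice, here is a rough, untested sketch of the usual quantize-then-convert flow, based on my reading of the public docs. The module paths and signatures (quantizeml.models.quantize, QuantizationParams, cnn2snn.convert, the predict call) are assumptions on my part, so check the examples on developer.brainchip.com before relying on any of it.

import numpy as np
from tensorflow import keras
from quantizeml.models import quantize, QuantizationParams  # assumed import paths
from cnn2snn import convert                                  # assumed import path

# Any Keras model will do; a tiny CNN stands in for a real one here.
inputs = keras.Input(shape=(64, 64, 3))
x = keras.layers.Conv2D(16, 3, strides=2, activation="relu")(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10)(x)
float_model = keras.Model(inputs, outputs)

# Step 1: quantize weights and activations (8-bit here; lower bit widths are also used on Akida).
quantized_model = quantize(float_model,
                           qparams=QuantizationParams(weight_bits=8, activation_bits=8))

# Step 2: convert the quantized Keras model into an Akida model.
akida_model = convert(quantized_model)
akida_model.summary()

# Step 3: run inference; without Akida hardware this runs on a software/virtual backend.
dummy_input = np.random.randint(0, 255, (1, 64, 64, 3), dtype=np.uint8)
outputs = akida_model.predict(dummy_input)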
 
Reactions: 22 users
I'm out if there's another cap raise

I don’t think we’ve finished raising funds from the last one, and at the current SP it’s probably going to fall way short of what the company hoped for, I guess. So another raise will be due, with absolutely no sign of the decent announcement we all hoped for.
 
Reactions: 6 users

Frangipani

Top 20
On a brighter note, BrainChip will be attending four large-scale events later this month:

[Attachment 86433: graphic listing the four events]

We’d already found out about three of the four events prior to the newsletter release. Here is some more info about the fourth one, the Living Planet Symposium in Vienna, organised by ESA.

Douglas McLelland and Gilles Bézard will be representing BrainChip with a talk titled “Event-driven computation and sparse neural network activity deliver low power AI” as part of the session Orbital Intelligence for Earth Observation applications: The edge of AI In Space, chaired by Gabriele Meoni from ESA, who has first-hand experience with AKD1000 (cf. https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-462334).



We’d already found out that Douglas McLelland and Gilles Bézard will be representing BrainChip at the upcoming Living Planet Symposium in Vienna (organised by ESA) thanks to a reference to that conference in the latest BRN newsletter (see my 8 June post for more details 👆🏻).

That event in Austria’s capital now also shows up on our website under “What’s New”:


[Screenshots of the “What’s New” section on the BrainChip website]
 
Reactions: 13 users

DK6161

Regular
You were led to believe in the company?

By whom? (Be specific.)

When you invest in a company, you generally do some kind of research on it before you invest.

Did YOU do this? Then, when YOU decided to throw money at it, did YOU continue to do research and keep updated on what's going on?

You say you no longer believe that the company is going to succeed. So obviously YOU have seen something that has brought YOU to that conclusion.

Notice how I emphasised the word YOU there. That implies that it is YOU who made the decision to first research what the company does, then YOU decided to invest, then YOU continued to research and follow the company's progress, and now YOU find yourself no longer believing in what the company is trying to achieve, for reasons XYZ.

To be honest with you, if I were in your shoes I know exactly what I would do next. I'm not going to tell you what that is, because that is a decision that ONLY YOU can make.

For the record, we all make bad decisions, and anyone who says otherwise is flat out lying. I've sold shares in two companies this year: one because I bought the hype and got my ass handed to me, and the other, funnily enough, I should have held on to, as it has now done a 3x from my original buy price. I sold it at 150% profit.

I'm still invested in Brainchip because I still firmly believe that they will be successful. That is MY BELIEF. It's taking a bit longer than I expected, but hey, I can't control that.

We all know who the number 1 fan boy is that made the company look like a "no brainer" to invest in.
1. We all know the no. 1 fan boy who made the company look like it was about to go to the moon.
2. Look at previous AGMs where the CEO said there was going to be an explosion of revenue.

But yes, I made the decision to invest based on the hype.
Thanks for pointing that out.
 
Reactions: 3 users

DK6161

Regular
I'm out if there's another cap raise
Like @AusEire said.
Don't blame the company, mate. It's your fault for throwing money at it.
 
Reactions: 2 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Back in January, after listening to a podcast featuring Steven Brightfield, I mentioned that I had the impression he may have been hinting at Meta as one of the companies involved in discussions around smart glasses (as above).

So, it’s with great interest that I’ve seen reports today about a new collaboration between Meta and Oakley on a sports-focused line of smart glasses, with an official announcement reportedly scheduled for Friday, 20 June.

The partnership builds on the existing relationship between Oakley and Ray-Ban under their parent company, EssilorLuxottica, which already works closely with Meta on the Ray-Ban Meta smart glasses.

I’ll definitely be tuning into this announcement to learn more. Early reports suggest the glasses are being optimised for sports and athletic use, with a focus on hands-free video recording capabilities. The announcement will also likely reveal specific features, pricing, and availability for the glasses.

Over the past few days, I’ve noticed a recurring theme across industry commentary - battery life continues to be a major challenge for smart glasses, particularly when video capture is involved, which significantly increases power consumption.

Thanks to our collaboration with Onsor, we know BrainChip’s Akida technology can support all-day battery life without the need for external battery packs or cloud connectivity. That’s a capability that most players in this space are still chasing.

If Meta plans to push harder into high-performance, video-centric smart glasses, it seems likely they would need a low-power solution like ours to get them across the line. Edit: As pointed out to me by @Diogenese, monitoring for early signs of a seizure, as in the Onsor glasses, is largely a passive function, similar to wake-word detection. In contrast, continuous video processing and classification is a significantly more active workload for any processor. So, while Akida with TENNs could theoretically help extend battery life, we’d need to wait for real-world performance data before drawing conclusions about its impact in such demanding use cases.

And - whether it’s meaningful or not - we’ve seen a few likes from Meta employees on LinkedIn.

Oh, and then there's also the small fact that Sean recently confirmed that the company that is manufacturing the glasses for Onsor is the same one doing them for Meta, EssilorLuxottica.

So, I welcome you to draw your own conclusions.



Video start time: 14:26, when Sean mentions the Onsor frames are made by EssilorLuxottica, who also make the Meta frames.

An EXTRACT from this evening's press release.


[Screenshot: extract from the press release]

Hopefully...🤞

[Two further screenshots]
 
Reactions: 8 users

manny100

Top 20
It's interesting how Akida and TENNs co-exist. Bravo's post above concerning Onsor's epilepsy-detection glasses is a good example: Akida, the hardware, provides the signals, and a TENNs model (with software perhaps designed by Onsor?) does the thinking and tells Akida, which delivers the response, e.g. a warning of a seizure within the hour.
They are complementary. TENNs are the brains, and Akida is the body.
Another example: if I were hiking in bush I had never been to before, I could use a drone for forward navigation.
If for some reason I wanted to communicate with the drone via hand signals (e.g. a raised arm means return), a camera or other suitable sensor would pick up my arm motions.
Akida provides the cloud-free, event-based, low-power, on-chip learning platform.
TENNs do the 'thinking'.
Akida collects and pre-processes the event data in real time (say, my arm moves upward).
It then runs the TENNs model internally to interpret what that movement means (maybe it's my gesture for “come back and hover”).
Then Akida enables the appropriate response: the drone returns and hovers.
It's all in real time on low power; traditional edge AI would run out of power in no time.
That, very basically, is how Akida and TENNs run together.
The key is that Akida is the hardware, and it needs software, e.g. a TENNs model, to do the 'thinking'.
Easy to see why the tiny Akida Pico and TENNs are a good pair.
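To make that division of labour concrete, here is a purely illustrative toy sketch. Every class and name below is a made-up stand-in rather than a real BrainChip or drone API: the sensor supplies sparse events, the TENNs gesture model does the 'thinking', and the application code turns the result into a drone action.

import random

class EventCamera:
    """Stand-in for an event-based sensor that only reports motion."""
    def read_events(self):
        return [("arm", "up")] if random.random() > 0.5 else []

class TennGestureModel:
    """Stand-in for a TENNs gesture model mapped onto Akida."""
    def classify(self, events):
        return "arm_raised" if ("arm", "up") in events else "none"

class Drone:
    """Stand-in for the drone's flight controller."""
    def return_and_hover(self):
        print("Returning and hovering")

camera, model, drone = EventCamera(), TennGestureModel(), Drone()
for _ in range(10):  # one short control loop: events in, gesture out, action taken
    if model.classify(camera.read_events()) == "arm_raised":
        drone.return_and_hover()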
 
Reactions: 6 users

FiveBucks

Regular
I'm out if there's another cap raise
It is inevitable.

They said at the AGM they are aiming for $9 mill revenue this year. Our cash burn is significantly higher.
 
Reactions: 4 users
New GitHub update 20 hrs ago on Akida/CNN2SNN, including a TENNs release, modules and models etc., by the looks of it.

@Diogenese will probs know more about anything unusual or new tucked in there.



20 hours ago, ktsiknos-brainchip
Tag 2.13.0-doc-1, commit d8435c2

Upgrade to QuantizeML 0.16.0, Akida/CNN2SNN 2.13.0 and Akida models 1.7.0


Update QuantizeML to version 0.16.0

New features

  • Added a bunch of sanitizing steps targeting native hardware compatibility:
    • Handle first convolution that cannot be a split layer
    • Added support for "Add > ReLU > GAP" pattern
    • Added identity layers when no merge layers are present after skip connections
    • BatchNormalisation layers are now properly folded in ConvTranspose nodes
    • Added identity layers to enforce layers to have 2 outbounds only
    • Handled Concatenate node with a duplicated input
  • Added support for TENNs ONNX models, which include sanitizing, converting to inference mode and quantizing
  • Set explicit ONNXScript requirement to 0.2.5 to prevent later versions that use numpy 2.x

Bug fixes

  • Fixed an issue where calling sanitize twice (or sanitize then quantize) would lead to invalid ONNX graphs
  • Fixed an issue where sanitizing could lead to invalid shapes for ONNX Matmul/GEMM quantization
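If I'm reading the TENNs ONNX items above correctly, the user-facing flow would presumably look something like the sketch below. I'm assuming quantizeml.models.quantize accepts an ONNX ModelProto, that the new sanitizing steps run as part of that call, and the file name is made up; treat all of it as a guess to be checked against the MetaTF docs.

import onnx
from quantizeml.models import quantize, QuantizationParams  # assumed import paths
from cnn2snn import convert                                  # assumed import path

# Hypothetical exported TENNs model; any ONNX file would take its place.
onnx_model = onnx.load("tenn_eye_tracking.onnx")

# Quantize (sanitizing is assumed to happen under the hood per the notes above) ...
quantized = quantize(onnx_model,
                     qparams=QuantizationParams(weight_bits=8, activation_bits=8))

# ... then convert to an Akida model, now that CNN2SNN 2.13 handles the
# quantized buffer temporal-convolution layers.
akida_model = convert(quantized)
akida_model.summary()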

Update Akida and CNN2SNN to version 2.13.0

Aligned with FPGA-1679 (2-nodes) / 1678 (6-nodes)

New features

  • [cnn2snn] Updated requirement to QuantizeML 0.16.0
  • [cnn2snn] Added support for ONNX QuantizedBufferTempConv and QuantizedDepthwiseBufferTempConv conversion to Akida
  • [akida] Full support for TNP-B in hardware, including partial reconfiguration with a constraint that TNP-B cannot be the first layer of a pass
  • [akida] Full support of Concatenate layers in hardware, feature set aligned on Add layers
  • [akida] Prevented the mapping of models with both TNP-B and skip connections
  • [akida] Renamed akida.NP.Mapping to akida.NP.Component
  • [akida] Improved model summary for skip connections and TNP-B layers. The summary now shows the number of required SkipDMA channels and the number of components by type.
  • [akida] Updated mapping details retrieval: model summary now contains information on external memory used. For that purpose, some C++/Python binding was updated and cleaned. The NP objects in the API have external members for memory.
  • [akida] Renamed existing virtual devices and added SixNodesIPv2 and TwoNodesIPv2 devices
  • [akida] Introduced create_device helper to build custom virtual devices
  • [akida] Mesh now needs an IP version to be built
  • [akida] Simplified model statistics API and enriched with inference and program clocks when available
  • [akida] Dropped the deprecated evaluate_sparsity tool
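Of the 2.13 items above, the new virtual devices and the richer summary are the ones I'd try first. Here's a minimal sketch of what that might look like; akida.TwoNodesIPv2() and the file name are my guesses based purely on the names in the changelog, so the real calls may differ.

import akida

# Load a model that has already been quantized and converted (hypothetical file name).
model = akida.Model("gesture_tenn.fbz")

# Build one of the new IPv2 virtual devices and map the model onto it.
device = akida.TwoNodesIPv2()   # assumed constructor, named after the changelog entry
model.map(device)

# Per the notes above, the 2.13 summary should now report SkipDMA channels,
# component counts by type and external memory use.
model.summary()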

Update Akida models to 1.7.0

  • Updated QuantizeML dependency to 0.16.0 and CNN2SNN to 2.13.0
  • Sparsity tool name updated. It now returns Python objects instead of simply displaying data, and supports models with skip connections
  • Introduced tenn_spatiotemporal submodule that contains model definition and training pipelines for DVS128, EyeTracking and Jester TENNs models
  • Added creation and training/evaluation CLI entry points for TENNs

Introducing TENNs modules 0.1.0

  • First release of the package that aims at providing modules for BrainChip TENNs
  • Contains blocks of layers for model definition: SpatialBlock, TemporalBlock, SpatioTemporalBlock that come with compatibility checks and custom padding for Akida
  • The TemporalBlock can optionally be defined as a PleiadesLayer following https://arxiv.org/abs/2405.12179
  • An export_to_onnx helper is provided for convenience
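And here's a guess at what the new tenns_modules package might look like in use, based only on the block names and the export_to_onnx helper mentioned above. I'm assuming PyTorch-style modules and inventing the constructor arguments, so this is a sketch of the idea rather than the actual API.

import torch
from tenns_modules import SpatioTemporalBlock, export_to_onnx  # assumed imports

class TinyTenn(torch.nn.Module):
    """Toy spatiotemporal model built from one assumed SpatioTemporalBlock."""
    def __init__(self):
        super().__init__()
        # Hypothetical arguments: channel counts and kernel sizes.
        self.block = SpatioTemporalBlock(in_channels=2, out_channels=16,
                                         spatial_kernel=3, temporal_kernel=5)
        self.head = torch.nn.Linear(16, 10)

    def forward(self, x):  # x assumed to be (batch, channels, time, height, width)
        feats = self.block(x).mean(dim=(2, 3, 4))  # pool over time and space
        return self.head(feats)

# Assumed helper signature: model plus an output path.
export_to_onnx(TinyTenn(), "tiny_tenn.onnx")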

Documentation update

  • Added documentation for TENNs APIs, including tenns_modules package
  • Introduced two spatiotemporal TENNs tutorials
  • Updated model zoo page with mAP50, removed 'example' column and added TENNs
  • Added automatic checks for broken external links and fixed a few
  • Cosmetic changes: updated main logo and copyright to 2025
 