BRN Discussion Ongoing

The Pope

Regular
ok who was the one with small change of over $660,000 in their pocket that just wiped out the 22c line, over 3 mill purchased at 10:35:40

wasn't me
Had to spend my tax return on something. Hopefully a wise decision. 😉
 
  • Haha
  • Fire
  • Like
Reactions: 8 users

manny100

Top 20
Green Friday??? Something is brewing folks… or the usual pump before we get some reports? I have no idea….

The last 2 quarters saw SP rises prior to the quarterly release. Earlier quarters, however, saw the share price fall into the quarterlies. I posted charts on the crapper showing this.
We have what is called a Bollinger Band squeeze ATM. The bands have become very narrow and, like a tight coil, often bounce when let go.
We may see a move up before the quarterly for the 3rd consecutive quarter.
It would be nice to see some good news soon if only to fatten up this bounce.
This likely will be traded up higher.
 
  • Like
Reactions: 12 users

Diogenese

Top 20
We've had a burst of AFRL/RTX(ILS) microDoppler news recently, and the SBIR was to run for 6 months to 1 year. It was announced in December 2024, so it could pop up any time from now to the end of the year.

Another opportunity is Chelpis. They are a Taiwanese cybersecurity company. Which countries in all the world would be under more constant cyber attack than Taiwan?

That urgency may explain why the Chelpis agreement is directed to Akida 1 SoC.

There's a lot to unpack in the Chelpis announcement:

1. Akida 1000 chips for immediate inclusion in M2 cybersecurity cards for qualification and deployment

2. Collaboration to develop a PQ-AI robotic chip (PQ = post-quantum, i.e., cryptography hardened against future quantum-computer cyber attacks).

3. Akida IP to also be used for NPU capabilities (Akida's primary function)

4. Exploring "advanced Akida IP visual GenAI capabilities" (Akida GenAI).

5. Applied for Taiwanese government support for the development

6. Made-in-USA strategy


https://www.chelpis.com/post/brainchip-collaborates-with-chelpis-mirle-on-security-solution

BrainChip Collaborates with Chelpis-Mirle on Security Solution​

  • May 2


LAGUNA HILLS, Calif.--(BUSINESS WIRE)--BrainChip Holdings Ltd (ASX: BRN, OTCQX: BRCHF, ADR: BCHPY), the world’s first commercial producer of ultra-low power, fully digital, event-based, neuromorphic AI, today announced that Chelpis Quantum Corp. has selected its Akida AKD1000 chips to serve as the processor for built-in post-quantum cryptographic security.

Chelpis, a chip company leading the Quantum Safe Migration ecosystem in Taiwan, is developing an M.2 card using the AKD1000 that can be inserted into targeted products to support their cryptographic security solutions. The M.2 card is based on a design from BrainChip along with an agreement to purchase a significant number of AKD1000 chips for qualification and deployment. Upon completion of this phase, Chelpis is planning to increase its commitment with additional orders for the AKD1000.

This agreement is the first step in a collaboration that is exploring the development of an AI-PQC robotic chip designed to fulfill both next-generation security and AI computing requirements. This project is a joint development effort with Chelpis partner company Mirle (2464.TW) and has been formally submitted for consideration under Taiwan’s chip innovation program. The funding aims to promote a new system-on-chip (SoC) integrating RISC-V, PQC, and NPU technologies. This SoC will specifically support manufacturing markets that emphasize a Made-in-USA strategy. Mirle plans to build autonomous quadruped robotics that mimic the movement of four-legged animals for industrial/factory environments. To enable this vision, Chelpis is exploring BrainChip’s advanced Akida™ IP to incorporate advanced visual GenAI capabilities in the proposed SoC design.

"The ability to add Edge AI security capabilities to our industrial robotics project that provides the low power data processing required is paramount to successfully achieving market validation in the robotics sector," said Ming Chih, CEO of Chelpis. "We believe that BrainChip’s Akida is just the solution that we further need to bring our SoC to fruition. Their event-based processing and advanced models serve as a strong foundation for developing a platform for manufacturing customers looking to leverage advanced robotics in their facilities."

"Akida’s ability to efficiently provide cyber-security acceleration with energy efficiency can help secure autonomous robotic devices," said Sean Hehir, CEO of BrainChip. "Akida’s innovative approach to supporting LLMs and GenAI algorithms could serve as a key contributor to Chelpis as they pursue government funding to develop their SoC and advance their industrial robotic initiatives."

It looks like Chelpis are in boots and all.
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Tothemoon24

Top 20

Edge AI solutions have become critically important in today’s fast-paced technological landscape. Edge AI transforms how we utilize and process data by moving computations close to where data is generated. Bringing AI to the edge not only improves performance and reduces latency but also addresses the concerns of privacy and bandwidth usage. Building edge AI demos requires a balance of cutting-edge technology and engaging user experience. Often, creating a well-designed demonstration is the first step in validating an edge AI use case that can show the potential for real-world deployment.
Building demos can help us identify potential challenges early when building AI solutions at the edge. Presenting proof-of-concepts through demos enables edge AI developers to gain stakeholder and product approval, demonstrating how AI solutions effectively create real value for users, within size, weight and power resources. Edge AI demos help customers visualize the real-time interaction between sensors, software and hardware, helping in the process of designing effective AI use cases. Building a use-case demo also helps developers experiment with what is possible.

Understanding the Use Case​


The journey of building demos starts with understanding the use case – it might be detecting objects, analyzing the sensor data, interacting with a voice enabled chatbot, or asking AI agents to perform a task. The use case should be able to answer questions like – what problem are we solving? Who can benefit from this solution? Who is your target audience? What are the timelines associated with developing the demo? These answers work as the main objectives which guide the development of the demo.
Let’s consider our BrainChip Anomaly Classification C++ project demonstrating real-time classification of mechanical vibrations from an ADXL345 accelerometer into 5 motion patterns: forward-backward, normal, side-side, up-down, and tap. This use case is valuable for industrial applications like monitoring conveyor belt movements, detecting equipment malfunctions, and many more.
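As a sketch of how such a demo might buffer the sensor stream into classifiable chunks, the following assumes 100 Hz sampling and non-overlapping 2-second windows; the window length, stub readings, and label names are illustrative assumptions, not taken from the actual project:

```python
from collections import deque

# Hedged sketch: buffer 100 Hz 3-axis accelerometer samples into fixed
# windows for classification. Window size and labels are assumptions.
LABELS = ["forward-backward", "normal", "side-side", "up-down", "tap"]
WINDOW = 200  # 2 seconds at 100 Hz

class WindowBuffer:
    def __init__(self, size=WINDOW):
        self.size = size
        self.samples = deque(maxlen=size)

    def push(self, ax, ay, az):
        """Add one (accX, accY, accZ) sample; return a full window or None."""
        self.samples.append((ax, ay, az))
        if len(self.samples) == self.size:
            window = list(self.samples)
            self.samples.clear()  # non-overlapping windows for simplicity
            return window
        return None

buf = WindowBuffer()
windows = []
for i in range(450):              # simulate 4.5 s of samples
    w = buf.push(0.0, 0.0, 9.8)   # placeholder readings
    if w is not None:
        windows.append(w)
print(len(windows))  # 2 full windows from 450 samples
```

A real deployment would likely use overlapping windows for lower detection latency; the non-overlapping version keeps the sketch simple.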

Optimizing Pre-processing and Post-processing​


Optimal model performance relies heavily on the effective implementation of both pre-processing and post-processing components. The pre-processing tasks might involve normalization or image resizing or conversion of audio signals to a required format. The post-processing procedure might include decoding outputs from the model and applying threshold filters to refine those results, creating bounding boxes, or developing a chatbot interface. The design of these components must ensure accuracy and reliability.
In the BrainChip anomaly classification project, the model analyzes data from the accelerometer, which records 100 Hz three-dimensional vibration through accX, accY, and accZ channels. The data was collected using Edge Impulse’s data collection feature. Spectral analysis of the accelerometer signals was performed to extract features from the time-series data during the pre-processing step. You can use this project and retrain the model, or bring your own models and optimize them for Akida IP using the Edge Impulse platform. It provides a user-friendly, no-code interface for designing ML workflows and optimizing model performance for edge devices, including BrainChip’s Akida IP.
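The spectral-analysis step can be illustrated with a minimal FFT-based feature extractor. The exact features and parameters Edge Impulse's spectral-analysis block uses differ, so treat this as a sketch of the general idea only:

```python
import numpy as np

FS = 100  # sampling rate in Hz (matches the 100 Hz accelerometer above)

def spectral_features(window):
    """window: (N, 3) array of accX/accY/accZ samples -> flat feature vector."""
    feats = []
    for axis in range(3):
        x = window[:, axis] - window[:, axis].mean()  # remove DC offset
        spec = np.abs(np.fft.rfft(x)) / len(x)        # one-sided magnitude
        freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
        feats.extend([
            np.sqrt(np.mean(x ** 2)),  # RMS energy
            freqs[np.argmax(spec)],    # dominant frequency in Hz
            spec.max(),                # peak spectral magnitude
        ])
    return np.array(feats)

# A 5 Hz vibration on the X axis should dominate that axis's spectrum.
t = np.arange(200) / FS
window = np.stack([np.sin(2 * np.pi * 5 * t),
                   np.zeros_like(t),
                   np.full_like(t, 9.8)], axis=1)
feats = spectral_features(window)
print(feats.round(3))  # X-axis RMS ~0.707, dominant frequency 5.0 Hz
```

Note how the constant 9.8 on the Z axis (gravity) contributes nothing after DC removal, which is exactly why mean subtraction comes before the FFT.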

Balancing Performance and Resource Constraints​


Models at the edge need to be smaller and faster while maintaining accuracy. Optimization methods such as quantization, knowledge distillation, and pruning improve model efficiency while sustaining accuracy. BrainChip’s Akida AI Acceleration Processor IP leverages quantization and also adds sparsity processing to realize extreme levels of energy efficiency and accuracy. It supports real-time, on-device inference at extremely low power.
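As a rough illustration of what quantization does, here is a minimal symmetric int8 post-training quantization sketch in NumPy. This is the generic technique, not BrainChip's specific scheme, and Akida's sparsity processing is not modeled here:

```python
import numpy as np

# Hedged sketch of symmetric per-tensor int8 quantization: store weights
# as int8 plus one float scale, and dequantize for comparison.
def quantize_int8(w):
    """Map float weights to int8 with a per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(64, 32)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(q.dtype, float(err))  # int8 storage; worst-case error is at most scale/2
```

The storage drops from 32 bits to 8 bits per weight, and the worst-case rounding error is bounded by half the scale, which is the basic trade-off lower-bit schemes push further.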

Building Interactive Interfaces​


Modern frameworks such as Flask, FastAPI, Gradio, and Streamlit enable developers to build interactive interfaces. Flask and FastAPI offer the flexibility and control to build custom web applications, while Gradio and Streamlit enable quick prototyping of machine learning applications with minimal code. Factors like interface complexity, deployment requirements, and customization needs influence framework selection. The effectiveness of the demo depends heavily on user experience, such as UI responsiveness and intuitive design. The rise of vibe coding and tools like Cursor and Replit has greatly accelerated prototype building and UX polish, freeing developers to focus on edge deployment and on optimizing performance where it truly matters.
For the Anomaly Classification demo, we implemented user interfaces for both Python and C++ versions to demonstrate real-time inference capabilities. For the Python implementation, we used Gradio to create a simple web-based interface that displays live accelerometer readings and classification results as the Raspberry Pi 5 processes sensor data in real-time. The C++ version features a PyQt-based desktop application that provides more advanced controls and visualizations for monitoring the vibration patterns. Both interfaces allow users to see the model's predictions instantly, making it easy to understand how the system responds to different types of mechanical movements.
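Independent of the UI framework chosen, the wiring behind such a demo is a sensor-to-model-to-UI loop. The sketch below stubs out the sensor and classifier (all names and return values are illustrative assumptions); a real app would plug in the Akida-optimized model and a Gradio or PyQt front end as the `update_ui` callback:

```python
import random

# Hedged sketch of the sensor -> model -> UI wiring behind a live demo.
# The sensor and classifier are stubs, not the real ADXL345/Akida pipeline.
LABELS = ["forward-backward", "normal", "side-side", "up-down", "tap"]

def read_sensor():
    """Stub for an ADXL345 read; returns one (accX, accY, accZ) sample."""
    return (random.uniform(-1, 1), random.uniform(-1, 1), 9.8)

def classify(window):
    """Stub classifier: returns (label, confidence) for a sample window."""
    return LABELS[1], 0.97  # a real model would run inference here

def run_demo(update_ui, n_windows=3, window_size=200):
    """Collect windows and push each prediction to the UI callback."""
    for _ in range(n_windows):
        window = [read_sensor() for _ in range(window_size)]
        label, conf = classify(window)
        update_ui(label, conf)

results = []
run_demo(lambda label, conf: results.append((label, conf)))
print(results)
```

Keeping the UI behind a callback like this is what lets the same inference loop back both a Gradio web page and a PyQt desktop app.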

Overcoming Common Challenges​


Common challenges in edge AI demo development include handling hardware constraints, performance consistency across different devices, and real-time processing capabilities. By implementing careful optimization combined with robust error handling and rigorous testing under diverse conditions, developers can overcome these challenges. By combining BrainChip's hardware acceleration with Edge Impulse's model optimization tools, the solution can show consistent performance across different deployment scenarios while maintaining the low latency required for real-time industrial monitoring.

The Future of Edge AI Demos​


As edge devices become more powerful and AI models more efficient, demos will play a crucial role in demonstrating the practical applications of these advancements. They serve as a bridge between technical innovation and real-world implementation, helping stakeholders understand and embrace the potential of edge AI technology.
If you are ready to turn your edge AI ideas into powerful, real-world demos, you can start building today with BrainChip’s Akida IP and Edge Impulse’s intuitive development platform. Whether you're prototyping an industrial monitoring solution or exploring new user interactions, the tools are here to help you accelerate development and demonstrate what is possible.

Article by:​


Dhvani Kothari is a Machine Learning Solutions Architect at BrainChip. With a background in data engineering, analytics, and applied machine learning, she has held previous roles at Walmart Global Tech and Capgemini. Dhvani has a Master of Science degree in Computer Science from the University at Buffalo and a Bachelor of Engineering in Computer Technology from Yeshwantrao Chavan College of Engineering.
 
  • Like
  • Fire
Reactions: 12 users

MDhere

Top 20
Thanks @Tothemoon24, interesting how BRN are actively promoting Edge Impulse.



A question was asked of BRN management when Qualcomm took over: what was the reason behind Edge Impulse holding off on further BRN business?
The answer was that they weren't concerned and believe it to be standard business practice while the new management makes a full assessment of all the current deals on the table. If Qualcomm does want to include BrainChip's TENNs GenAI in the future, then it's not over for BRN and Edge Impulse until the fat lady sings, as they say.
 
Last edited:
  • Like
Reactions: 6 users
Top Bottom