BRN Discussion Ongoing

Diogenese

Top 20
Many of the long term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. However, it is only in the last 5 years that Akida has been available. Before that there was Brainchip Studio (software) and subsequently Brainchip Accelerator (hardware adapted to improve the function of Studio). Akida has gone through a number of iterations since then, and the business model has changed from IP plus ASIC to IP only, and now back to IP plus a somewhat limited hardware embodiment plus "software" in the form of models and MetaTF.

One of the significant developments in Akida 1 was on-chip learning, which removes the need for full retraining when new items are added to the model, a task which otherwise requires retraining the entire model in the cloud. Cloud training is a major cost in power and energy for the major AI engines.
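To illustrate the idea, here is a toy sketch in plain Python (not BrainChip's MetaTF API, and not how Akida implements it in silicon): with prototype-style learning, adding a new class is a local update that touches nothing already learned, rather than a full cloud retrain.

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier: adding a class just stores
    one new prototype -- no retraining of the existing classes."""

    def __init__(self):
        self.prototypes = {}  # label -> feature vector

    def learn(self, label, features):
        # "One-shot" learning: a single example becomes the class prototype.
        self.prototypes[label] = np.asarray(features, dtype=float)

    def classify(self, features):
        x = np.asarray(features, dtype=float)
        # Nearest prototype by Euclidean distance wins.
        return min(self.prototypes,
                   key=lambda lbl: np.linalg.norm(self.prototypes[lbl] - x))

clf = PrototypeClassifier()
clf.learn("cat", [1.0, 0.0, 0.0])
clf.learn("dog", [0.0, 1.0, 0.0])
# Add a brand-new class on the fly -- existing prototypes are untouched.
clf.learn("bird", [0.0, 0.0, 1.0])
print(clf.classify([0.1, 0.0, 0.9]))  # -> bird
```

A cloud-trained DNN would instead need its output layer (at least) retrained over the whole dataset to accommodate the new class.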

Our tech has continued to evolve at a rapid rate under pressure of expanding market requirements. The commercial version of Akida 1 went from 1-bit weights and activations to 4-bits. Akida 1 has been incorporated in groundbreaking on-chip cybersecurity applications (QV, DoE), which is surely a precursor for Akida2/TENNs in microDoppler radar for AFRL/RTX. Akida 2 is embodied in FPGA for demonstration purposes.

Akida 2 went to 8-bits. It also introduced new features like skip, TENNs and state-space models.
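A minimal sketch of why the bit-width progression matters (generic uniform quantization, not Akida's actual scheme): the error introduced by quantizing weights falls rapidly as the representation goes from 4 to 8 to 16 bits.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of weights to the given bit width."""
    levels = 2 ** (bits - 1) - 1          # e.g. 7 for INT4, 127 for INT8
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)           # stand-in for a layer's weights

for bits in (4, 8, 16):
    err = np.mean((w - quantize(w, bits)) ** 2)
    print(f"INT{bits}: mean squared quantization error = {err:.2e}")
```

Each added bit halves the quantization step, so the mean squared error drops by roughly 4x per bit.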

The Roadmap also included Akida GenAI with INT16, also in FPGA, and Akida 3 (INT16, FP32), to be incorporated in FPGA within a year.

There is a clear trend to increased accuracy (INT16/FP32) in Akida 3. This indicates an improved ability to distinguish smaller features, although at the expense of greater power consumption. To at least partially compensate for the increased power consumption, Jonathan Tapson did mention a couple of under-wraps patent applications to improve the efficiency of data movement in memory. So I see Akida 3 as a highly specialized chip adapted for applications needing extremely high accuracy, while I think Akida2/TENNs will be the basis for our general business IP/SoC.

The fact that Akida has evolved so rapidly in such large steps is a tribute to the flexibility of its basic design.

https://brainchip.com/brainchip-sho...essing-ip-and-device-at-tinyml-summit-2020-2/

BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit 07.02.2020

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California February 12-13.

In the Poster Session, “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” representatives from BrainChip will explain to attendees how the company’s Akida™ Neuromorphic System-on-Chip processes standard vision CNNs using industry standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations to process a given CNN when compared to a DLA, as well as allowing it to perform learning directly on the chip.

BrainChip will also demonstrate “On-Chip Learning with Akida” in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which requires much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.

“We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit,” said Louis DiNardo, CEO of BrainChip. “We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented in Edge and IoT applications.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and Industrial Internet-of-Things (IoT). Akida performs neural processing on the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results, it consumes only a tiny fraction of the power resources of traditional AI processing while enabling customers to develop solutions with industry standard flows, such as Tensorflow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Tiny machine learning is broadly defined as a fast growing field of machine learning technologies and applications including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at the deep technical levels, a unique feature of the tinyML Summits. Additional information about the event is available at
https://tinymlsummit.org/
An application requiring INT16 accuracy (64K levels) would be far beyond the capability of an analog SNN.
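For scale (simple arithmetic, nothing Akida-specific): each integer bit width gives 2^bits distinct levels, and the ideal quantization SNR for a full-scale signal grows by about 6 dB per bit, which is why INT16 sits well beyond the few effective bits usually attributed to analog compute.

```python
# Discrete precision available at each integer bit width. The ideal
# quantization SNR formula (6.02*bits + 1.76 dB, full-scale sine) is
# standard; the comparison to analog effective bits is my assumption.
for bits in (4, 8, 16):
    levels = 2 ** bits
    snr_db = 6.02 * bits + 1.76
    print(f"INT{bits}: {levels:>5} levels, ~{snr_db:.1f} dB ideal SNR")
```

INT16's 65,536 levels correspond to roughly 98 dB of ideal SNR, versus about 26 dB for INT4.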
 
  • Like
  • Fire
Reactions: 11 users

manny100

Top 20
@Frangipani

Great research and logic as usual.

When we go back to the BH & Navair transition program paper and roadmap, we can see that part of the commercialisation project will be a crossover to other DoD areas incl AFRL.

Given the web interconnecting all the players, including BRN, there is always the possibility, IMO, that the card is far enough along that AFRL could currently be looking at the BH card as well.



View attachment 88803
Good pick up FMF. In the earlier paragraph BH says they intend to be an industry leader. I have no doubt they would have spruiked their 'card' to all and sundry.
No doubt RTX and Lockheed Martin have a card as well.
 
  • Like
  • Fire
Reactions: 5 users

Yoda

Regular
I can’t really explain why they’re not doing it either…

But one possible reason could be that they want to avoid situations like the one with Mercedes happening again.

Things like that attract the attention of regulatory authorities, who might take action that could further damage BrainChip’s reputation.

Extreme ups and downs are a red flag for regulators…

We all know how fast they put a speeding ticket out of the 🎩 after a nice rise! 🙄
But they didn't announce anything regarding Mercedes - Mercedes did.

They are certainly concerned about the regulators but as a director you need to do your job, you can't be so afraid of ASIC and the ASX that you don't actually fulfill your continuous disclosure obligations and director's duties properly and according to law (and that is what Fact Finder was trying to explain to them, quite correctly in my opinion, at the AGM).

Furthermore, I still can't understand why the partnerships can't at least be announced as non-price sensitive. If they are worthy enough to be announced on X and LinkedIn then they surely merit at least a non-price sensitive announcement. I don't see how a non-price sensitive announcement can be taken to be ramping the stock.
 
  • Like
  • Love
Reactions: 8 users

jrp173

Regular
But they didn't announce anything regarding Mercedes - Mercedes did. …

Yoda, totally agree with what you have said. The fact remains that BrainChip CAN make price sensitive and non price sensitive announcements around deals and partnerships, but Antonio made it crystal clear at the AGM, they don't because he doesn't want to (he actually said he did not want to get in trouble with the ASX). What a load of nonsense.

But additionally, even if BrainChip did make an ASX announcement regarding Mercedes (which we know they did not), so what if the share price skyrocketed? If that's how the market reacts to news, then that's how the market reacts to news.

It's not ramping... it's just the market's legitimate reaction to news flow! Our ridiculous Chairman needs to understand the difference between the two!
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 11 users

HopalongPetrovski

I'm Spartacus!
But they didn't announce anything regarding Mercedes - Mercedes did. …
The potential saving grace, and perhaps even the strategy, of being so tight-lipped regarding ASX announcements is that once something commercially significant is announced, years of pent-up coiled-spring momentum, and capital that has been kept in reserve for just such a run, may propel our share price in a similar manner to when we had the MB announcement.

Shortly after that I would expect the redomicile to be refloated with perhaps the incentive of a cornerstone investor or stakeholder willing to come aboard, providing we shift to a USA market. We'll all be so giddy and drunk with happiness at holding for so long and finally getting a result, we'll give them whatever they want. 🤣
 
  • Like
  • Love
  • Fire
Reactions: 18 users

TECH

Regular
Many of the long term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. …


Great post Dio.... right on point......Tech... (I 💓AKD) ..........we both agree, AKD will succeed.
 
  • Like
  • Fire
Reactions: 11 users

manny100

Top 20

BrainChip’s Runway: Longer Haul, Not Instant Failure

BrainChip’s near-term revenue lag isn’t an omen of doom but a hallmark of deep-tech ventures: cutting-edge R&D takes time to commercialize. While SynSense and Innatera captured early inference-only wins, BrainChip bet on the harder path—on-chip learning—which demands more silicon complexity, partner integration and validation before scale sales kick in.

Why It’s Likely to Succeed, Eventually

  • Strategic Pilot Programs: Military (AFRL), aerospace (Airbus, European Space Agency) and automotive partners are already evaluating Akida’s on-chip learning. Converting these pilots into production licences can unlock substantial royalties over time.
  • Unique Technological Differentiator: Incremental and one-shot learning directly on device remains rare. As edge AI markets grow, on-chip learning will command premium adoption in privacy-sensitive, low-latency systems (wearables, smart cameras, industrial sensors).
  • Strong Cash Runway and Funding: BrainChip’s recent capital raises, combined with modest burn rates relative to its R&D outlay, give it the financial cushion to refine Akida IP and expand silicon tape-outs with foundry partners (e.g., GlobalFoundries, Intel Foundry Services).
 
  • Like
  • Fire
Reactions: 9 users

7für7

Top 20
But they didn't announce anything regarding Mercedes - Mercedes did. …


Read this
 

TECH

Regular
Regarding Edward Lien moving on, Brainchip's job description does include a requirement for an evaluation after 1 year, with regard to an IP signing (1 or more) or at least solid traction shown by a potential customer.

Very happy to be corrected by the company, not a forum poster. Whether Edward decided to leave for his own personal reasons can't be disclosed for obvious privacy reasons, but we move forward. There are so many top-class Taiwanese technology brains to choose from; we just need the right individual to connect with the right entities .......... our technology is proven, it's evolving to the point of explosion .... time will eventually be our mate!!!

Thanks for your efforts, Edward..........kind regards Tech (y)
 
  • Like
  • Fire
  • Thinking
Reactions: 13 users

Diogenese

Top 20
Many of the long term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. …
I should have prefaced this discussion by saying this is all just my conjecture.

Akida 3 steps up from INT8 of Akida 2 to INT16/FP32, which means it will have a significantly larger silicon footprint (fewer chips per wafer), hence much more expensive to manufacture, so it will be going in Rolls-Royce applications (not literally (as far as I know)), where accuracy is the determining feature.

This would require a high information density sensor.

Thinking about that, DVS (Prophesee) is not a highly detailed sensor. Nor is Lidar. They are both inherently sparse. On the other hand, cameras are high density sensors. I would include microDoppler in the high density/small signal variation basket because there is a need to detect small variations in the frequency of the reflected signal. We know of one application for microDoppler radar. I can't think of an application for such high discrimination for images, but that does not exclude the possibility, eg, forged paintings?
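The micro-Doppler point can be sketched numerically (a synthetic example with assumed numbers, not AFRL/RTX data): a few-Hz modulation riding on a kHz-scale return shows up only as small sidebands around the carrier, exactly the kind of fine spectral feature that demands high numeric precision to resolve.

```python
import numpy as np

fs = 10_000                       # sample rate (Hz) -- assumed for illustration
t = np.arange(0, 2.0, 1 / fs)     # 2 s observation window
carrier = 2_000.0                 # reflected carrier frequency (Hz)
micro = 4.0                       # tiny micro-Doppler modulation (Hz)

# Return signal with a small sinusoidal phase modulation (e.g. a slowly
# flexing rotor blade), plus measurement noise.
phase = 2 * np.pi * carrier * t + 0.5 * np.sin(2 * np.pi * micro * t)
rng = np.random.default_rng(1)
signal = np.cos(phase) + 0.1 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# The carrier is the strongest line; the micro-Doppler appears as weak
# sidebands a few Hz either side of it.
peak = freqs[np.argmax(spectrum)]
near = (np.abs(freqs - peak) > 2.0) & (np.abs(freqs - peak) < 20.0)
offset = abs(freqs[near][np.argmax(spectrum[near])] - peak)
print(f"carrier {peak:.1f} Hz, sideband offset {offset:.1f} Hz")
```

The sidebands here are an order of magnitude weaker than the carrier, so quantizing the signal chain too coarsely would bury them.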

Changing tack, it recently occurred to me that DNA analysis would be right up Akida's alley.
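For what it's worth, DNA comparison really is pattern recognition at heart. A toy illustration (a k-mer profile comparison in plain Python; nothing to do with Akida's actual pipeline):

```python
from collections import Counter

def kmer_profile(seq, k=3):
    """Count overlapping k-mers -- a simple 'pattern signature' of a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def similarity(a, b, k=3):
    """Fraction of k-mers shared between two sequences (multiset overlap)."""
    pa, pb = kmer_profile(a, k), kmer_profile(b, k)
    shared = sum((pa & pb).values())
    return shared / max(sum(pa.values()), sum(pb.values()))

ref = "ACGTACGTGGT"
close = "ACGTACGAGGT"    # one substitution away from ref
far = "TTTTCCCCAAAA"     # unrelated sequence
print(similarity(ref, close), similarity(ref, far))
```

A classifier that matches these fixed-length signatures is structurally similar to the pattern-matching workloads spiking hardware targets.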

.
 
  • Like
  • Love
  • Fire
Reactions: 17 users
Regarding Edward Lien moving on, Brainchip does have a requirement in their job description of an evaluation after 1 year … Thanks for your efforts, Edward..........kind regards Tech (y)
I don't think Edward is leaving. IMO this is an additional role in the region......
 
  • Like
  • Fire
Reactions: 8 users
I should have prefaced this discussion by saying this is all just my conjecture. …
By DNA analysis do you mean like in DNA sequencers?
 
  • Like
Reactions: 1 users

Diogenese

Top 20
By DNA analysis do you mean like in DNA sequencers?
Just like in CSI, where you line up all the A, C, G, & Ts. (Above my paygrade really).

Akida does pattern recognition ...
 
  • Like
  • Haha
Reactions: 5 users
Appears this highly credentialed Professor will be discussing digital twins in a keynote presentation at the IMBSA Conference, September 24-26 2025 | Athens, Greece.

As part of the keynote:

"Finally, we will address the integration of emerging computational technologies, such as quantum accelerators (e.g., Quantum Brilliance) and neuromorphic chips (like Intel Loihi and BrainChip Akida), at the edge network to accelerate data processing and improve the responsiveness of digital twins. This talk will provide insights into how these advancements can be leveraged to develop robust, scalable, and intelligent digital twin ecosystems, driving innovation and efficiency in real-world applications."


Professor Rajiv Ranjan is an Australian-British computer scientist, of Indian origin, known for his leadership in computational intelligence. He is University Chair Professor for the AIoT research in the School of Computing of Newcastle University, United Kingdom. He is an internationally established scientist (having published about 350 scientific papers). He is a fellow of IEEE (2024), Academia Europaea (2022) and the Asia-Pacific Artificial Intelligence Association (2023).

& a lot more about the Professor in the link.

 
  • Like
  • Fire
  • Love
Reactions: 15 users

Diogenese

Top 20
Appears this highly credentialed Professor will be discussing digital twins in a keynote presentation at the:
IMBSA
Conference Program
September 24-26 2025 | Athens, Greece
...


I think it's got to the stage where it's no longer:

"Golly, Akida got mentioned in the same breath as Loihi!"

Now it's:

"Are they still talking about Loihi when Akida gets mentioned?"
 
  • Like
  • Fire
  • Love
Reactions: 22 users
Yoda, totally agree with what you have said. The fact remains that BrainChip CAN make price sensitive and non price sensitive announcements around deals and partnerships, but Antonio made it crystal clear at the AGM, they don't because he doesn't want to (he actually said he did not want to get in trouble with the ASX). What a load of nonsense.
Antonio probably thinks he already does enough work without making any ASX announcements as well 😂
 
  • Like
Reactions: 5 users

Frangipani

Top 20
Would appear our relationship with the University of WA (UWA) in Perth is starting to get some output.

I saw this repository a little while ago and there wasn't much in it at that point, so I just kept an eye on it. It was updated yesterday with more details and results.

Looks pretty decent. The link will give you a much clearer read obviously as the scroll pics are off my phone.




View attachment 88594 View attachment 88595 View attachment 88596

Hi @Fullmoonfever,

I found the following paper about skin lesion classification on edge devices that goes along with the GitHub account you discovered the other day - it was published on www.arxiv.org on 21 July and is called “Quantization-Aware Neuromorphic Architecture for Efficient Skin Disease Classification on Resource-Constrained Devices”.

The paper’s co-authors are

Haitian Wang∗†, Xinyu Wang†, Yiren Wang†, Karen Lee†, Zichen Geng†, Xian Zhang†, Kehkashan Kiran∗, Yu Zhang∗†, Bo Miao‡
∗Northwestern Polytechnical University, Xi’an, Shaanxi 710129, China
†The University of Western Australia, Perth, WA 6009, Australia ‡Australian Institute for Machine Learning, University of Adelaide, SA 5005, Australia


V. CONCLUSION
In this paper, we proposed QANA, a quantization-aware neuromorphic framework for skin lesion classification on edge devices. Extensive experiments on the large-scale HAM10000 benchmark and a real-world clinical dataset show that QANA achieves state-of-the-art accuracy (91.6% Top-1, 82.4% macro F1 on HAM10000; 90.8%/81.7% on the clinical set) while enabling real-time and energy-efficient inference on the BrainChip Akida platform (1.5 ms latency, 1.7 mJ per image). These results demonstrate that QANA is highly effective for portable medical analysis and AI deployment in dermatology under limited computing resources.

VI. ACKNOWLEDGEMENT
This research was supported by the National Natural Science Foundation of China (Nos. 62172336 and 62032018). The authors gratefully acknowledge BrainChip Holdings Ltd. for providing technical support and the Akida AKD1000 hardware platform, whose powerful neuromorphic computing capabilities enabled strong performance of the SNN model. The authors also extend their appreciation to Dr. Atif Mansoor, Dr. Bo Miao and their teams for their preliminary contributions to this research.”





 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 17 users

manny100

Top 20
Hi @Fullmoonfever,

I found the following paper about skin lesion classification on edge devices that goes along with the GitHub account you discovered the other day - it was published on www.arxiv.org on 21 July and is called “Quantization-Aware Neuromorphic Architecture for Efficient Skin Disease Classification on Resource-Constrained Devices”.
...
Thanks Frangipani. That skin 'problem' classification portable device looks like a winner for AKIDA.
Trial-and-error diagnosis by doctors, trying out different meds over time to clear up skin issues, is a nuisance.
 
  • Like
  • Fire
Reactions: 8 users

Frangipani

Top 20
Hi @Fullmoonfever,

I found the following paper about skin lesion classification on edge devices that goes along with the GitHub account you discovered the other day - it was published on www.arxiv.org on 21 July and is called “Quantization-Aware Neuromorphic Architecture for Efficient Skin Disease Classification on Resource-Constrained Devices”.
...

A Medium article inspired by the above paper:



Your Phone Could Soon Diagnose Skin Cancer — Thanks to a New Kind of AI​

Bradley Susser
10 min read

Consider receiving a highly accurate skin diagnosis instantly, right on a portable device, no matter your location, and with complete privacy. A recent study introduces QANA (Quantization-Aware Neuromorphic Architecture), an AI system built to achieve this, addressing key difficulties in obtaining dermatological care.

The Healthcare Access Crisis: By the Numbers​

The dermatological care shortage represents one of healthcare’s most pressing areas of concern. According to the U.S. Department of Agriculture, rural communities comprise nearly 66% of primary care health professional shortage areas (HPSAs) in the country. This is due in large part to the fact that urban areas have 40 times the concentration of dermatologists per 100,000 citizens compared to rural areas. In Canada, there is a maldistribution of dermatologists, with as many as 5.6 dermatologists per 100,000 population in urban areas and as low as 0.6 per 100,000 in rural areas.

The Association of American Medical Colleges (AAMC) in its June 2021 study projected a shortage of up to 124,000 physicians by 2034. This includes a shortage of up to approximately 48,000 primary care physicians, but an even more severe shortage of up to 78,000 specialists.

Beyond geographic scarcity, access to timely and accurate skin cancer diagnosis also reveals racial disparities. While non-Hispanic Black individuals have lower incidence rates of melanoma than non-Hispanic White individuals (for instance, the lifetime risk of developing melanoma is approximately 1 in 1,000 for Black people compared to 1 in 38 for White people), Black patients are frequently diagnosed at a later, more advanced stage of the disease. This leads to appreciably poorer survival rates. For example, research indicates that Black melanoma patients have an estimated five-year survival rate of 70%, contrasted with 94% for White patients. Furthermore, studies show that 52% of non-Hispanic Black patients receive an initial diagnosis of advanced-stage melanoma, versus just 16% of non-Hispanic White patients. Such disparities are connected to elements like reduced public awareness of skin cancer risks in some communities and a lower index of suspicion among healthcare providers for skin cancer in individuals with darker skin tones, partly due to limited representation of diverse skin types in medical education materials.

A Real-World Scenario: When Distance Matters (Hypothetical Case Study)​

Consider Samantha Fukes, a nurse practitioner working at a rural health clinic in Montana, 150 miles from the nearest dermatologist. When 68-year-old rancher Tom Miller arrives with a suspicious dark lesion on his neck that has changed shape over recent months, Samantha faces a familiar dilemma. Her choices were clear: she could direct Tom to drive three hours to see a specialist, or upload images to a cloud-based diagnostic system that might take days to return results. Both paths posed considerable impediments.

With QANA-powered portable diagnostic technology, Samantha could capture a high-quality dermatoscopic image and receive an accurate diagnosis within 1.5 milliseconds, all while keeping Tom’s medical data securely on the device. This hypothetical scenario illustrates the transformative potential of edge-based AI diagnosis in addressing healthcare access disparities.

The Clinical Need for Portable Diagnostics​

Skin diseases represent a notable diagnostic burden, particularly in settings where specialized dermatological expertise is unavailable. Conditions such as melanoma and Merkel cell carcinoma frequently face misdiagnosis when evaluated by clinicians without specialized training. While deep learning systems have demonstrated strong performance in dermatological diagnosis, conventional approaches require sensitive patient data to be transmitted to cloud servers for processing, raising substantial privacy concerns under regulations like HIPAA and GDPR.

The demand for portable diagnostic solutions extends beyond privacy considerations. Healthcare delivery in remote and home settings often lacks the infrastructure necessary for cloud-based analysis, creating a critical need for on-device diagnostic capabilities that can operate independently of internet connectivity.

Cloud vs. Edge: A Technology Comparison​

When considering Artificial Intelligence (AI) solutions, two primary architectural approaches emerge: Cloud-Based AI and Edge-Based AI. Each offers distinct advantages and disadvantages across various features, making them suitable for different applications.

Data Privacy and Security​

Cloud-Based AI typically involves uploading data to remote servers for processing. This introduces a moderate level of data privacy, as information must traverse networks to reach the cloud. Consequently, it also presents transmission vulnerabilities, increasing the data security risk during transit.

In contrast, Edge-Based AI, such as QANA, prioritizes high data privacy because processing occurs directly on the device (“on-device”). This “contained on device” approach significantly reduces data security risks as information does not leave the local environment, eliminating transmission vulnerabilities.

Processing Speed and Latency​

The processing speed of Cloud-Based AI is variable and heavily dependent on network connectivity and bandwidth. This variability can lead to latency ranging from seconds to even minutes, as data travels to and from the cloud.

Conversely, Edge-Based AI boasts ultra-fast processing speeds, often achieving results in as little as 1.5 milliseconds locally. This leads to consistent latency measured in milliseconds, making it ideal for applications requiring real-time responses.

Energy Consumption and Infrastructure​

Cloud-Based AI typically incurs high energy consumption due to the reliance on powerful GPUs and servers in data centers. It also has a high infrastructure dependency, requiring robust internet connectivity and extensive cloud infrastructure to operate.

On the other hand, Edge-Based AI is designed for very low energy consumption, with some systems consuming as little as 1.7 millijoules per image. Its minimal infrastructure dependency means it can function effectively with little to no external network connectivity.

Rural/Remote Usability​

The reliance on internet connectivity for Cloud-Based AI means its rural/remote usability is limited. In areas with poor or no internet access, cloud-based solutions become impractical.

Conversely, Edge-Based AI offers high usability in rural and remote areas because it is fully offline. This makes it a robust solution for environments where consistent internet access cannot be guaranteed.

Technical Innovation in Neural Architecture​

QANA addresses these needs using a thorough four-stage process that includes data preparation, neural network creation, spike-based conversion, and hardware implementation. The architecture integrates several novel elements that set it apart from widely used methods.

The system begins with sophisticated data preprocessing that includes quality filtering, augmentation techniques, and Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance common in medical datasets. This preprocessing stage ensures steadfast performance across diverse skin lesion types, including rare conditions that are typically underrepresented in training data.
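For readers unfamiliar with SMOTE, the core idea fits in a few lines: synthesize new minority-class samples by interpolating between a real sample and one of its nearest minority-class neighbours. The sketch below is a minimal pure-Python illustration with made-up names and toy data, not the paper's preprocessing code (which presumably uses a library implementation):

```python
import random

def smote_sample(minority, k=2, n_new=4, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment
    between a real minority sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest minority neighbours of `base`, by squared distance
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(base, p)),
        )[:k]
        neigh = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + t * (b - a) for a, b in zip(base, neigh)))
    return synthetic

minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
new_points = smote_sample(minority)  # 4 synthetic samples between real ones
```

Because every synthetic point is an interpolation, it stays inside the region the real minority samples span, which is why SMOTE tends to balance classes without inventing implausible outliers.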

The core neural architecture features a hierarchical design. It uses Ghost modules to improve its computational efficiency. This essentially allows the system to learn more from less data, much like finding extra insights without needing extra effort. It also includes Spatially-Aware Efficient Channel Attention (SA-ECA) mechanisms and Squeeze-and-Excitation blocks. This combination enables the system to extract the most useful distinguishing features from images while remaining compatible with the specialized limits of neuromorphic hardware.

These Ghost modules form the basis for feature extraction. They employ both base and ghost convolutions to widen the range of data interpretation while keeping computational demands low. Similarly, the SA-ECA mechanism helps the system focus its attention. It captures connections between different data channels through lightweight processing, allowing it to model how information is related across space and channels with very little computational expense.
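A back-of-envelope comparison shows why cheap convolutions of this family save so much: a standard 3x3 convolution versus a depthwise-separable one (a related trick from the same efficiency family; the counts below are generic illustrations, not the paper's exact Ghost-module figures):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise conv."""
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)       # 73728 parameters
sep = separable_params(64, 128, 3)  # 576 + 8192 = 8768 parameters
ratio = std / sep                   # roughly 8.4x fewer parameters
```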

Neuromorphic Computing Advantages​

Moving to Spiking Neural Networks (SNNs) signifies a core alteration from standard neural processing. In contrast to standard neural networks that process continuous values, SNNs utilize discrete spike events for information encoding and transmission. This method yields sparse, event-driven computation that greatly lessens energy use while preserving diagnostic accuracy.
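The spike-based encoding can be illustrated with simple rate coding, where a normalized intensity becomes the probability of a spike at each timestep. This is an illustrative sketch only - the quoted text does not describe Akida's actual encoding scheme:

```python
import random

def rate_encode(intensity, timesteps=1000, seed=0):
    """Rate coding sketch: a normalized intensity in [0, 1] becomes the
    per-timestep probability of emitting a spike (a 1 in the train)."""
    rng = random.Random(seed)
    return [1 if rng.random() < intensity else 0 for _ in range(timesteps)]

train = rate_encode(0.8)
firing_rate = sum(train) / len(train)  # close to the input intensity, 0.8
# Only the 1s carry information - downstream compute can skip the 0s,
# which is where the sparse, event-driven energy saving comes from.
```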

The deployment platform, BrainChip’s Akida (ASX:BRN;OTC:BRCHF) neuromorphic processor, natively supports SNN operations and enables on-chip incremental learning. This capability allows the system to adapt to new patient data without requiring complete model retraining, aligning with clinical workflows that continuously encounter new diagnostic cases.

Spike-Compatible Architecture Design​

The architecture incorporates specific design elements that facilitate seamless conversion from standard neural networks to spike-based processing. The spike-compatible feature transformation module employs separable convolutions with quantization-aware normalization, ensuring all activations remain within bounds suitable for spike encoding while preserving diagnostic information.
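The quantization-aware idea mentioned here can be sketched as "fake quantization": clamp an activation into the representable range, snap it to one of the discrete levels, and return the grid value, so training sees exactly the values the quantized hardware can represent. A generic illustration, not the paper's code:

```python
def fake_quantize(x, bits=4, x_max=1.0):
    """Quantize-then-dequantize an activation to 2**bits levels in
    [0, x_max], so training sees values the hardware can represent."""
    levels = 2 ** bits - 1
    x = min(max(x, 0.0), x_max)    # clamp into the representable range
    q = round(x / x_max * levels)  # integer code in 0..levels
    return q * x_max / levels      # snap back to the quantization grid

fake_quantize(1.2)   # out-of-range activations clamp to 1.0
fake_quantize(-0.3)  # negative activations clamp to 0.0
```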

The Squeeze-and-Excitation blocks implement adaptive channel weighting through a two-stage bottleneck mechanism, providing additional regularization particularly beneficial for small, imbalanced medical datasets. The quantized output projection produces SNN-ready outputs that can be directly processed by neuromorphic hardware without additional conversion steps.

Comprehensive Performance Validation​

Experimental validation was conducted on both the HAM10000 public benchmark dataset and a real-world clinical dataset from Hospital Sultanah Bahiyah, Malaysia. The clinical dataset comprised 3,162 dermatoscopic images from 1,235 patients, providing real-world validation beyond standard benchmarks.

On HAM10000, QANA achieved 91.6% Top-1 accuracy and 82.4% macro F1 score, maintaining comparable performance on the clinical dataset with 90.8% accuracy and 81.7% macro F1 score. These results demonstrate consistent performance across both standardized benchmarks and clinical practice conditions.

The system showed balanced performance across all seven diagnostic categories, including minority classes such as dermatofibroma and vascular lesions. Notably, melanoma detection achieved 95.6% precision and 93.3% recall, critical metrics for this potentially life-threatening condition.

Hardware Performance and Energy Analysis​

When deployed on Akida hardware, the system delivers 1.5 millisecond inference latency and consumes only 1.7 millijoules of energy per image. These figures represent reductions of over 94.6% in inference latency and 98.6% in energy consumption compared to GPU-based CNN implementations.
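Those percentages imply GPU baselines that are easy to back out with simple arithmetic on the quoted numbers, assuming the reductions are measured relative to that baseline:

```python
# Backing out the GPU baselines implied by the quoted reductions.
akida_latency_ms = 1.5
akida_energy_mj = 1.7
latency_reduction = 0.946  # "over 94.6%"
energy_reduction = 0.986   # "98.6%"

gpu_latency_ms = akida_latency_ms / (1 - latency_reduction)  # about 28 ms
gpu_energy_mj = akida_energy_mj / (1 - energy_reduction)     # about 121 mJ
```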

Comparative analysis against state-of-the-art architectures converted to SNNs showed QANA’s superior performance across all metrics. While conventional architectures experienced accuracy drops of 3–7% after SNN conversion, QANA maintained high accuracy through its quantization-aware design principles.

Ablation Analysis and Component Contributions​

Systematic ablation studies revealed the contribution of each architectural component. The Ghost blocks provided computational efficiency, while ECA and SE modules contributed significantly to feature discrimination. The quantization-aware head and SMOTE preprocessing each added measurable performance improvements, with the complete system achieving optimal results through their synergistic combination.

Incremental learning capabilities demonstrated the system’s ability to adapt to new data without catastrophic forgetting, a critical requirement for clinical deployment where diagnostic patterns may evolve over time.

Clinical Implementation Considerations​

The architecture’s design specifically addresses practical concerns in medical device deployment. The system operates effectively on 64×64 pixel images, accommodating current neuromorphic hardware memory constraints while maintaining sufficient resolution for diagnostic purposes. This resolution requirement aligns with practical constraints of portable diagnostic devices.

The quantization-aware design ensures minimal accuracy degradation during conversion to neuromorphic hardware, addressing a common issue where conventional architectures suffer significant performance loss when adapted for spike-based processing.

Expert Perspectives on Neuromorphic Healthcare Applications​

The potential of neuromorphic computing in healthcare is gaining recognition among industry experts. Neuromorphic chips have the ability to outpace traditional computers in energy and space efficiency as well as performance, presenting substantial advantages across various domains, including artificial intelligence, health care and robotics, according to the scientists.

Over the short term, neuromorphic computing will likely be focused on adding AI capabilities to specialty edge devices in healthcare and defense applications, notes industry analysis. In healthcare, it enables real-time disease diagnosis, personalized drug discovery, and intelligent prosthetics through its ability to analyze large datasets and detect patterns.

Neuromorphic engineering, which utilizes neural models in hardware and software to mimic brain-like functions, offers the potential to transform medicine by providing energy-efficient, fast, compact, and high-performance solutions. The development of neural interfaces and brain-machine interfaces represents a growing area of medical application.

Implications for Healthcare Delivery​

The development of QANA represents a considerable advancement toward democratizing access to dermatological diagnosis. By enabling accurate skin lesion classification on portable devices without requiring cloud connectivity, the system can support healthcare delivery in underserved regions and home-based care settings.

The privacy-preserving nature of the approach addresses growing concerns about medical data security while maintaining the diagnostic accuracy necessary for clinical decision-making. The low energy requirements and compact hardware footprint make the system suitable for integration into handheld diagnostic devices that can operate for extended periods without external power sources.

Beyond Dermatology​

Potential Applications Across Medical Specialties​

While the current work focuses on skin lesion classification, the architectural principles developed in QANA may be applicable to other medical imaging domains where edge deployment is desirable. The combination of quantization-aware design and neuromorphic deployment strategies provides a template for developing similar systems across various medical specialties.

Ophthalmology: Portable retinal screening devices could use QANA’s architecture to detect diabetic retinopathy, glaucoma, and age-related macular degeneration in remote clinics and developing countries where ophthalmologists are scarce.

Oral Health: Dental screening applications could identify early-stage oral cancers, periodontal disease, and other oral pathologies using smartphone-based imaging systems powered by neuromorphic processors.

Cancer Screening: The architecture could be adapted for various cancer screening applications, from cervical cancer detection using colposcopy images to lung nodule identification in portable X-ray systems.

Wound Assessment: Chronic wound monitoring in home healthcare settings could benefit from QANA-style systems that track healing progress and identify signs of infection without requiring specialist visits.

Market Adoption Challenges and Opportunities​

The transition to neuromorphic-powered medical devices faces several considerations. Regulatory approval processes will need to adapt to evaluate AI systems that learn and evolve on-device. Healthcare providers will require training on new diagnostic workflows that integrate edge-based AI recommendations with clinical judgment.

However, the potential benefits are substantial. Reduced healthcare costs, improved access to specialist-level diagnostics, and enhanced patient privacy could drive rapid adoption once initial barriers are overcome.

Broader Applications and Versatility​

The demonstrated success in handling class-imbalanced medical datasets through SMOTE integration and attention mechanisms suggests potential applications in other diagnostic areas where rare conditions require accurate detection with limited training data.

The study establishes a foundation for further research into privacy-preserving, energy-aware medical diagnostics that can operate effectively in resource-constrained environments while maintaining the accuracy standards necessary for clinical use. As neuromorphic hardware continues to mature, such approaches may become increasingly important for enabling distributed healthcare technologies that bring specialized diagnostic capabilities directly to patients.

The melding of neuromorphic computing and medical AI points toward truly portable, privacy-assuring diagnostic systems that could markedly alter healthcare provision in underserved communities globally.

Citation: Wang, H., Wang, X., Wang, Y., Lee, K., Geng, Z., Zhang, X., Kiran, K., Zhang, Y., & Miao, B. (2025). Quantization-Aware Neuromorphic Architecture for Efficient Skin Disease Classification on Resource-Constrained Devices. arXiv preprint arXiv:2507.15958. https://doi.org/10.48550/arXiv.2507.15958

#AICancerDiagnosis, #DermatologyTech, #EdgeAI, #HealthcareAccess, #MedicalAI, #MobileHealth, #NeuromorphicComputing, #PrivacyInHealthcare, #RuralHealthcare, #SkinHealthAI

Healthcare
Cancer
Technology
Medical
AI





 
  • Like
  • Love
  • Fire
Reactions: 17 users

MDhere

Top 20
But they didn't announce anything regarding Mercedes - Mercedes did.

They are certainly concerned about the regulators but as a director you need to do your job, you can't be so afraid of ASIC and the ASX that you don't actually fulfill your continuous disclosure obligations and director's duties properly and according to law (and that is what Fact Finder was trying to explain to them, quite correctly in my opinion, at the AGM).

Furthermore, I still can't understand why the partnerships can't at least be announced as non-price sensitive. If they are worthy enough to be announced on X and LinkedIn then they surely merit at least a non-price sensitive announcement. I don't see how a non-price sensitive announcement can be taken to be ramping the stock.
Hey Yoda
Yep, Mercedes kindly announced it, and I agree BRN could have added a non-price-sensitive one. Bit like mining companies announcing 'we are drilling hole no. 182' (and nothing comes of that hole).

Speaking of no. 182, I found it amusing that a Mercedes was parked out the front of our BrainChip headquarters in Perth in this photo, taken just over a year ago :)

Yes yes, PURE COÏNCIDENCE!...... 🤣

Screenshot_20250724-075151_Gallery.jpg
 
  • Like
  • Fire
Reactions: 5 users
Top Bottom