BRN Discussion Ongoing

Evening Tothemoon24 ,

Loose connection dating back to 15th March 2023 ,

Relating to a company announcement on a new partner , IPSOLON , they also let slip Toshiba , Cisco & Linaro , then quickly deleted the last three.

Someone may have captured said announcement , before it was edited.

View attachment 89399

Regards,
Esq.
This would be the screenshot mate, care of @stuart888


 
  • Like
  • Fire
  • Love
Reactions: 11 users

Diogenese

Top 20
No wonder ASX is dirty on tech (no relation):

https://www.abc.net.au/news/2025-08...boe-as-tpg-mishap-adds-10-year-woes/105623428
...

In 2015, the ASX began scouting for a replacement to the ageing technology it used to settle trades on the exchange.

Two years later, it created global headlines. In a market abuzz with talk of cryptocurrency and its open source ledger system, the ASX announced it would build the world's first industrial scale blockchain for financial services applications.

The timeline was always ambitious. It was supposed to be online by 2020.

But the project became ever-more complex as fights developed between various information providers about how they would interact with the new system.

Shares do not simply change hands between buyers and sellers — there are share registries, custodians and a host of other players, many of whom became concerned the new system would steal their business.

By the time the fifth delay to the rollout time was announced, it was obvious the project was on the rocks. At the end of 2022, it was canned, forcing the ASX to announce a $250 million write-off.

Brokers and investment houses had spent vast amounts too, replacing their systems to integrate with the blockchain dream that ultimately turned into a blocked drain.

Dominic Stevens, the ASX chief executive who commissioned the project, had left at the start of the year, leaving then chair Damian Roche to clean up the mess and to appoint Accenture to independently review what had gone wrong.

"On behalf of ASX, I apologise for the disruption experienced in relation to the CHESS replacement project over a number of years," he said at the time.


The original story was about letting a Chicago mob set up a rival stock exchange to break the ASX monopoly after a monumental cock up ...

" ASX faces losing virtual monopoly as TPG bungle adds to a decade of woes "

On Wednesday, the ASX confused a listed company with a similarly named, foreign-owned private equity group that was engaged in a huge takeover.

The mistake resulted in TPG Telecom shares plummeting 5 per cent, wiping $400 million from its market value, even though it had nothing to do with the $645 million takeover of automotive software group Infomedia.

If the original mix-up was bad, the inability of the ASX to rectify the situation turned it into a debacle, as traders pounded TPG Telecom's stock for hours.

And it's unlikely to be the last the operator hears from TPG, with the telco understood to be considering its legal options.



So maybe we are moving to the Chicago exchange ... ?
 
  • Like
  • Wow
  • Fire
Reactions: 7 users

dippY22

Regular
What exactly do you regard as the catalyst for increasing volume?
Ah,....Hop. Nice try.

Growing interest. It will be growing interest in Brainchip that increases volume.

Actually I have no idea what the catalyst will be. I just know the volume will be huge.
 
  • Like
Reactions: 2 users

CHIPS

Regular
Thanks mate. Still can't read it but I do need new glasses. Missus tried too but could only get half of it. Will keep an eye on Booz Allen. Interesting stuff.


SC

Even with better glasses, you won't be able to read it, I guess. I can't read all of it either. I could not get a better picture.
 
  • Like
Reactions: 1 users

7für7

Top 20
Thanks mate. Still can't read it but I do need new glasses. Missus tried too but could only get half of it. Will keep an eye on Booz Allen. Interesting stuff.


SC
ChatGPT was able to read it



TOP TITLE:

C2BMC

Command and Control, Battle Management, and Communications

Subtitle (below C2BMC):

The Command and Control, Battle Management, and Communications (C2BMC) program is the hub of the layered Missile Defense System. It is a vital operational system that enables the President, Secretary of Defense and Combatant Commanders at strategic, regional and operational levels to systematically plan missile defense operations, to collectively see the battle develop, and to dynamically manage shielded networked sensors and weapons systems to achieve global and regional mission objectives.


TOP COMMAND STRIP (color-coded):
  • NMCC
  • USSPACECOM
  • USNORTHCOM
  • USINDOPACOM
  • USEUCOM
  • USCENTCOM

MAIN TITLE (center of the image):

THE SYSTEM OF ELEMENTS


DEFENSE SEGMENTS (left to right):

BOOST

Defense Segment
(Shows missiles launching)


ASCENT / MIDCOURSE

Defense Segment
(Depicts missile intercept systems)
  • GBI
    Ground-Based Interceptor
  • SM-3 IIA
    Standard Missile
  • SM-3 IA/IB
    Standard Missile
  • AEGIS
    SHIP & ASHORE
    Ballistic Missile Defense

TERMINAL

Defense Segment
  • THAAD
    Terminal High Altitude Area Defense
  • SM-6
    Standard Missile
  • AEGIS
    (Standard Missile)
  • PAC-3
    Patriot Advanced Capability
BOTTOM SECTION: SENSORS


Title:
SENSORS

Descriptive Text:

An effective layered defense relies on timely sensor information provided by space and ground-based assets. Sensors provide critical tracking and discrimination data to help warfighters see the threat, track it and engage it effectively.

Sensor Types (with icons):

  • SATELLITE SURVEILLANCE SYSTEMS
  • UPGRADED EARLY WARNING RADARS
  • FORWARD-BASED RADARS
  • SPY RADARS
  • DISCRIMINATING RADARS
 
  • Like
  • Love
Reactions: 4 users
Thesis just out of KTH.

Haven't downloaded to read it, but just the abstract was enough for me as confirmation of just how our AKD1000 First Gen stacked up and clearly showed its "edge" in the edge space over an NVIDIA GPU.

Think it's a given that as complexity increases the processing power shifts; however, the power usage is still an advantage.

Be interesting to see how it goes when they focus on fully customised models.

Would also be great to see what step up AKD1500 / Gen 2 / TENNs etc is capable of in a comparison given this was just Gen 1.




Comparison of Akida Neuromorphic Processor and NVIDIA Graphics Processor Unit for Spiking Neural Networks

Chemnitz, Carl​

KTH, School of Electrical Engineering and Computer Science (EECS).

Ermis, Malik​

KTH, School of Electrical Engineering and Computer Science (EECS).

2025 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis. Alternative title:
Jämförelse av neuromorfisk processor Akida och NVIDIA grafikkort för Spiking Neural Networks (Swedish)

Abstract [en]​


This thesis investigates the latency, throughput and energy efficiency of the BrainChip Akida AKD1000 neuromorphic processor compared to a NVIDIA GeForce GTX 1080 when running two different spiking neural network models on both hardwares. Spiking neural networks is a subset of neural networks that are specialized for neuromorphic processor. The first model is a simple image classification model (GXNOR on MNIST), and the second is a more complex object detection model (YOLOv2 on Pascal VOC). The models were trained and quantized to 2-bit and 4-bit weight precision, respectively, enabling spiking execution both on Akida AKD1000 and on GTX 1080, for the GPU CUDA was used. Results show that Akida achieved significant reductions in energy consumption and clock cycles for both models, consistent with prior findings within the field. Specifically, for the simple classification model the AKD1000 achieved 99.5 % energy reduction with 76.7 % faster inference times, despite having a clock rate 91.5 % slower than the GPU. However, for the more complex object detection model, the Akida took 118.1 % longer per inference, while reducing the energy expenditure by 96.0 %. For the MNIST model the AKD1000 showed no correlation in both cycles & time and cycle & energy. While for the YOLOv2 model it had a 0.2 correlation for both previous mentioned ratios. Suggesting that as model complexity increases, the Akida’s behaviour converges toward the GPU’s linear correlation patterns. In conclusion, the AKD1000 processor demonstrates clear advantages for low-power, edge-oriented applications where latency and efficiency are critical. However, these benefits diminish with increasing model complexity, where GPUs maintain superior scalability and performance. Due to limited documentation of the chosen models, a 1-to-1 comparison was not possible. Future work should focus on fully customized models to further explore the dynamics.
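For anyone wanting to sanity-check the abstract's headline numbers, here's a quick back-of-envelope in Python. Only the percentages come from the abstract; the GPU baseline figures are made-up placeholders purely to show the arithmetic:

```python
# Sanity-check of the relative figures quoted in the KTH abstract.
# The absolute GPU baselines below are hypothetical placeholders;
# only the percentage reductions/slowdowns come from the abstract.

def reduction_pct(baseline, value):
    """Percentage reduction of `value` relative to `baseline`."""
    return (baseline - value) / baseline * 100

# Hypothetical GTX 1080 baselines for the MNIST (GXNOR) model:
gpu_energy_mj = 100.0   # placeholder
gpu_time_ms = 10.0      # placeholder

# What the abstract's percentages imply for the AKD1000 on MNIST:
akida_energy_mj = gpu_energy_mj * (1 - 0.995)   # 99.5 % energy reduction
akida_time_ms = gpu_time_ms * (1 - 0.767)       # 76.7 % faster inference

print(f"Akida energy: {akida_energy_mj:.2f} mJ "
      f"({reduction_pct(gpu_energy_mj, akida_energy_mj):.1f} % lower)")
print(f"Akida latency: {akida_time_ms:.2f} ms "
      f"({reduction_pct(gpu_time_ms, akida_time_ms):.1f} % faster)")

# YOLOv2 case: 118.1 % longer per inference, but 96.0 % less energy
akida_yolo_time_ms = gpu_time_ms * (1 + 1.181)
akida_yolo_energy_mj = gpu_energy_mj * (1 - 0.960)
print(f"YOLOv2 on Akida: {akida_yolo_time_ms:.2f} ms, "
      f"{akida_yolo_energy_mj:.2f} mJ")
```

The YOLOv2 numbers make the abstract's trade-off concrete: more than double the latency, but a roughly 25x cut in energy per inference.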
 
  • Like
  • Love
  • Fire
Reactions: 24 users

manny100

Top 20
Thesis just out of KTH.

Haven't downloaded to read it, but just the abstract was enough for me as confirmation of just how our AKD1000 First Gen stacked up and clearly showed its "edge" in the edge space over an NVIDIA GPU.

Think it's a given that as complexity increases the processing power shifts; however, the power usage is still an advantage.

Be interesting to see how it goes when they focus on fully customised models.

Would also be great to see what step up AKD1500 / Gen 2 / TENNs etc is capable of in a comparison given this was just Gen 1.




Comparison of Akida Neuromorphic Processor and NVIDIA Graphics Processor Unit for Spiking Neural Networks

Chemnitz, Carl​

KTH, School of Electrical Engineering and Computer Science (EECS).

Ermis, Malik​

KTH, School of Electrical Engineering and Computer Science (EECS).

2025 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis. Alternative title:
Jämförelse av neuromorfisk processor Akida och NVIDIA grafikkort för Spiking Neural Networks (Swedish)

Abstract [en]​


This thesis investigates the latency, throughput and energy efficiency of the BrainChip Akida AKD1000 neuromorphic processor compared to a NVIDIA GeForce GTX 1080 when running two different spiking neural network models on both hardwares. Spiking neural networks is a subset of neural networks that are specialized for neuromorphic processor. The first model is a simple image classification model (GXNOR on MNIST), and the second is a more complex object detection model (YOLOv2 on Pascal VOC). The models were trained and quantized to 2-bit and 4-bit weight precision, respectively, enabling spiking execution both on Akida AKD1000 and on GTX 1080, for the GPU CUDA was used. Results show that Akida achieved significant reductions in energy consumption and clock cycles for both models, consistent with prior findings within the field. Specifically, for the simple classification model the AKD1000 achieved 99.5 % energy reduction with 76.7 % faster inference times, despite having a clock rate 91.5 % slower than the GPU. However, for the more complex object detection model, the Akida took 118.1 % longer per inference, while reducing the energy expenditure by 96.0 %. For the MNIST model the AKD1000 showed no correlation in both cycles & time and cycle & energy. While for the YOLOv2 model it had a 0.2 correlation for both previous mentioned ratios. Suggesting that as model complexity increases, the Akida’s behaviour converges toward the GPU’s linear correlation patterns. In conclusion, the AKD1000 processor demonstrates clear advantages for low-power, edge-oriented applications where latency and efficiency are critical. However, these benefits diminish with increasing model complexity, where GPUs maintain superior scalability and performance. Due to limited documentation of the chosen models, a 1-to-1 comparison was not possible. Future work should focus on fully customized models to further explore the dynamics.
I think AKIDA 1500 is a big step up from the 1000.
One thing that we learnt from taping out the AKIDA 1000 chip was that minor improvements can be made at little cost to the original tape-out, but major adjustments like the AKIDA 1500 have to be completely 'retaped' from scratch, which is expensive.
The fast pace of change weighed against taping out costs was likely at least a small factor in not taping out Gen2.
AKIDA 1000 and 1500 gets the chips 'out there'.
I am surprised we have not heard more about engagements utilising the 1500.
Bascom Hunter purchased $100k worth.
8 bit processing, improved efficiency and the VIT of the 1500 are a big step up IMO.
 
  • Like
  • Fire
Reactions: 10 users

IloveLamp

Top 20

1000009762.jpg
 
  • Like
  • Wow
  • Thinking
Reactions: 16 users

manny100

Top 20
Shorts are up significantly from a low of 1.42% on May 1st 2025 to 3.05% on 1st August with a steep slope up.
Explains the huge increase in downramper activity on the crapper and the re-appearance of a downramper or 2 here recently.
Eventual short covering will see a coiled spring release push up.
Traders adding to the downramping to get in at a lower price to make the most of the eventual spike.
Likely expected pressure brought by LDA sales has shorters and traders interested. So far the SP has held around 20 ish.
We just have to ride this out. It's no issue concerning our progress or tech.
A little more good news this summer will help, eg progress on Gen 3 , Gen AI, quality partners added.
 
Last edited:
  • Like
Reactions: 12 users

IloveLamp

Top 20

View attachment 89403
Interesting the amount of BRN partners being absorbed by the big end of town.........off the top of my head -

Qualcomm $ edge impulse
Indie $ Emotion3d
AMD $ Xilinx (admittedly an older one)
Nvidia $ ARM (Unsuccessfully)

And I'm sure there are more i haven't mentioned
 
  • Like
  • Fire
  • Wow
Reactions: 14 users

Mccabe84

Regular
Interesting the amount of BRN partners being absorbed by the big end of town.........off the top of my head -

Qualcomm $ edge impulse
Indie $ Emotion3d
AMD $ Xilinx (admittedly an older one)
Nvidia $ ARM (Unsuccessfully)

And I'm sure there are more i haven't mentioned
I agree. Do you think BRN will be acquired at some point ?
 
  • Like
  • Love
Reactions: 2 users

IloveLamp

Top 20
I agree. Do you think BRN will be acquired at some point ?
2 years ago I would've said definitely, but now I'm not so sure.....

If it was going to happen one would assume it would've already happened.

But you cannot buy what is not for sale, and that to me is the most logical explanation as to why we haven't been bought (considering how many irons we have in the fire)

All conjecture of course.
 
  • Like
  • Fire
Reactions: 9 users

FuzM

Member
Interesting the amount of BRN partners being absorbed by the big end of town.........off the top of my head -

Qualcomm $ edge impulse
Indie $ Emotion3d
AMD $ Xilinx (admittedly an older one)
Nvidia $ ARM (Unsuccessfully)

And I'm sure there are more i haven't mentioned
Adding one to the list

Blue Ridge Envisioneering $ Parsons Corporation
 
  • Like
Reactions: 8 users

7für7

Top 20
WHOOP WHOOOP!!


38321BD8-B130-4DC4-A638-104B9A4793B3.png
 
  • Haha
Reactions: 6 users

HopalongPetrovski

I'm Spartacus!
Ah,....Hop. Nice try.

Growing interest. It will be growing interest in Brainchip that increases volume.

Actually I have no idea what the catalyst will be. I just know the volume will be huge.
Hi Dipp.
The question asked was in relation to....... on that day the market was up, but BrainChip was down.....
ergo....what does it take (to improve the share price)?

Huge volume (in trades) will of course occur with any rapid or sustained increasing share price but that volume will likely be in reaction to the change rather than causal.

Growing interest will come with commercial viability as evidenced by people paying us money for our product.

To say that volume will be what causes our share price to increase is like saying planes fly because they are at 30,000 feet.
It's the thrust and lift that gets them there.
 
  • Fire
  • Like
Reactions: 4 users

Diogenese

Top 20
I think AKIDA 1500 is a big step up from the 1000.
One thing that we learnt from taping out the AKIDA 1000 chip was that minor improvements can be made at little cost to the original tape-out, but major adjustments like the AKIDA 1500 have to be completely 'retaped' from scratch, which is expensive.
The fast pace of change weighed against taping out costs was likely at least a small factor in not taping out Gen2.
AKIDA 1000 and 1500 gets the chips 'out there'.
I am surprised we have not heard more about engagements utilising the 1500.
Bascom Hunter purchased $100k worth.
8 bit processing, improved efficiency and the VIT of the 1500 are a big step up IMO.
Hi manny,

8-bit and VIT are in Akida 2.

1500 is just Akida 1 without the ARM Cortex processor. The NPUs are basically the same as Akida 1. It relies on an external processor for configuration. It is made by GlobalFoundries in 22nm FD-SOI, which makes it a bit faster than 28nm and more power efficient (less leakage loss) than vanilla CMOS.

8-bit makes it easier to run more 3rd party models on Akida 2.

Akida 2 with the TENNs model also has long skip which cuts out reprocessing already classified blocks of data by bypassing subsequent layers and sending these blocks to the output. The TENNs model is loaded in one of the 4 NPUs (aka: NPEs) in each node. The TENNs model can be run on a non-Akida processor.
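If I'm reading Dio's "long skip" description right, the control flow is essentially an early-exit scheme: blocks whose classification already clears a confidence threshold bypass the remaining layers and go straight to the output. A toy Python sketch of that idea (nothing BrainChip-specific; every name and the confidence logic here are invented placeholders, not the Akida/TENNs API):

```python
# Toy sketch of an early-exit / "long skip" pipeline: once a block of
# data is classified with enough confidence, the remaining layers are
# bypassed and the block's result goes straight to the output.
# All names are illustrative; this is NOT the Akida/TENNs API.

CONFIDENCE_THRESHOLD = 0.9

def layer(block, depth):
    """Stand-in for one network layer: refines the block's score."""
    score = min(1.0, block["score"] + 0.25)
    return {"id": block["id"], "score": score, "layers_run": depth + 1}

def run_with_long_skip(blocks, num_layers=4):
    outputs = []
    for block in blocks:
        for depth in range(num_layers):
            block = layer(block, depth)
            if block["score"] >= CONFIDENCE_THRESHOLD:
                break  # long skip: bypass the remaining layers
        outputs.append(block)
    return outputs

blocks = [{"id": i, "score": s, "layers_run": 0}
          for i, s in enumerate([0.8, 0.3, 0.5])]
for out in run_with_long_skip(blocks):
    print(out["id"], out["score"], "layers:", out["layers_run"])
```

The point of the sketch: easy blocks exit after one layer while hard blocks run the full stack, which is where the latency and energy savings on average-case inputs would come from.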
 
  • Love
  • Like
  • Fire
Reactions: 14 users

jrp173

Regular
  • Haha
  • Wow
Reactions: 5 users

7für7

Top 20
We can only hope so!
We can hope for many things at this stage here… 😂👌 So let's hope our hopes will come true! Hopefully
 
  • Like
Reactions: 2 users
Top Bottom