BRN Discussion Ongoing

Well, I just don't know what to think about the share price atm, but I know that I am not alone in that thought.
I want to remain respectful tomorrow, but I'm not sure if I will be.
Too much secret squirrel shit going on, and Sean has a lot of explaining to do.
Maybe a big fricken sorry could be a start, but we will see how they try to cover up this last piss-poor year's results.
 
  • Like
  • Fire
Reactions: 10 users

MDhere

Top 20
Waiting for my Sydney-bound flight, glass of wine in hand, and checking the top 50 against previous papers. Interesting to see some accumulation from the usual suspects, and some NEW names have appeared with significant holdings (2.3 million shares and above), knocking others off the podium.

Overall, glad to see top holders from last year's top 50 accumulating, and NEW or topped-up holders reaching the top 50.

And no I am not one of them :(

See some of you soon, and @Pom down under, I've not forgotten the $15 beer round I owe you from my Jan 2024 notes.

Guys, PM me where we're meeting up if we're doing breakky?

Happy Monday night :)
 
  • Like
  • Love
  • Fire
Reactions: 12 users
I ain't got a clue where I'm going in Sydney, but I should be off the plane about 9am, so if anyone has any suggestions for a place for breakfast close to the AGM that sells beer, I'll be keen for a catch-up.
 
  • Like
Reactions: 2 users
What the hell is going on just as the AGM approaches.
Last year I had Covid.
The year before, Covid.
This year, the flu.
Is it a sign to keep me away??
 
  • Like
  • Haha
  • Sad
Reactions: 5 users

yogi

Regular
See you tomorrow.
 
  • Like
Reactions: 1 users

Rskiff

Regular
No problemo amigo!

May I be so bold as to ask what skills you might be able to bring to the table, that is if we can afford a table?

Do you know how to grow vegetables?
I'm good at harvesting mushrooms. Plenty of practice at being left in the dark and fed bullshit by BRN.
 
  • Haha
  • Like
Reactions: 10 users
That's great @Smoothsailing! Or should I call you "roomie"?

A cost shared is a cost halved, and if a few more shareholders jump in with us, we might be able to afford a composting toilet as well! 🚽😝

@Sosimple has most generously offered us some land to pitch our teepee on, so things are finally looking up!
I got a second-hand thunder box for free off a mate; it needs a little bit of a clean and a new door, but it's good to go.
 

Attachments

  • 7B5B8A94-CB30-41F2-8BBA-2D373CB0C05B.jpeg (159.2 KB)
  • Haha
  • Like
Reactions: 9 users

Diogenese

Top 20
I'm good at harvesting mushrooms. Plenty of practice at being left in the dark and fed bullshit by BRN.
Is that you, Erin?
 
  • Haha
  • Like
Reactions: 11 users

manny100

Top 20
Details of Tony Lewis's presentation at the Embedded Vision Summit on Wednesday, 21 May '25:
"
Date: Wednesday, May 21
Start Time: 2:05 pm
End Time: 2:35 pm
At the embedded edge, choices of language model architectures have profound implications on the ability to meet demanding performance, latency and energy efficiency requirements. In this presentation, we contrast state-space models (SSMs) with transformers for use in this constrained regime. While transformers rely on a read-write key-value cache, SSMs can be constructed as read-only architectures, enabling the use of novel memory types and reducing power consumption. Furthermore, SSMs require significantly fewer multiply-accumulate units—drastically reducing compute energy and chip area. New techniques enable distillation-based migration from transformer models such as Llama to SSMs without major performance loss. In latency-sensitive applications, techniques such as precomputing input sequences allow SSMs to achieve sub-100 ms time-to-first-token, enabling real-time interactivity. We present a detailed side-by-side comparison of these architectures, outlining their trade-offs and opportunities at the extreme edge."
Note the sentence above about distillation-based migration: it seems BRN are now able to migrate traditional transformer models to state-space models (SSMs) without major performance loss.
Note also the point that SSMs reduce energy requirements and chip area. Think Pico or a Pico Plus.
Pico runs off TENNs, which is a type of SSM.
Does that mean developers can now feast on the multitude of traditional models and distill them to SSMs? Appears so.
Is this potentially another game changer?
We might find out more tomorrow?
I have the 'kiddy' Copilot, which you do not get much out of.
Is someone with GPT-4 or Grok etc. able to quiz the AI to see what the possibilities are?
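To make the abstract's memory argument concrete, here is a minimal sketch, assuming toy dimensions and a plain diagonal linear state-space recurrence (not BrainChip's actual TENNs/Akida implementation): a transformer's read-write KV cache grows with every token, while an SSM folds the whole prompt into one fixed-size state, which is also what makes the "precompute the input sequence" trick for fast time-to-first-token possible.

```python
import numpy as np

# Toy dimensions; all values here are illustrative assumptions.
d_model, d_state, prompt_len = 256, 64, 1024
rng = np.random.default_rng(0)

# Transformer side: the read-write KV cache stores one key row and one value
# row per token (per attention layer), so it grows linearly with the sequence.
kv_cache_values = prompt_len * 2 * d_model
print(f"KV cache after {prompt_len} tokens: {kv_cache_values:,} values (keeps growing)")

# SSM side: a diagonal linear recurrence x_t = a * x_{t-1} + B @ u_t.
a = rng.uniform(0.8, 0.99, size=d_state)           # per-channel decay (diagonal "A")
B = rng.normal(0.0, 0.1, size=(d_state, d_model))  # input projection

def fold_prompt(tokens, state):
    """Fold an entire prompt into the fixed-size state (the precompute step)."""
    for u in tokens:
        state = a * state + B @ u
    return state

prompt = rng.normal(size=(prompt_len, d_model))
state = fold_prompt(prompt, np.zeros(d_state))
print(f"SSM state after {prompt_len} tokens: {state.size} values (constant size)")
```

The weights a and B stay read-only during inference; only the small state buffer changes per token, which is the property the abstract ties to novel memory types, lower power and smaller chip area.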
 
  • Like
  • Love
  • Fire
Reactions: 23 users
Fk. This is a game changer. We can run Facebook's LLaMA models on Akida Pico.
 
  • Like
  • Wow
  • Fire
Reactions: 8 users

Tothemoon24

Top 20
Gee Sean was unlucky


IMG_0995.jpeg
 
  • Haha
  • Like
Reactions: 14 users
U real? If Pico can run LLaMA 1B, it is not hard to stack up to run the full LLaMA 405B model. It will be significantly cheaper to buy and run this chip, at much lower power, than an NVIDIA 5090.
 
  • Thinking
  • Like
  • Fire
Reactions: 5 users

jrp173

Regular
Of course, but you just keep ranting on and on; there is a difference!

As I've said countless times, why can't you downrampers wait until after the AGM? Let them speak first, but no, you just blah-blah-blah and want them gone NOW!

It's you who comes across as childish by having no patience, like a little kid!

You don't understand how much it takes to shift AI technology to the edge, but you want it done yesterday!

This sub-forum on HC taught me a lot and made me much more positive: understanding the technology adoption cycle! One poster, Obseverr, is extremely knowledgeable and clearly knows the industry in depth.

Looking at it, I see positive signs: space and military have started using our technology, which is actually very positive because they are typically early adopters.

I think we will get there over this year and the next.

Yes, I know that has been said for the last couple of years, but the technology shift towards edge AI has slowly started now IMO. I will say, though, that if they can't sell IP over the next year, Sean has failed, so I will listen to them carefully tomorrow and take it from there.
 
If it’s true, the stock price will skyrocket like Nvidia’s.
 
  • Haha
  • Like
Reactions: 4 users

manny100

Top 20
Fk. This is a game changer. We can run Facebook's LLaMA models on Akida Pico.
I do not want to get too excited, but it looks pretty good. Even a Pico Plus size would be good. Drastically reduced energy use and chip area. Hopefully a Pico or Pico Plus?
If it's small enough to fit into a mobile phone, that would be a game changer for sure - the gap for hackers via mobiles would be closed - that is a game changer on its own. Guessing, though.
It's wait and see.
Although at tomorrow's tech meeting I do expect we will get some good tech news. Otherwise, why schedule it right before the AGM?
To maybe get a little WOW factor before attending the AGM? Nothing like a game changer to soften the mood - speculation, of course.
 
  • Like
  • Fire
  • Love
Reactions: 14 users
I do not want to get too excited, but it looks pretty good. Even a Pico Plus size would be good. Drastically reduced energy use and chip area. Hopefully a Pico or Pico Plus?
If it's small enough to fit into a mobile phone, that would be a game changer for sure - the gap for hackers via mobiles would be closed - that is a game changer on its own. Guessing, though.
It's wait and see.
Although at tomorrow's tech meeting I do expect we will get some good tech news. Otherwise, why schedule it right before the AGM?
To maybe get a little WOW factor before attending the AGM? Nothing like a game changer to soften the mood - speculation, of course.
You didn't fully understand the implication. If Pico can run LLaMA 1B, it can easily scale up to run DeepSeek, Qwen, and the full LLaMA model, something only the NVIDIA 5090 can currently handle (priced between $5,599 and $7,199 AUD), and even that is out of stock. BrainChip could manufacture and sell it directly to the consumer market, bypassing those outdated 10-year IP deal constraints. The demand for running local AI models is tremendous, especially since ChatGPT tokens are becoming increasingly expensive.

And this will immediately position us as one of the biggest competitors of Nvidia.
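On the "stack up from 1B to 405B" idea, a rough back-of-envelope sketch of weight memory alone (public parameter counts times bytes per weight; nothing here is a claim about what Akida Pico actually supports) shows how different the two regimes are:

```python
# Weight-memory estimate, ignoring activations and any KV/state cache.
GIB = 1024 ** 3

def weight_memory_gib(n_params: float, bits_per_weight: int) -> float:
    """GiB needed just to store the weights at a given precision."""
    return n_params * bits_per_weight / 8 / GIB

for name, n_params in [("LLaMA 1B", 1e9), ("LLaMA 70B", 70e9), ("LLaMA 405B", 405e9)]:
    row = ", ".join(f"{bits}-bit: {weight_memory_gib(n_params, bits):.1f} GiB"
                    for bits in (16, 8, 4))
    print(f"{name:>10} -> {row}")
```

Even at 4-bit quantisation, 405B parameters come to roughly 190 GiB of weights, well beyond a single consumer GPU (an RTX 5090 carries 32 GB of VRAM), so scaling that far would be a multi-device problem rather than a single-chip one.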
 
Last edited:
  • Like
  • Fire
  • Wow
Reactions: 53 users

manny100

Top 20
You didn't fully understand the implication. If Pico can run LLaMA 1B, it can easily scale up to run DeepSeek, Qwen, and the full LLaMA model, something only the NVIDIA 5090 can currently handle (priced between $5,599 and $7,199 AUD), and even that is out of stock. BrainChip could manufacture and sell it directly to the consumer market, bypassing those outdated 10-year IP deal constraints. The demand for running local AI models is tremendous, especially since ChatGPT tokens are becoming increasingly expensive.
Thanks. Tony's presentation description looked the goods, and I just hope it's accurate.
 
  • Like
  • Fire
  • Love
Reactions: 14 users

FJ-215

Regular
You didn't fully understand the implication. If Pico can run LLaMA 1B, it can easily scale up to run DeepSeek, Qwen, and the full LLaMA model, something only the NVIDIA 5090 can currently handle (priced between $5,599 and $7,199 AUD), and even that is out of stock. BrainChip could manufacture and sell it directly to the consumer market, bypassing those outdated 10-year IP deal constraints. The demand for running local AI models is tremendous, especially since ChatGPT tokens are becoming increasingly expensive.

And this will immediately position us as one of the biggest competitors of Nvidia.
Hi KK,
The problem with Pico (and Akida 2) is that they only exist on the drawing board. BRN have stated that there is no plan to tape out gen 2, although they did come up with a compromise, and we now have an FPGA version that potential customers can use via the cloud.

That was the story with AKD1000: the IP was released in May 2019, but no one would look at it until we stumped up the cash to make the actual, physical chip.

History repeating???
 
  • Like
  • Thinking
Reactions: 6 users

Frangipani

Top 20
Steve talks about Edge Impulse. I wonder when this was recorded.


Hi @JB49,

it must be the CES 2025 media interview with Bill Wong, which was recorded two months before Edge Impulse was acquired by Qualcomm.

This is a screenshot I took on 9 January (the one that got subsequently deleted):

96C0A258-F314-412F-8DAE-8A112ABC55CB.jpeg


(I’m pretty sure this accompanying picture of a podcast recording shows Steve Brightfield interviewing Bill Eichen from De Girum, though, not Bill Wong.)



Have a look at this picture and note the armchair, sofa and sofa cushions in the background:

97E2D428-F40D-44D9-8804-811FC40E7F85.jpeg



As you can see below, the above photo must have been taken in the Venetian Tower suite BrainChip had booked for CES 2025:

015C87F8-D359-4742-9674-B09A2F64831D.jpeg
4F04D132-8CEB-437D-9623-7C2D23CFB918.jpeg



At the time, everything between BrainChip and Edge Impulse seemed perfectly harmonious:


4B6C5A8A-FDC0-4264-8FE4-45560E9E5366.jpeg
AB9ADFDA-7BE1-4256-A319-1D0FA59F95B6.jpeg
Two months later, on 10 March, we found out that Edge Impulse had been acquired by Qualcomm:


F01CB92E-F028-40DB-9833-917DD2581E34.jpeg
Fast forward to 5 April, when you spotted the suspension of BrainChip models on Edge Impulse, which continues to be the status quo even today, a whole month later (https://docs.edgeimpulse.com/docs/edge-ai-hardware/cpu-+-ai-accelerators/akd1000), although according to https://brainchip.com/partners/ Edge Impulse remains a BrainChip enablement partner…

60B637E6-5C9F-4C75-9812-4B304CD470EE.jpeg
 
  • Like
  • Thinking
  • Sad
Reactions: 14 users