BRN Discussion Ongoing

manny100

Top 20
I asked IR to request that Sean address the implications of the Department of War release for Brainchip in the podcast due later this month.
 
  • Like
  • Haha
  • Fire
Reactions: 15 users

manny100

Top 20
We have Lockheed Martin (LM) on our CyberNeuro-RT hook.
LM's Skunk Works division has been testing drones with neuromorphic edge AI alongside our water-safety drone partner Arquimea.
 
  • Like
  • Fire
  • Thinking
Reactions: 18 users

TopCat

Regular
  • Haha
  • Like
Reactions: 3 users

Diogenese

Top 20
Thank you ALL for the positive feedback.
I need to give myself a clip over the head every now and then. I think it's driven as much as anything by Trumpy MAGA and Intel's American flag. Maybe that was what was driving the magical redomicile talk?
I remind myself about TENNs as Dio points out, our bagful of patents, the new provenance tech, PICO as well as AKIDA 1, 1.5, 2 and assuming soonish 3, the MetaTF software, tapeout in play, customers on the hook, all the backroom hush hush, etc etc.
Also Dio, serious question: is it possible for Loihi to incorporate Akida special sauce? Or is this just a ridiculous thought?
WTF was I thinking.
I shall redirect my thinking for a moment to the nice steak soon to be on the bbq, and the accompanying glass of red.
Hi Wags,

I had thought that the special sauce was the N-of-M coding which greatly enhances sparsity without affecting accuracy. This was developed by Simon Thorpe's group, but was not patented. It was licensed and then sold to Brainchip. It was also developed independently by Steve Furber (SpiNNaker, Man Uni). If I recall correctly, Applied Brain Research (ABR - Chris Eliasmith) appears to use this (as well as state space models). https://www.appliedbrainresearch.com/technology
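For readers new to the idea, the gist of N-of-M coding is that only the first (or strongest) N of M input lines are allowed to carry information, which is where the sparsity gain comes from. A minimal sketch, with made-up numbers and a magnitude-based selection standing in for the spike-timing rank order a real N-of-M scheme would use:

```python
import numpy as np

def n_of_m_code(activations, n):
    """Keep only the n strongest of m inputs, zeroing the rest.

    Illustrative only: genuine N-of-M coding (Thorpe-style rank order)
    works on spike *timing* - the earliest n spikes out of m lines -
    rather than on analogue magnitudes as done here.
    """
    activations = np.asarray(activations, dtype=float)
    if n >= activations.size:
        return activations.copy()
    # Indices of the n largest activations; everything else is suppressed.
    keep = np.argsort(activations)[-n:]
    coded = np.zeros_like(activations)
    coded[keep] = activations[keep]
    return coded

x = np.array([0.1, 0.9, 0.3, 0.7, 0.05, 0.4, 0.8, 0.2])  # m = 8 inputs
y = n_of_m_code(x, n=3)
print(y)                    # only the 3 strongest inputs survive
print(np.count_nonzero(y))  # 3 of 8 - downstream work on the rest is skipped
```

The accuracy claim rests on the observation that the identity of the first few spikes carries most of the information, so dropping the remaining M minus N costs little.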

I haven't seen a one-to-one comparison with ABR, but a quick scan of the most recently published ABR NN processor patent application suggests to me that it is very complex and would probably be slower than TENNs.

WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326

View attachment 94377



[0108] Returning to the address generator 335 in the processing element 120, the generator 335 comprises two major components: the loop parameter generator, and the loop iterator. The loop iterator is relatively simple to implement: in operation, it receives loop parameters through a stream interface, validates them, and implements two nested for loops using a simple state machine. The loop parameter generator, on the other hand, is more complex and temporally multiplexed. An arithmetic logic unit (ALU) is used to perform the individual operations described above and is depicted in FIG. 15. The flow of data is controlled using a micro-program, as mentioned above. A number of possible input sources are provided at the top of the diagram, including the event source x- and y-location, the address generator configuration registers (abbreviated as "config"), and the fed-back loop-parameter output. The computation performed by the ALU is controlled through seven multiplexers/demultiplexers, as well as a write-enable control signal and a logic-unit operation signal (write-enable and logic-unit signals are not shown in FIG. 15). The input multiplexer "mux_in" selects either a configuration word or the source x- and y-location. The output read multiplexer "mux_out_rd" selects one of the loop-parameter values as an input; the output write demultiplexer "mux_out_wr" determines which loop-parameter should be updated. Lastly, the multiplexers mux_a, mux_b, mux_c, mux_d determine what input should be routed to the arithmetic and logic units, as well as the output. The logic unit (LU) performs operations such as logical "AND" as well as right-shift operations with a latency of one cycle. The multiply-accumulate unit (MAC) performs the fixed computation a × b + c with a latency of three cycles.
The short horizontal bars following the multiplexers, LU, and MAC represent register boundaries (i.e., one clock cycle passes between the top and bottom of the black bar).
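To make the quoted paragraph [0108] concrete: the loop iterator is just two nested for-loops driven by streamed parameters, feeding an ALU whose heavy lifting is a fixed multiply-accumulate. A rough Python paraphrase follows; the field names and structure are my own illustration, not taken from the patent.

```python
def mac(a, b, c):
    """The patent's fixed MAC computation a*b + c (three-cycle latency in hardware)."""
    return a * b + c

def loop_iterator(params):
    """Two nested for-loops built from streamed loop parameters, as the
    patent's loop iterator does with a simple state machine.  The dict
    keys here are invented for illustration."""
    for i in range(params["outer_start"], params["outer_stop"], params["outer_step"]):
        for j in range(params["inner_start"], params["inner_stop"], params["inner_step"]):
            yield i, j

# Example: sweep a 3x4 address window and accumulate through the MAC.
acc = 0
for i, j in loop_iterator({"outer_start": 0, "outer_stop": 3, "outer_step": 1,
                           "inner_start": 0, "inner_stop": 4, "inner_step": 1}):
    acc = mac(i, j, acc)   # acc += i*j each iteration
print(acc)  # 18
```

In software this is trivial; the complexity Dio points to lives in the temporally multiplexed loop *parameter* generator and its seven multiplexers, which is exactly the part that could cost cycles relative to TENNs.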

So, while TENNs may trump (pardon the expression) N-of-M, we need to maintain constant vigilance.
 
  • Fire
  • Like
  • Thinking
Reactions: 11 users
I thought it was the JAST rules we bought Dio?

Renesas have already borrowed the N & M from us for $1M when they bought the licence and then went solo :(

Could be mistaken too, going on memory and that was a few years back.
 
  • Like
  • Fire
Reactions: 5 users

Diogenese

Top 20
I thought it was the JAST rules we bought Dio?

Renesas have already borrowed the N & M from us for $1M when they bought the licence and then went solo :(

Could be mistaken too, going on memory and that was a few years back.
Forgive a befuddled old man. There was something rattling around in the back of my brain ... (plenty of room to rattle around there).

In fact @uiux's All Roads Lead to JAST thread all those years ago explains JAST.

However, I believe we did also get N-of-M from ST. I assumed that N-of-M was the secret sauce because JAST was not a secret.
 
  • Like
  • Fire
Reactions: 8 users

Wags

Regular
Thanks Dio,
This N-of-M coding corner seems a little crowded.
You're a clever man with a good sense of humor, cheers to you.
 
  • Like
  • Love
Reactions: 5 users

manny100

Top 20
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
 
  • Like
Reactions: 4 users

Diogenese

Top 20
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
Hi Manny,

I'm not sure compounding lethality was at the front of PvdM's mind when he invented Akida, but it is inevitable.
 
  • Like
  • Sad
Reactions: 8 users

Guzzi62

Regular
Interesting. Under the Heading "AI-Native Warfighting." Page 5.
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
We should not forget that we already have a contract running with the US Air Force Research Laboratory.
 
  • Like
  • Fire
  • Haha
Reactions: 7 users

manny100

Top 20
Hi Manny,

I'm not sure compounding lethality was at the front of PvdM's mind when he invented Akida, but it is inevitable.
Hi Dio, agree it is inevitable.
Historically, disruptive tech seems to start in defence and space and then spread to consumers in one form or another.
AKIDA could function equally well in a toy or a drone, missile, satellite, or sophisticated cyber-security system.
The key is how the AKIDA buyer uses it. That is likely why it will not take long for the 'Prime' defence suppliers to take out an IP licence and apply/add their own 'secret sauce' to it; economies of scale apply as well, hence not millions of AKD1500 chips being produced.
In no time most people will have an AKIDA chip in something they own.
It all comes down to how smart those working with it are.
We should do quite well from the DOD.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 13 users

SERA2g

Founding Member
I’m thinking we wouldn’t need to do a capital raise if it were Brainchip!
We have a minimum amount we must raise with LDA by 30 June 2026 as part of the agreement, whether we need it or not.
 
  • Like
Reactions: 7 users

perceptron

Regular
450,000 chips by 2028 at an average sale price of $20 is $9,000,000.
Cost = $2.94
Price per chip is volume dependent = $4-$50

View attachment 94370
Where does it explicitly state the price to customers? Further, what is meant by Volumes < $10 within the "Fill Key Market Gap" heading?
 
Artificial Intelligence Strategy for the Department of War
Released very recently.
Fits BRN like a glove. Seems huge!!!!!!
No further comment from me right now - still absorbing it.
What do posters think?
FF

Bascom Hunter see their 3U VPX SNAP Card

Parsons BRE see their electronic warfare solutions

RTX Raytheon, ISL, USAFRL see their Doppler radar solutions

Quantum Ventura Metaguard Ai see their cybersecurity solution.

Intellisense see their cognitive communication solution

AiLab see their machine monitoring system

ONSOR see their ECG monitoring system

Haila see their RF Technology solution

Neurobus see their Drone detection system

Probably enough for DARPA to go on with.
 
  • Like
  • Fire
  • Love
Reactions: 25 users

keyeat

Regular
NICE !

This is what I see

View attachment 94378
 
  • Like
Reactions: 4 users
Bonus: a few more shares than when I sold them.


IMG_4406.png
 
  • Like
  • Fire
Reactions: 11 users

manny100

Top 20
NICE !

This is what I see

View attachment 94378
.... And that is the key. A long base around lows. The longer the base, the bigger the ......... on news.
If you believe BRN will take part in the mandated, red-tape-slashed DOD transition to AI, it's very enticing at these levels.
 
  • Like
  • Fire
  • Thinking
Reactions: 11 users

Guzzi62

Regular
NICE !

This is what I see

View attachment 94378
So you sold and moved on? Nah, still posting here, so likely a disgruntled SH?

I bought more shares during the admittedly long wait because I am still convinced it will be worth it and believe in what BRN are doing.

Looking at the chart you provided, it's been 4 long years since the crazy $2+ spike, and that hasn't done us any favors.

Agree with manny, the squeeze on positive news could be epic; this has been kept down for a very long time by market makers.
 
  • Like
  • Fire
  • Thinking
Reactions: 16 users

IloveLamp

Top 20
1000017277.jpg
 
  • Like
  • Fire
  • Love
Reactions: 21 users