Almost enough to cover Sean and the BOD's salaries.

450,000 chips by 2028 at an average sale price of $20 is $9,000,000.
Cost = $2.94
Price per chip is volume dependent = $4-$50
View attachment 94370
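A quick back-of-envelope check of the figures quoted above (the 450,000 units, $20 average price, and $2.94 cost are the poster's numbers, not official guidance):

```python
# Sanity-check the chip economics quoted in the post above.
# All inputs are the poster's assumed figures, not company guidance.

units = 450_000         # chips by 2028 (assumed)
avg_price = 20.00       # average sale price, USD (assumed)
unit_cost = 2.94        # quoted unit cost, USD

revenue = units * avg_price
gross_profit = units * (avg_price - unit_cost)
margin_pct = 100 * (avg_price - unit_cost) / avg_price

print(f"Revenue:      ${revenue:,.0f}")       # $9,000,000
print(f"Gross profit: ${gross_profit:,.0f}")  # $7,677,000
print(f"Gross margin: {margin_pct:.1f}%")     # 85.3%
```

So the $9M revenue figure checks out, and at the quoted $2.94 cost the implied gross margin is about 85%, before any of the overheads the "salaries" quip alludes to.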
Hi Wags,

Thank you ALL for the positive feedback.
I need to give myself a clip over the head every now and then. I think it's driven as much as anything by Trumpy MAGA and Intel's American flag. Maybe that was what was driving the magical redomicile talk?
I remind myself about TENNs, as Dio points out, our bagful of patents, the new Provenance tech, PICO, as well as AKIDA 1, 1.5, 2 and, assuming soonish, 3, the META software, tapeout in play, customers on the hook, all the backroom hush-hush, etc., etc.
Also, Dio, serious question: is it possible for Loihi to incorporate the Akida special sauce, or is this just a ridiculous thought?
WTF was I thinking.
I shall redirect my thinking for a moment to the nice steak soon to be on the bbq, and the accompanying glass of red.
I thought it was the JAST rules we bought, Dio?

Hi Wags,
I had thought that the special sauce was the N-of-M coding which greatly enhances sparsity without affecting accuracy. This was developed by Simon Thorpe's group, but was not patented. It was licensed and then sold to Brainchip. It was also developed independently by Steve Furber (SpiNNaker, Man Uni). If I recall correctly, Applied Brain Research (ABR - Chris Eliasmith) appears to use this (as well as state space models). https://www.appliedbrainresearch.com/technology
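The N-of-M idea above can be sketched crudely in a few lines. In true rank-order N-of-M coding the information is carried by which N of the M channels spike *first*; zeroing all but the N largest activations is only a stand-in for that timing behaviour, and this is my illustration, not BrainChip's or Thorpe's implementation:

```python
def n_of_m_code(activations, n):
    """Zero all but the n largest of m activations (a crude N-of-M code).

    Stand-in for rank-order coding: in hardware the n earliest spikes
    would propagate; here we keep the n largest magnitudes instead.
    Illustrative only -- not the BrainChip/Thorpe implementation.
    """
    # Indices of the n largest activations.
    keep = set(sorted(range(len(activations)),
                      key=lambda i: activations[i], reverse=True)[:n])
    return [a if i in keep else 0.0 for i, a in enumerate(activations)]

x = [0.1, 0.9, 0.05, 0.4, 0.7, 0.0, 0.3, 0.2]
coded = n_of_m_code(x, 2)          # only the two strongest survive
print(coded)   # [0.0, 0.9, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0]
print(f"sparsity: {coded.count(0.0) / len(coded):.0%}")  # sparsity: 75%
```

The point of the scheme is visible even in this toy: for fixed small n, sparsity grows with m, so downstream layers touch only a handful of events regardless of input width.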
I haven't seen a one-to-one comparison with ABR, but a quick scan of the most recently published ABR NN processor patent application suggests to me that it is very complex and would probably be slower than TENNs.
WO2024197396A1 EVENT-BASED NEURAL NETWORK PROCESSING SYSTEM 20230326
View attachment 94377
[0108] Returning to the address generator 335 in the processing element 120, the generator 335 comprises two major components: the loop parameter generator and the loop iterator. The loop iterator is relatively simple to implement: in operation, it receives loop parameters through a stream interface, validates them, and implements two nested for loops using a simple state machine. The loop parameter generator, on the other hand, is more complex and temporally multiplexed. An arithmetic logic unit (ALU) is used to perform the individual operations described above and is depicted in FIG. 15. The flow of data is controlled using a micro-program, as mentioned above. A number of possible input sources are provided at the top of the diagram, including the event source x- and y-location, the address generator configuration registers (abbreviated as "config"), and the fed-back loop-parameter output. The computation performed by the ALU is controlled through seven multiplexers/demultiplexers, as well as a write-enable control signal and a logic-unit operation signal (write-enable and logic-unit signals are not shown in FIG. 15). The input multiplexer "mux_in" selects either a configuration word or the source x- and y-location. The output read multiplexer "mux_out_rd" selects one of the loop-parameter values as an input; the output write demultiplexer "mux_out_wr" determines which loop-parameter should be updated. Lastly, the multiplexers mux_a, mux_b, mux_c, and mux_d determine what input should be routed to the arithmetic and logic units, as well as the output. The logic unit (LU) performs operations such as logical "AND" as well as right-shift operations with a latency of one cycle. The multiply-accumulate unit (MAC) performs the fixed computation a x b + c with a latency of three cycles.
The short horizontal bars following the multiplexers, LU, and MAC represent register boundaries (i.e., one clock cycle passes between the top and bottom of the black bar).
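For anyone skimming, the loop iterator described in [0108] behaves roughly like this toy software model (the parameter names are my own guesses; the real block is a hardware state machine, not software):

```python
def loop_iterator(param_stream):
    """Toy model of the patent's loop iterator: receive loop parameters
    from a stream, validate them, and drive two nested for loops,
    yielding (outer, inner) index pairs.

    Field names ('outer', 'inner') are assumed for illustration and are
    not taken from the application.
    """
    for params in param_stream:
        outer_n, inner_n = params["outer"], params["inner"]
        if outer_n <= 0 or inner_n <= 0:      # validation step
            continue                          # drop malformed parameters
        for i in range(outer_n):              # outer for loop
            for j in range(inner_n):          # inner for loop
                yield (i, j)

def mac(a, b, c):
    """The fixed multiply-accumulate a*b + c (three-cycle latency in HW)."""
    return a * b + c

# One valid parameter set and one that fails validation.
addrs = list(loop_iterator([{"outer": 2, "inner": 3},
                            {"outer": 0, "inner": 5}]))
print(addrs)         # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
print(mac(3, 4, 5))  # 17
```

The complexity the patent flags sits in the loop *parameter generator* (the micro-programmed ALU feeding this iterator), which is exactly the part this sketch leaves out, and the part that makes me suspect it would be slower than TENNs.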
So, while TENNs may trump (pardon the expression) N-of-M, we need to maintain constant vigilance.
Forgive a befuddled old man. There was something rattling around in the back of my brain ... (plenty of room to rattle around there).

I thought it was the JAST rules we bought, Dio?
Renesas have already borrowed the N-of-M from us for $1M when they bought the licence and then went solo.
Could be mistaken too, going on memory and that was a few years back.
Thanks ganDis a
Thanks.
Thanks, Dio.
Hi Manny,

Interesting. Under the heading "AI-Native Warfighting," page 5:
"We must put aside legacy approaches to combat and ensure we use this disruptive technology to compound the lethality of our military. Exercises and experiments that do not meaningfully incorporate AI and autonomous capabilities will be reviewed by the Director of Cost Assessment and Program Evaluation for resourcing adjustment"
We should not forget that we already have a contract running with the US Air Force Research Laboratory.
Hi Dio, agree it is inevitable.

Hi Manny,
I'm not sure compounding lethality was at the front of PvdM's mind when he invented Akida, but it is inevitable.
We have a minimum amount we must raise with LDA by 30 June 2026 as part of the agreement, whether we need it or not.

I'm thinking we wouldn't need to do a capital raise if it were Brainchip!
Where does it explicitly state the price to customers? Further, what is meant by Volumes < $10 within the "Fill Key Market Gap" heading?
FF

Artificial Intelligence Strategy for the Department of War
Released very recently.
Fits BRN like a glove. Seems huge!!!!!!
No further comment from me right now - still absorbing it.
What do posters think?
NICE!

FF
Bascom Hunter see their 3U VPX SNAP Card
Parsons BRE see their electronic warfare solutions
RTX Raytheon, ISL, USAFRL see their Doppler radar solutions
Quantum Ventura Metaguard AI see their cybersecurity solution
Intellisense see their cognitive communication solution
AiLab see their machine monitoring system
ONSOR see their ECG monitoring system
Haila see their RF Technology solution
Neurobus see their Drone detection system
Probably enough for DARPA to go on with.
Nice... hopefully it stays low till payday, I am buying more.
... And that is the key. A long base around lows. The longer the base, the bigger the ... on news.
So you sold and moved on? Nah, still posting here, so likely a disgruntled SH?