BRN Discussion Ongoing

IloveLamp

Top 20
Probably worth keeping an eye on

 
  • Like
  • Fire
Reactions: 5 users

Doz

Regular
  • Like
  • Fire
  • Love
Reactions: 10 users

Galaxycar

Regular
This is the reason the price is dropping: shorts of over 100 million plus. Where are they getting the shares from??? NAKED shorting, I bet ya, plus the fact that there will be another capital raise in the future to support management's high cash burn rate.
View attachment 92768
 
  • Like
Reactions: 2 users

Diogenese

Top 20
Apologies if already posted.


View attachment 92767
View attachment 92769

Hi Doz,

Now for some deja vu:

This is great. It addresses my MACs question (we're using MAC-Lite), although it does not explain it fully. I'll have to wait for the movie.

I guess this bit is about N-of-M coding from Simon Thorpe's Spikenet:

The name of the game here is "sparse." We're talking sparse data (streaming inputs are converted to events at the hardware level, reducing the volume of data by up to 10x before processing begins).
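As a rough illustration of what event conversion buys you, here is a minimal sketch of a delta/threshold encoder in Python: it only emits an event when the input moves by more than a set threshold since the last event. The signal, threshold and resulting reduction factor are my own toy assumptions, not BrainChip's actual hardware pipeline.

```python
# Toy delta/threshold event encoder: emit an event only when the input
# changes by more than `threshold` since the last emitted event.
# Illustrative only -- not BrainChip's hardware implementation.

def to_events(samples, threshold=0.05):
    events = [(0, samples[0])]         # always emit the first sample
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) >= threshold:
            events.append((i, x))      # (timestep, value) pair
            last = x
    return events

if __name__ == "__main__":
    import math, random
    random.seed(0)
    # A slowly varying signal with a little noise: mostly redundant samples.
    stream = [math.sin(t / 200.0) + random.gauss(0, 0.005) for t in range(1000)]
    events = to_events(stream, threshold=0.05)
    print(f"{len(stream)} samples -> {len(events)} events "
          f"({len(stream) / len(events):.1f}x reduction)")
```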

Something else I hadn't realized, although obvious in retrospect, is that Akida 1 is also available in the cloud along with the Akida 2 FPGA.
 
  • Like
  • Love
  • Haha
Reactions: 12 users

Doz

Regular
For those unaware.

The ASX recently changed the reporting of short positions to save reporting participants time and money on what they claimed was information that no one used. This eliminated access to the net short positions, which were considered more accurate than the gross numbers used by Shortman, for example.


View attachment 92770



Below is an example of how retail investors are now exposed to false market information regarding short positions. Even in this small example on Brainchip you can clearly see the current issue with accuracy. Because of what is, in my opinion, a new level of abusive corruption, I am currently lobbying the ASX to return to the previous method of reporting and to reinstate access to the more accurate net positions.

Note: anyone thinking Brainchip currently has 100+ million shares being shorted and that professional shorts are going to be BurNt may need to do some extra research, especially if you are an amateur retail investor with a current short position relying on the pros to protect you.

View attachment 92771


All in my opinion …..
 
  • Like
  • Fire
  • Wow
Reactions: 11 users

7für7

Top 20
With so many positive things happening in the business at the moment it is hard to fathom why the share price continues to sink.

It’s a complete joke
 
  • Like
  • Haha
Reactions: 4 users

manny100

Top 20
This is the reason the price is dropping: shorts of over 100 million plus. Where are they getting the shares from??? NAKED shorting, I bet ya, plus the fact that there will be another capital raise in the future to support management's high cash burn rate. View attachment 92768
Everyone knows there will be cap raises until revenue is sufficient to self-sustain, so there is no surprise with that at all.
I am surprised that you are surprised.
The price is still within the range above the base. We have been to the bottom of the range only a relatively few times over the last two years, and if we get there again it will only rise off that.
So what is the problem?
 
  • Like
  • Love
Reactions: 8 users

HopalongPetrovski

I'm Spartacus!
Hi all.

I asked Grok to "contrast and compare Brainchip's Akida technology with Nanoveu's ECS-DoT AI SoC."

I have a finger in both pies and thought perhaps others may be interested as well.
For what it is worth, here below is its answer.




Overview

BrainChip's Akida and Nanoveu's ECS-DoT (developed by its subsidiary EMASS) are both ultra-low-power System-on-Chip (SoC) solutions designed for edge AI applications, targeting always-on processing in power-constrained environments like wearables, IoT devices, drones, and industrial sensors. Akida emphasizes neuromorphic computing, mimicking the brain's sparse, event-based processing for efficiency in neural network tasks. In contrast, ECS-DoT focuses on traditional RISC-V-based AI acceleration with non-volatile memory optimizations, prioritizing multimodal sensor fusion and benchmark-proven energy savings. While both enable cloud-free inference, Akida excels in bio-inspired learning, whereas ECS-DoT offers superior raw efficiency in standardized tests.

Key Comparison Table (BrainChip Akida vs. Nanoveu ECS-DoT, developed by its EMASS subsidiary)
  • Architecture
    Akida: Neuromorphic (Spiking Neural Networks, SNNs); event-based, digital neuron fabric with sparsity exploitation (neurons fire only on thresholds). Supports CNNs, DNNs, RNNs, ViTs.
    ECS-DoT: RISC-V core with AI accelerators and non-volatile memory (e.g., ReRAM integration); optimized for multimodal fusion (vision, audio, sensors). Traditional ANN focus.
  • Power Consumption
    Akida: Ultra-low: ~1 mW (Akida Pico variant); milliwatt-scale for inference. Leverages sparsity for energy savings.
    ECS-DoT: Milliwatt-scale (0.1–10 mW); benchmarks show 90% less energy vs. competitors (e.g., 0.8 µJ/inference in anomaly detection, 20% lower overall vs. peers).
  • Performance
    Akida: Up to 1.2M neurons, 10B synapses per chip; scalable to 1,024 chips (1.2B neurons total). 8-bit weights/activations; low latency via multi-pass processing.
    ECS-DoT: Up to 30 GOPS/W; 93% faster execution vs. competitors (e.g., 1.22 ms in anomaly detection, 3.9 ms in keyword spotting). 4 MB on-board SRAM for efficient compute.
  • Memory
    Akida: Configurable local scratchpads; supports LPDDR4 SDRAM (e.g., 256M x 16 bytes in dev kits).
    ECS-DoT: 4 MB on-board SRAM; non-volatile tech reduces leakage and enables always-on modes.
  • Learning/Training
    Akida: On-chip edge learning via reinforcement/inhibition; incremental learning supported.
    ECS-DoT: Primarily inference-focused; on-device training not emphasized (relies on cloud/offline optimization).
  • Interfaces/Connectivity
    Akida: PCIe 2.0, ARM Cortex-M4 (300 MHz), GPIO; multi-chip fabric for scaling.
    ECS-DoT: Sensor-integrated (vision/audio); SDKs for IoT integration; partnerships for reference designs.
  • Applications
    Akida: Edge vision (e.g., industrial inspection), voice, vibration; automotive, consumer electronics, IoT. Strong in pattern recognition.
    ECS-DoT: Drones (extended flight time), wearables, healthcare (biometrics), smart cities; excels in real-time 2D-to-3D conversion, anomaly detection.
  • Process Node
    Akida: 28 nm (AKD1000); considering 14 nm.
    ECS-DoT: Not specified; modular for future scaling to 6 nm/4 nm.
  • Software Ecosystem
    Akida: MetaTF framework (TensorFlow/Keras integration); Edge Impulse support; cloud dev tools.
    ECS-DoT: Enhanced SDKs/reference designs via Arrow Electronics; RISC-V tools for custom AI.
  • Maturity/Availability
    Akida: Commercial since 2022 (AKD1000); dev kits (PCIe/Raspberry Pi) available; partnerships (e.g., Edge Impulse).
    ECS-DoT: Emerged from stealth in 2025; benchmarks completed, OEM integrations underway (e.g., drones); sales reps appointed.
  • Strengths
    Akida: Bio-mimetic efficiency for sparse data; on-device adaptability; scalable for larger networks.
    ECS-DoT: Benchmark dominance in speed/energy; multimodal versatility; thermal efficiency (no cooling needed).
  • Challenges
    Akida: Higher power in dense workloads; neuromorphic requires model optimization.
    ECS-DoT: Less emphasis on learning; newer market entry may limit ecosystem breadth.

Similarities
  • Ultra-Low-Power Edge Focus: Both target milliwatt-level operation for battery-powered, always-on AI, reducing cloud dependency, latency, and privacy risks. They enable real-time sensor processing (e.g., vision/audio) in constrained devices.
  • Efficiency-Driven Design: Both prioritize exploiting sparsity in the data (Akida via neurons; ECS-DoT via memory optimizations) for 10–100x energy savings over general-purpose chips.
  • Scalability and Integration: Modular for multi-chip or application-specific variants; support standard ML workflows (e.g., TensorFlow) and dev tools for rapid deployment.
  • Target Markets: Overlap in IoT, wearables, drones, and industrial uses, addressing the growing edge AI market (projected >$20B by 2028).
Differences
  • Core Philosophy: Akida's neuromorphic approach provides brain-like adaptability and on-chip learning, ideal for dynamic, sparse environments (e.g., event-driven sensors). ECS-DoT's RISC-V foundation delivers predictable, high-throughput performance in multimodal tasks, with superior benchmark results against incumbents like STMicroelectronics or Syntiant.
  • Efficiency Metrics: ECS-DoT claims quantifiable edges (e.g., 287x faster in anomaly detection), positioning it for ultra-constrained apps like extended drone flights. Akida shines in qualitative sparsity gains but lacks direct head-to-head benchmarks here.
  • Maturity and Ecosystem: Akida is more established with commercial products and broader partnerships, suiting immediate prototyping. ECS-DoT, as a 2025 newcomer, leverages Nanoveu's display tech synergies (e.g., 3D imaging) but focuses on rapid OEM adoption via distributors like Arrow.
  • Innovation Trade-offs: Akida enables unique features like incremental learning for evolving models, while ECS-DoT prioritizes raw speed and thermal simplicity for "set-and-forget" deployments.
In summary, choose Akida for adaptive, brain-inspired AI in vision-heavy or learning-centric apps; opt for ECS-DoT for energy-critical, multimodal inference where benchmarks matter most. Both advance edge AI, but ECS-DoT's recent benchmarks suggest it could disrupt in power-sensitive niches.
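To put the power figures in the comparison above into rough perspective, here is a back-of-envelope sketch using the 0.8 µJ/inference and ~1 mW numbers quoted there. The coin-cell capacity and the 10 Hz inference rate are my own assumptions, and the two cases are not directly comparable (per-inference energy vs. a continuous power envelope), so treat the output as illustrative only.

```python
# Back-of-envelope energy arithmetic using the figures quoted above.
# Battery capacity and workload are assumptions, not vendor data.

COIN_CELL_WH = 0.675               # assumed CR2032-class cell: ~225 mAh at 3 V
COIN_CELL_J = COIN_CELL_WH * 3600

# ECS-DoT style: energy per inference (from the anomaly-detection figure above)
energy_per_inference_j = 0.8e-6
inferences_per_second = 10         # assumed workload
daily_j = energy_per_inference_j * inferences_per_second * 86400
print(f"0.8 uJ/inference at 10 Hz: {daily_j:.2f} J/day "
      f"-> ~{COIN_CELL_J / daily_j:.0f} days on one coin cell (compute only)")

# Akida Pico style: a continuous ~1 mW power envelope (from the figure above).
# Note this is not apples-to-apples: a duty-cycled part would use far less.
power_w = 1e-3
print(f"1 mW continuous: {power_w * 86400:.0f} J/day "
      f"-> ~{COIN_CELL_J / (power_w * 86400):.0f} days on one coin cell")
```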
 
  • Like
  • Love
Reactions: 5 users

manny100

Top 20
For those unaware.

The ASX recently changed the reporting of short positions to save reporting participants time and money on what they claimed was information that no one used ...


Agree, gross shorts alone conceal the true directional position (market bet) of a fund. Retail investors do not know what the funds are ultimately betting on. Funds have different strategies; for example, they may hold a percentage of shorts to hedge against a long bet, but we only see the gross shorts, not the longs or the net.
If the net position is nowhere near as 'bad' as the gross shorts suggest, the headline figure actually encourages selling. Retailers can be 'spooked' - hence the downrampers on forums.
There are many, many reasons why funds short for profit. For example, even those who hold from lower prices and believe in the longer-term prospects will short price spikes to make some cash on the retrace. We only see the gross shorts figure, though.
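As a toy illustration of why gross and net shorts can tell very different stories (all figures invented, not actual BRN data): three funds each report a short leg, but two of them also hold longs, so the headline gross number overstates the real directional bet.

```python
# Toy illustration of gross vs. net short positions (all figures invented).
# Each fund reports its short leg; longs are not netted off in the gross figure.

funds = {
    "Fund A (pure short)":  {"short": 40_000_000, "long": 0},
    "Fund B (hedged long)": {"short": 35_000_000, "long": 50_000_000},
    "Fund C (pairs trade)": {"short": 30_000_000, "long": 28_000_000},
}

gross_short = sum(f["short"] for f in funds.values())
net_short = sum(max(f["short"] - f["long"], 0) for f in funds.values())

print(f"Reported gross short: {gross_short:,} shares")
print(f"Actual net short:     {net_short:,} shares")
# The gross figure (105m here) looks alarming; the net directional bet (~42m)
# is far smaller once the hedged longs are taken into account.
```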
We saw a recent quick-paced rise to 27 cents that was way overbought relative to the time it took to get there. That was a certainty to be sold and shorted heavily.
A good news catalyst would be good right now.
 
  • Like
Reactions: 5 users

Diogenese

Top 20
For those unaware.

The ASX recently changed the reporting of short positions to save reporting participants time and money on what they claimed was information that no one used ...


So the gamekeeper is giving the poacher a leg up over the fence?
 
  • Like
  • Fire
  • Sad
Reactions: 5 users
With so many positive things happening in the business at the moment it is hard to fathom why the share price continues to sink.
One major announcement and expecting us to hit 0.10 in no time 😂
 
  • Haha
  • Fire
  • Like
Reactions: 8 users

manny100

Top 20
Hi all.

I asked Grok to "contrast and compare Brainchip's Akida technology with Nanoveu's ECS-DoT AI SoC."

Thanks. AKIDA will be favoured where on-chip learning is needed, e.g. some defence, health and robotics applications. I think in time we may see hybrid chips that pair AKIDA with another chip for its particular qualities.
 
  • Fire
  • Like
  • Love
Reactions: 3 users
This is the reason the price is dropping: shorts of over 100 million plus. Where are they getting the shares from??? NAKED shorting, I bet ya, plus the fact that there will be another capital raise in the future to support management's high cash burn rate. View attachment 92768
Another cap raise must be close, and I'm guessing the shorters are taking such a high position knowing they will probably be able to cover from LDA. Let's just hope a decent $$$ announcement falls before the share price does.
 
  • Like
Reactions: 4 users

7für7

Top 20
It will be an explosive joy to see the shorts burn... come on Sean, what's the deal here?
 
It will be an explosive joy to see the shorts burn... come on Sean, what's the deal here?
He must support the shorters and be secretly accumulating, as there can't be any other reason why our SP is where it is currently.
 
  • Fire
  • Like
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
Hi all.

I asked Grok to "contrast and compare Brainchip's Akida technology with Nanoveu's ECS-DoT AI SoC."

Fact Finder's response from over on the crapper..........

Hi Hop
There are some serious flaws in Grok's reasoning:

1. AKIDA Pico does not use a CPU; like all AKIDA 2.0 variants, it is a standalone processor.
2. AKIDA Pico can be made up of one or two nodes of the AKIDA 2.0 neural fabric, which can be expanded to 256 nodes with 131 TOPS.
3. Each node of AKIDA 2.0 is made up of four NPUs, with one of the four configured to process TENNs.
4. AKIDA Pico can therefore run TENNs state space models for such things as keyword spotting and noise cancellation.
5. All AKIDA technologies can process a range of inputs in parallel and fuse them to produce the desired action.
6. All AKIDA technologies only use as many nodes or NPUs as needed to process any given event.

You may remember that in the published paper introducing AERO, Peter van der Made, Anil Mankar, Adam Osserian & Co used AKD1000 to process the 20 Gas Data Set, each set made up of 10 different gases, with state-of-the-art efficiency. What stood out was that AKD1000 achieved this using only 300 neurons, leaving the majority of the neural fabric free to process other inputs.

The bottom line is that if you set up AKIDA Pico and AKIDA 2.0 to detect and take action on the same event, they would both use the same number of NPUs and thus the same amount of energy. This means that if, most of the time, you are only detecting a single simple event, say a light being turned on or off, you could use AKIDA Pico; but if, at much less frequent intervals, some other very complex event occurs that you also want to monitor and react to, you could use AKIDA 2.0, as it could do the one-node processing for lengthy periods and spring into action when the major event occurs.
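A toy way to picture the "only as many NPUs as the event needs" point above (all numbers invented, not BrainChip specifications): if energy scales with the nodes actually activated per event rather than with the size of the fabric, a small Pico configuration and a larger Akida 2.0 fabric spend the same energy on the same simple workload.

```python
# Toy model of event-driven node usage (all figures invented, not BrainChip data).
# Assumption: energy ~ nodes actually activated per event, not fabric size.

ENERGY_PER_NODE_EVENT_UJ = 1.0   # invented unit cost per activated node per event

def energy_uj(nodes_activated, events):
    return nodes_activated * events * ENERGY_PER_NODE_EVENT_UJ

simple_events = 10_000           # e.g. a light toggling: assume 1 node suffices
complex_events = 5               # rare complex event: assume 8 nodes needed

pico_small_fabric = energy_uj(1, simple_events)   # Pico handles the simple events only
akida2_large_fabric = energy_uj(1, simple_events) + energy_uj(8, complex_events)

print(f"Small Pico fabric, simple events only: {pico_small_fabric:.0f} uJ")
print(f"Larger Akida 2.0 fabric, same simple events "
      f"plus rare complex ones: {akida2_large_fabric:.0f} uJ")
# Same energy on the shared simple workload; the bigger fabric only costs more
# when the rare complex event actually fires.
```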

7. The other thing Grok misses is that AKIDA IP can be embedded in a RISC-V architecture to offload the AI function, as has been done by Frontgrade Gaisler, Andes and SiFive. This one fact puts Nanoveu's chip in a very difficult position in a terrestrial market already dominated by Andes and SiFive, and in a very precarious position in the rad-hard market where Frontgrade Gaisler plays.

My opinion only DYOR

Fact Finder
 
  • Like
  • Fire
Reactions: 10 users

TECH

Regular


I feel very comfortable with this arrangement. Oh, just one question... who approached whom, and the bigger question: why?

I will leave that for you to ponder over, Einsteins.
......:geek:
 
  • Like
Reactions: 3 users

manny100

Top 20
The question is: in what ways, if at all, is the 'new' AKIDA1500 superior to the 2000?
 