I couldn't resist another peek at the Gen 2 Product Brief:
https://brainchip.com/wp-content/uploads/2023/03/BrainChip_second_generation_Platform_Brief.pdf
I can't wait!!!!!!!!!!!!!! ...
"Exceptional spatio-temporal capability: Patent-pending Temporal Event-based Neural Nets (TENNs) revolutionize time-series data applications"
"Efficient Vision Transformer acceleration: Vision Transformer encoder acceleration to provide radically better vision solutions"
Note that, when not operating in pure SNN mode, some processor participation is needed:
"Accelerates today’s networks: CNNs, DNNs, RNNs, Vision Transformers (ViT), and more, directly in hardware with minimal CPU intervention ...
Independent neural processor operation: Intelligent DMA minimizes or eliminates need for CPU in AI acceleration; minimizes system load"
Well I haven't looked at Renesas DRP-AI (Dynamically Reconfigurable Processor for AI), but Akida can do it with its eyes closed.
Multi-Pass Processing Delivers Scalability, Future-Proofing:
Extremely scalable
• Runs larger networks on given set of nodes
• Reduces Silicon footprint and Power in SoC
Transparent to application developer and users
• Handled by runtime software
• Segments and processes network sequentially
Minimizes incremental latency
• Handles multiple layers concurrently
• Minimizes CPU intervention
That's pretty Tardis-like - if you need more nodes than there are on the SoC, we'll just keep reusing the ones we have until the job's done. It's like when the team bus breaks down and you've only got a Mini ...
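Just to make the multi-pass idea concrete, here's a toy Python sketch (my own illustration only - nothing to do with BrainChip's actual runtime software): the network's layers get packed into sequential "passes" that each fit within the nodes physically on the chip, and the passes run one after another.

```python
# Toy illustration of multi-pass scheduling (NOT BrainChip's actual
# algorithm): split a network's layers into consecutive segments, each
# of which fits on the available hardware nodes, then run segments
# sequentially over the same silicon.

def multi_pass_schedule(layer_node_costs, nodes_available):
    """Greedily pack consecutive layers into passes that fit on-chip.

    layer_node_costs: hypothetical per-layer node requirements.
    Returns a list of passes, each a list of layer indices.
    """
    passes, current, used = [], [], 0
    for layer, cost in enumerate(layer_node_costs):
        if cost > nodes_available:
            raise ValueError(f"layer {layer} alone exceeds available nodes")
        if used + cost > nodes_available:  # current pass is full: start a new one
            passes.append(current)
            current, used = [], 0
        current.append(layer)
        used += cost
    if current:
        passes.append(current)
    return passes

# Example: six layers with made-up node costs, on an 8-node device.
print(multi_pass_schedule([3, 4, 2, 5, 1, 6], 8))
# -> [[0, 1], [2, 3, 4], [5]]  (three sequential passes over the same nodes)
```

So a network needing ~21 nodes' worth of work still runs on an 8-node SoC, just in three passes - the latency cost of the extra passes being the trade-off for the smaller silicon footprint the brief mentions.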
"Akida is model-, network-, and OS-agnostic"
So we can use custom models like nViso's, as well as the standard models and our own in-house models.