BRN Discussion Ongoing

  • Haha
  • Like
Reactions: 4 users

MrRomper

Regular
Tata patent / application
Brainchip mention




Conventional gesture detection approaches demand large memory and computation power to run efficiently, thus limiting their use in power and memory constrained edge devices. Present application/disclosure provides a Spiking Neural Network based system which is a robust low power edge compatible ultrasound-based gesture detection system. The system uses a plurality of speakers and microphones that mimics a Multi Input Multi Output (MIMO) setup thus providing requisite diversity to effectively address fading. The system also makes use of distinctive Channel Impulse Response (CIR) estimated by imposing sparsity prior for robust gesture detection. A multi-layer Convolutional Neural Network (CNN) has been trained on these distinctive CIR images and the trained CNN model is converted into an equivalent Spiking Neural Network (SNN) via an ANN (Artificial Neural Network)-to-SNN conversion mechanism. The SNN is further configured to detect/classify gestures performed by user(s).
A separate Tata patent was published on the same day at the USPTO.
https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/20230334300
 
  • Like
  • Love
  • Fire
Reactions: 20 users
Latest GitHub info & updates from early Sept. Lots of nice 2.0 stuff :)


Sep 5
@ktsiknos-brainchip
ktsiknos-brainchip
2.4.0-doc-1
cb33735
Upgrade to QuantizeML 0.5.3, Akida/CNN2SNN 2.4.0 and Akida models 1.2.0
Latest


Update QuantizeML to version 0.5.3

  • "quantize" (both method and CLI) will now also perform calibration and cross-layer equalization
  • Changed default quantization scheme to 8 bits (from 4) for both weights and activations
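Below is a minimal sketch (my own illustration, not part of the release notes) of what the new defaults mean in practice. It assumes the QuantizeML Python API follows its documented pattern; the exact keyword names (e.g. samples) are assumptions and may differ slightly between versions, and the Keras model and calibration data are placeholders.

```python
# Illustrative only: quantize a float Keras model with QuantizeML 0.5.x.
# Per the notes above, quantize() now also runs calibration and cross-layer
# equalization, and defaults to 8-bit weights and activations.
import numpy as np
from tensorflow import keras
from quantizeml.models import quantize

# Placeholder float model and unlabeled calibration samples.
model = keras.Sequential([
    keras.layers.Input((32, 32, 3)),
    keras.layers.Conv2D(8, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10),
])
calib_samples = np.random.randint(0, 255, (16, 32, 32, 3), dtype=np.uint8)

# With the 0.5.3 defaults this should return an 8-bit quantized model,
# calibrated on the provided samples (keyword name is an assumption).
quantized_model = quantize(model, samples=calib_samples)
```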

Update Akida and CNN2SNN to version 2.4.0

New features​

  • [Akida] Updated compatibility with python 3.11, dropped support for python 3.7
  • [Akida] Support for unbounded ReLU activation by default
  • [Akida] C++ helper added on CLI to allow testing Akida engine from a host PC
  • [Akida] Prevent user from mixing V1 and V2 Layers
  • [Akida] Add fixtures for the DepthwiseConv2D
  • [Akida] Add AKD1500 virtual device
  • [Akida] Default buffer_bitwidth for all layers is now 32.
  • [Akida] InputConv2D and the Stem convolution now take the same parameters
  • [Akida] Estimated bit width of variables added to json serialised model
  • [Akida] Added Akida 1500 PCIe driver support
  • [Akida] Shifts are now uint8 instead of uint4
  • [Akida] Bias variables are now int8
  • [Akida] Support of Vision Transformer inference
  • [Akida] Model.predict now supports Akida 2.0 models
  • [Akida] Add an Akida 2.0 ExtractToken layer
  • [Akida] Add an Akida 2.0 Conv2D layer
  • [Akida] Add an Akida 2.0 Dense1D layer
  • [Akida] Add an Akida 2.0 DepthwiseConv2D layer
  • [Akida] Add an Akida 2.0 DepthwiseConv2DTranspose layer
  • [Akida] Add an Akida Dequantizer layer
  • [Akida] Support the conversion of QuantizeML CNN models into Akida 1.0 models
  • [Akida] Support the conversion of QuantizeML CNN models into Akida 2.0 models
  • [Akida] Support Dequantizer and Softmax on conversion of a QuantizeML model
  • [Akida] Model metrics now include configuration clocks
  • [Akida] Pretty-print serialized JSON model
  • [Akida] Include AKD1000 tests when deploying engine
  • [Akida/infra] Add first official AKD1500 PCIe driver support
  • [CNN2SNN] Updated dependency to QuantizeML 0.5.0
  • [CNN2SNN] Updated compatibility with tensorflow 2.12
  • [CNN2SNN] Provide a better solution to match the block pattern with the right conversion function
  • [CNN2SNN] Implement DenseBlockConverterVX
  • [CNN2SNN] GAP output quantizer can be signed
  • [CNN2SNN] removed input_is_image from convert API, now deduced by input channels
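To show where the conversion features above fit, here is a rough follow-on sketch (again mine, not from the release notes) that converts a QuantizeML-quantized model to Akida with CNN2SNN 2.4.0 and runs it. Note that input_is_image is no longer passed to convert(); per the notes it is deduced from the input channels.

```python
# Illustrative only: convert the quantized model from the QuantizeML sketch
# above into an Akida model and run inference (on hardware if present,
# otherwise in the software backend).
import numpy as np
import akida
from cnn2snn import convert

akida_model = convert(quantized_model)   # quantized_model from the sketch above
akida_model.summary()

# Map to an attached device (e.g. an AKD1000 PCIe board) if one is available.
devices = akida.devices()
if devices:
    akida_model.map(devices[0])

inputs = np.random.randint(0, 255, (8, 32, 32, 3), dtype=np.uint8)
outputs = akida_model.predict(inputs)
```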

Bug fixes:​

  • [Akida] Fixed wrong buffer size in update_learn_mem, leading to handling of bigger buffers than required
  • [Akida] Fixed issue in matmul operation leading to an overflow in corner cases
  • [Akida] Akida models could not be created by a list of layers starting from InputConv2D
  • [Akida] Increasing batch size between two forward passes did not work
  • [Akida] Fix variables shape check failure
  • [engine] Optimize output potentials parsing
  • [CNN2SNN] Fixed conversion issue when converting QuantizeML model with Reshape + Dense
  • [CNN2SNN] Convert with input_is_image=False raises an exception if the first layer is a Stem or InputConv2D
Note that version 2.3.7 is the last Akida and CNN2SNN drop supporting Python 3.7 (EOL end of June 2023).

Update Akida models to 1.2.0

  • Updated CNN2SNN minimal required version to 2.4.0 and QuantizeML to 0.5.2
  • Pruned several models from the zoo: Imagenette, cats_vs_dogs, melanoma classification, both ocular disease, ECG classification, CWRU fault detection, VGG, face verification
  • Added load_model/save_models utils
  • Added a 'fused' option to separable layer block
  • Added a helper to unfuse SeparableConvolutional2D layers
  • Added a 'post_relu_gap' option to layer blocks
  • Stride 2 is now the default for MobileNet models
  • Training scripts will now always save the model after tuning/calibration/rescaling
  • Reworked GXNOR/MNIST pipeline to get rid of distillation
  • Removed the renaming module
  • Data server with pretrained models reorganized in preparation for Akida 2.0 models
  • Legacy 1.0 models have been updated towards 2.0, providing both a compatible architecture and a pretrained model
  • 2.0 models now also come with a pretrained 8bit helper (ViT, DeiT, CenterNet, AkidaNet18 and AkidaUNet)
  • ReLU max value is now configurable in layer_blocks module
  • It is now possible to build ‘unfused’ separable layer blocks
  • Legacy quantization parameters removed from model creation APIs
  • Added an extract.py module that allows samples extraction for model calibration
  • Dropped pruning tools support
  • Added Conv3D blocks

Bug fixes:​

  • Removed duplicate DVS builders in create CLI
  • Silenced unexpected verbosity in detection models evaluation pipeline

Known issues:​

  • Pretrained helpers will fail downloading models on Windows
  • Edge models are not available for 2.0 yet

Documentation update

  • Large rework of the documentation to integrate changes for 2.0
  • Added QuantizeML user guide, reference API and examples
  • Introduced a segmentation example
  • Introduced a vision transformer example
  • Introduced a tutorial to upgrade 1.0 to 2.0
  • Updated zoo performance page with 2.0 models
  • Aligned overall theme with Brainchip website
  • Fixed a menu display issue in the example section
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 42 users

Frangipani

Top 20

This Zoom talk (I assume it will be held in English) sounds intriguing, especially given that it is organised by TU Darmstadt’s Institute of Automotive Engineering (FZD), whose head is Prof. Dr.-Ing. Steven Peters (founder and former Head of AI Research at Mercedes-Benz, responsible for implementing neuromorphic technology in the Vision EQXX concept car that - as the rest of the world found out in January 2022 - used Akida for in-cabin AI).

The paragraph underneath the actual announcement says that this Zoom talk is part of a series of public lectures primarily aimed at students, research associates, professors, and representatives of the automotive and supplier industries, and that the intention of this series of public lectures is to nurture closer cooperation (well, that’s my own free translation, it literally says “deepen the contact”) between industry, university students and research institutions. The university then adds that this series of public lectures does not pursue any economic interests, and that participation is hence free and does not require pre-registration.





The Zoom talk’s speaker is a lady from TWT GmbH Science & Innovation (www.twt-innovation.de/en/) - the initialism TWT stands for Technisch-Wissenschaftlicher Transfer (Technical Scientific Transfer):




Under Topics we are passionate about, the TWT website (https://twt-innovation.de/en/themen/) lists Artificial Intelligence, Future Engineering, Cloud Transformation, Autonomous Driving, Data Science, Virtual Experience, E-Mobility, Model Based System Engineering and Quantum Computing.

Now comes the electrifying part: TWT has quite a number of illustrious partners that are or could be of interest to us (in alphabetical order): Airbus Group, AMG, Audi, aws, BMW, Bosch, CARIAD, Continental, Dassault Systèmes, Daimler Truck, EnBW (German energy supplier), ESA, Here Technologies, Lufthansa, MAN, Mercedes-Benz, MINI, NVIDIA, Porsche, Rolls Royce, Samsung, T-Systems, VW… (TWT is also partnered with IBM, by the way.)

As mentioned above, the Zoom meeting is open to the public and does not require pre-registration. Unfortunately for those of you residing in Oceania and East Asia/SEA, it will be held on Nov 20 at 6 pm Central European Time, hence at ungodly hours for East Coast Australians, but maybe some early bird Kiwis (6 am on Nov 21) or night owl Sandgropers (1 am on Nov 21) would want to listen in and possibly ask some clever questions. For those in the Americas, the Nov 20 call will start at 12pm EST/9 am PST.
 

Last edited:
  • Like
  • Love
  • Fire
Reactions: 35 users

Kachoo

Regular
Not sure why the last bit went weird, maybe because of the length.

Anyway, last bit here.

ES: You’re enabling your customers, it sounds to me, to focus on what they know best, and you’re providing these tools that are just so completely out of the normal expertise of these organizations. That’s got to be tremendously valuable.

MD: Yeah, absolutely. I think, just to add to this, it’s all about creating an opportunity for a more sustainable future as well. AI’s great, whether it’s generative AI or predictive AI. We have to make sure it’s sustainable, and it’s able to add real value to consumers or developers of those embedded products. Ultimately, it has to be good for the wider world and humanity. And I think that’s what it all boils down to. And that’s the core of Renesas, really making this world smarter and more efficient for a more sustainable future.

ES: Exciting stuff for the folks working in that space to get to have such a powerful set of tools at their disposal. With that, I think we are out of time. And I just want to thank both of you so much for joining us today, sharing your insights on the industry at large, as well as clueing us all in to some of the tools that Renesas is making available to the marketplace. Thank you, Mo, for being here.

MD: Thank you, appreciate it.
FMF,

Renesas not mentioning BRN or Akida would be the norm if they purchased the IP.

When you buy the right to produce another company's IP, you are not required to mention them; as the owner, your only obligation is to pay royalties if it's commercial.

The company I worked for had equipment with patented tech on the tool, and they were required to monitor the units used so they could later pay a royalty to the IP owner.

There could be other rules too, but in the end it's based on what is and is not allowed to be disclosed.

Cheers
 
  • Like
  • Fire
Reactions: 16 users

Tothemoon24

Top 20

LEADERS

The Future of Generative AI Is the Edge​


Published
October 19, 2023
By
Ravi Annavajjhala

The advent of ChatGPT, and Generative AI in general, is a watershed moment in the history of technology and is likened to the dawn of the Internet and the smartphone. Generative AI has shown limitless potential in its ability to hold intelligent conversations, pass exams, generate complex programs/code, and create eye-catching images and video. While GPUs run most Gen AI models in the cloud – both for training and inference – this is not a long-term scalable solution, especially for inference, owing to factors that include cost, power, latency, privacy, and security. This article addresses each of these factors along with motivating examples to move Gen AI compute workloads to the edge.
Most applications run on high-performance processors – either on device (e.g., smartphones, desktops, laptops) or in data centers. As the share of applications that utilize AI expands, these CPU-only processors become inadequate. Furthermore, the rapid expansion in Generative AI workloads is driving an exponential demand for AI-enabled servers with expensive, power-hungry GPUs, which in turn is driving up infrastructure costs. These AI-enabled servers can cost upwards of 7X the price of a regular server, and GPUs account for 80% of this added cost.
Additionally, a cloud-based server consumes 500W to 2000W, whereas an AI-enabled server consumes between 2000W and 8000W – roughly 4x more! To support these servers, data centers need additional cooling modules and infrastructure upgrades – which can be even higher than the compute investment. Data centers already consume 300 TWh per year, almost 1% of the total worldwide power consumption. If the trends of AI adoption continue, then as much as 5% of worldwide power could be used by data centers by 2030. Additionally, there is an unprecedented investment into Generative AI data centers. It is estimated that data centers will account for up to $500 billion in capital expenditures by 2027, mainly fueled by AI infrastructure requirements.
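To make the quoted figures concrete (my arithmetic, not the article's; the roughly 30,000 TWh figure for annual worldwide electricity consumption is an assumption the article implies but does not state):

$$
\frac{300\ \mathrm{TWh}}{\approx 30{,}000\ \mathrm{TWh}} \approx 1\%,
\qquad
5\% \times 30{,}000\ \mathrm{TWh} \approx 1{,}500\ \mathrm{TWh\ per\ year\ by\ 2030}
$$

Likewise, 2,000-8,000 W for an AI-enabled server against 500-2,000 W for a regular server is roughly a 4x increase at both ends of the range.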

The electricity consumption of data centers, already 300 TWh per year, will go up significantly with the adoption of generative AI.
AI compute cost as well as energy consumption will impede mass adoption of Generative AI. Scaling challenges can be overcome by moving AI compute to the edge and using processing solutions optimized for AI workloads. With this approach, other benefits also accrue to the customer, including lower latency, privacy, reliability, and increased capability.

Compute follows data to the Edge

Ever since AI emerged from the academic world a decade ago, training and inference of AI models have occurred in the cloud/data center. With much of the data being generated and consumed at the edge – especially video – it only made sense to move the inference of the data to the edge, thereby improving the total cost of ownership (TCO) for enterprises due to reduced network and compute costs. While the AI inference costs on the cloud are recurring, the cost of inference at the edge is a one-time hardware expense. Essentially, augmenting the system with an Edge AI processor lowers the overall operational costs. Like the migration of conventional AI workloads to the Edge (e.g., appliance, device), Generative AI workloads will follow suit. This will bring significant savings to enterprises and consumers.
The move to the edge coupled with an efficient AI accelerator to perform inference functions delivers other benefits as well. Foremost among them is latency. For example, in gaming applications, non-player characters (NPCs) can be controlled and augmented using generative AI. Using LLM models running on edge AI accelerators in a gaming console or PC, gamers can give these characters specific goals, so that they can meaningfully participate in the story. The low latency from local edge inference will allow NPC speech and motions to respond to players' commands and actions in real-time. This will deliver a highly immersive gaming experience in a cost effective and power efficient manner.
In applications such as healthcare, privacy and reliability are extremely important (e.g., patient evaluation, drug recommendations). Data and the associated Gen AI models must be on-premise to protect patient data (privacy), and any network outage that blocks access to AI models in the cloud can be catastrophic. An Edge AI appliance running a Gen AI model purpose built for each enterprise customer – in this case a healthcare provider – can seamlessly solve the issues of privacy and reliability while delivering lower latency and cost.

Generative AI on edge devices will ensure low latency in gaming, preserve patient data, and improve reliability for healthcare.
Many Gen AI models running on the cloud can be close to a trillion parameters – these models can effectively address general purpose queries. However, enterprise specific applications require the models to deliver results that are pertinent to the use case. Take the example of a Gen AI based assistant built to take orders at a fast-food restaurant – for this system to have a seamless customer interaction, the underlying Gen AI model must be trained on the restaurant’s menu items, including the allergens and ingredients. The model size can be optimized by using a superset Large Language Model (LLM) to train a relatively small, 10-30 billion parameter LLM and then applying additional fine tuning with the customer specific data. Such a model can deliver results with increased accuracy and capability. And given the model’s smaller size, it can be effectively deployed on an AI accelerator at the Edge.
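Purely as an illustration of the pattern described above (not something the article prescribes), fine-tuning a smaller open LLM on customer-specific data might look like the following; the model name, data file and hyperparameters are hypothetical, and it assumes the Hugging Face transformers/datasets libraries.

```python
# Hypothetical sketch: fine-tune a smaller LLM on restaurant-specific text
# (menu items, allergens, ingredients) so it can later be deployed on an
# edge AI accelerator. Model name and data file are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "some-open-13b-llm"                      # hypothetical 10-30B base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Domain data, e.g. ordering dialogues that mention menu items and allergens.
data = load_dataset("json", data_files="restaurant_orders.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="menu-llm", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```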

Gen AI will win at the Edge

There will always be a need for Gen AI running in the cloud, especially for general-purpose applications like ChatGPT and Claude. But when it comes to enterprise specific applications, such as Adobe Photoshop’s generative fill or GitHub Copilot, Generative AI at the Edge is not only the future, it’s also the present. Purpose-built AI accelerators are the key to making this possible.



The Author

Ravi Annavajjhala
As a Silicon Valley veteran and CEO of Kinara Inc., Ravi Annavajjhala brings more than 20 years of experience spanning business development, marketing, and engineering, building leading-edge technology products and bringing them to market. In his current role as chief executive officer of Deep Vision, Ravi serves on its board of directors and has raised $50M, taking the company’s Ara-1 processor from pre-silicon to full-scale production and ramping the 2nd-generation processor, Ara-2, in volume. Prior to joining Deep Vision, Ravi held executive leadership positions at Intel and SanDisk, where he played key roles in driving revenue growth, evolving strategic partnerships, and developing product roadmaps that led the industry with cutting-edge features and capabilities.
 
  • Like
  • Fire
  • Love
Reactions: 37 users
Not sure if posted before.

Was on AKD1000.

Degree project

Neuromorphic Medical Image Analysis at the Edge: On-Edge Training with the Akida Brainchip
Degree Project in Computer Science and Engineering, First Cycle, 15 credits
Date: June 8, 2023
Supervisor: Jörg Conradt
Examiner: Pawel Herman
Swedish title: Medicinsk bildanalys med neuromorfisk hårdvara: On-Edge-träning med Akida Brainchip
School of Electrical Engineering and Computer Science

Future work lists some additional considerations on external inputs to refine results.

Paper HERE
 
  • Like
  • Love
  • Fire
Reactions: 20 users

IloveLamp

Top 20
Very interesting like from Nikunj.......... 🤔😃

Then again............ there's lots to like!!!


LINK TO ARTICLE -
The really astonishing thing is it can apparently outperform GPUs and CPUs specifically designed to tackle AI inference. For example, Numenta took a workload for which Nvidia reported performance figures with its A100 GPU, and ran it on an augmented 48-core 4th-Gen Sapphire Rapids CPU. In all scenarios, it was faster than Nvidia’s chip based on total throughput. In fact, it was 64 times faster than a 3rd-Gen Intel Xeon processor and ten times faster than the A100 GPU.


 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 27 users

jtardif999

Regular
  • Haha
  • Fire
  • Like
Reactions: 7 users

Diogenese

Top 20
Very interesting like from Nikunj.......... 🤔😃

Then again............ there's lots to like!!!


LINK TO ARTICLE -
The really astonishing thing is it can apparently outperform GPUs and CPUs specifically designed to tackle AI inference. For example, Numenta took a workload for which Nvidia reported performance figures with its A100 GPU, and ran it on an augmented 48-core 4th-Gen Sapphire Rapids CPU. In all scenarios, it was faster than Nvidia’s chip based on total throughput. In fact, it was 64 times faster than a 3rd-Gen Intel Xeon processor and ten times faster than the A100 GPU.


Numenta does not use spikes. They are addicted to AI fast food - MACs:

US2022108157A1 HARDWARE ARCHITECTURE FOR INTRODUCING ACTIVATION SPARSITY IN NEURAL NETWORK




[0072] A multiply circuit 330 may take various forms. In one embodiment, a multiply circuit 330 is a multiply-accumulate circuit (MAC) that includes multiply units and accumulators. The multiply units may be used to perform multiplications and additions. A multiply unit is a circuit with a known structure and may be used for binary multiplication or floating-point multiplication. An accumulator is a memory circuit that receives and stores values from the multiply units. The values may be stored individually or added together in the accumulator.
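For context, and this is my framing rather than the patent's: a MAC array computes the full dense dot product for every output,

$$ y_j = \sum_{i=1}^{N} w_{ji}\, x_i , $$

i.e. N multiply-accumulate operations per output neuron regardless of how many of the x_i are zero, whereas an event-based (spiking) design only spends work on the non-zero inputs, which is where the power saving comes from.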
 
  • Like
  • Haha
  • Love
Reactions: 19 users

IloveLamp

Top 20
Numenta does not use spikes. They are addicted to AI fast food - MACs:

US2022108157A1 HARDWARE ARCHITECTURE FOR INTRODUCING ACTIVATION SPARSITY IN NEURAL NETWORK



[0072] A multiply circuit 330 may take various forms. In one embodiment, a multiply circuit 330 is a multiply-accumulate circuit (MAC) that includes multiply units and accumulators. The multiply units may be used to perform multiplications and additions. A multiply unit is a circuit with a known structure and may be used for binary multiplication or floating-point multiplication. An accumulator is a memory circuit that receives and stores values from the multiply units. The values may be stored individually or added together in the accumulator.
For me -

Brainchip employee liking Senior designer @microsoft post is positive

Brainchip employee Liking post about intels advances is also positive

I checked out Numentas LinkedIn page they advertise themselves as a SOFTWARE company.

If you read the article posted, they attribute the gains to the numenta software imo

Perhaps Intel is using their software to integrate akida ip and claiming the advantages are from Numenta when actually the advantages will come from our ip?



The performance and efficiency advances they're claiming make me highly suspicious

Not the longest of bows to draw imo

Waiting for the Intel / SiFive IP agreement like...

 
Last edited:
  • Like
  • Haha
  • Fire
Reactions: 14 users

Earlyrelease

Regular
Numenta does not use spikes. They are addicted to AI fast food - MACs:

Diogenese. On behalf of the silent majority that lurk on these pages, can I just call out what a magnificent contribution you bring to these pages. Your input saves us non-engineering mortals hours of research and angst. Thanks for the supreme effort. Drinks are on me when we have our $3 party late next year.
 
  • Like
  • Fire
  • Love
Reactions: 13 users

Diogenese

Top 20
For me -

Brainchip employee liking Senior designer @microsoft post = half win

Brainchip employee Liking post about intels advances = half win

I checked out Numentas LinkedIn page they advertise themselves as a SOFTWARE company.

If you read the article posted, they attribute the gains to the numenta software imo

Perhaps Intel is using their software to integrate akida ip and claiming the advantages are from Numenta when actually the advantages will come from our ip?


The performance and efficiency advances they're claiming make me highly suspicious

Not the longest of bows to draw imo

Waiting for the Intel IP agreement like...

Hi Ill,

Yes. Numenta make a big thing about sparsity. They use a winner-take-all approach which ignores the lower probability possibilities from other neurons.
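As a rough illustration of what winner-take-all sparsity means in practice (a generic k-WTA sketch, not Numenta's actual code): only the k largest activations in a layer are kept and everything else is zeroed, so the lower-probability neurons are simply never seen downstream.

```python
# Generic k-winner-take-all activation sparsity (illustrative only):
# keep the k largest activations per sample, zero out the rest.
import numpy as np

def k_winner_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Zero all but the k largest values in each row of a (batch, units) array."""
    out = np.zeros_like(activations)
    top_idx = np.argpartition(activations, -k, axis=1)[:, -k:]  # top-k per row
    rows = np.arange(activations.shape[0])[:, None]
    out[rows, top_idx] = activations[rows, top_idx]
    return out

layer_out = np.random.rand(4, 64)                 # dummy batch of 4, 64 units
sparse_out = k_winner_take_all(layer_out, k=8)    # ~87% of activations dropped
```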

If Numenta's patents are indicative of their software architecture, their system will use a lot more power than Akida. In fact, Akida could greatly improve both the power efficiency and speed of their system. Their patent even allows for the possibility of different AI accelerators. Unfortunately, Numenta think ML is software, so if Akida were to be used, Numenta would be largely redundant.

US2022108156A1 HARDWARE ARCHITECTURE FOR PROCESSING DATA IN SPARSE NEURAL NETWORK 20201005



[0042] AI accelerator 104 may be a processor that is efficient at performing certain machine learning operations such as tensor multiplications, convolutions, tensor dot products, etc. In various embodiments, AI accelerator 104 may have different hardware architectures. For example, in one embodiment, AI accelerator 104 may take the form of field-programmable gate arrays (FPGAs). In another embodiment, AI accelerator 104 may take the form of application-specific integrated circuits (ASICs), which may include circuits alone or circuits in combination with firmware.
...
[0048] Machine learning models 140 may include different types of algorithms for making inferences based on the training of the models. Examples of machine learning models 140 include regression models, random forest models, support vector machines (SVMs) such as kernel SVMs, and artificial neural networks (ANNs) such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, long short term memory (LSTM), reinforcement learning (RL) models. Some of the machine learning models may include a sparse network structure whose detail will be further discussed with reference to FIG. 2B through 2D. A machine learning model 140 may be an independent model that is run by a processor. A machine learning model 140 may also be part of a software application 130. Machine learning models 140 may perform various tasks.

Numenta have their foot in Intel's door:

Home | Numenta



... but that may not prevent Akida from coming down the chimney.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 31 users

IloveLamp

Top 20
Hi Ill,

Yes. Numenta make a big thing about sparsity. They use a winner-take-all approach which ignores the lower probability possibilities from other neurons.

If Numenta's patents are indicative of their software architecture, their system will use a lot more power than Akida. In fact, Akida could greatly improve both the power efficiency and speed of their system. Their patent even allows for the possibility of different AI accelerators. Unfortunately, Numenta think ML is software, so if Akida were to be used, Numenta would be redundant.

US2022108156A1 HARDWARE ARCHITECTURE FOR PROCESSING DATA IN SPARSE NEURAL NETWORK 20201005


[0042] AI accelerator 104 may be a processor that is efficient at performing certain machine learning operations such as tensor multiplications, convolutions, tensor dot products, etc. In various embodiments, AI accelerator 104 may have different hardware architectures. For example, in one embodiment, AI accelerator 104 may take the form of field-programmable gate arrays (FPGAs). In another embodiment, AI accelerator 104 may take the form of application-specific integrated circuits (ASICs), which may include circuits alone or circuits in combination with firmware.
...
[0048] Machine learning models 140 may include different types of algorithms for making inferences based on the training of the models. Examples of machine learning models 140 include regression models, random forest models, support vector machines (SVMs) such as kernel SVMs, and artificial neural networks (ANNs) such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, long short term memory (LSTM), reinforcement learning (RL) models. Some of the machine learning models may include a sparse network structure whose detail will be further discussed with reference to FIG. 2B through 2D. A machine learning model 140 may be an independent model that is run by a processor. A machine learning model 140 may also be part of a software application 130. Machine learning models 140 may perform various tasks.

Numenta have their foot in Intel's door:

Home | Numenta


... but that may not prevent Akida from coming down the chimney.
Thanks for explaining in a way even i can understand 😚

 
  • Haha
  • Like
  • Fire
Reactions: 12 users

Neuromorphia

fact collector
Last edited:
  • Like
  • Haha
  • Love
Reactions: 16 users

IloveLamp

Top 20
Chief data scientist at the Indian Institute of Technology gives us a nice mention here alongside all the big players......😌


 
  • Like
  • Fire
  • Love
Reactions: 60 users

IloveLamp

Top 20
  • Like
  • Fire
  • Thinking
Reactions: 31 users

TopCat

Regular
Infineon using snn for radar...

Akida?

I’d like it to be Akida, but from other papers I’ve come across I think Infineon does its research with SpiNNaker, as the photo suggests. But maybe they could use Akida as it’s commercially available 🤞🤞
 
  • Like
  • Thinking
  • Wow
Reactions: 13 users

TECH

Regular
Hang on to your hats, our 4C will more than likely be released this coming week.

I have already placed my bet, but would be over the moon to be 100% wrong !....the journey continues.

Another thing to remember is that it's taken us this long to achieve what's already been achieved through hard work, self-discipline and founders who are at the top of their game; they have both been very flexible, employed first-class staff and let staff go as the need arose. So if we are, say, 3 years ahead (conservative maybe) of other companies attempting to achieve excellence at the far edge, they too will be faced with a similar timeline in my view, despite being large corporations. Time will prove me right or wrong I guess.

See you during the week.....Tech (NZ)
 
  • Like
  • Fire
  • Thinking
Reactions: 24 users
I’d like it to be Akida, but from other papers I’ve come across I think Infineon does its research with SpiNNaker, as the photo suggests. But maybe they could use Akida as it’s commercially available 🤞🤞
Yeah true but remember we have just been in the hackathon with Infineon and Sony for the pedestrian detection.

Would be great if Infineon had some kind of revelation with us.

I know Nikunj speaks about how sensors from Sony or Infineon can be used with Akida in his TinyML video on the hackathon.




 
  • Like
  • Fire
  • Love
Reactions: 48 users
Top Bottom