BRN Discussion Ongoing

Diogenese

Top 20
I may have missed it earlier, Diogenese, but what exactly is 'taping-out'?

If you don't have dreams, you can't have dreams come true!
In the olden days, ICs were made by using photomasks to define the patterns of each layer of the silicon (doped to be semiconductive, positive or negative, or insulative). The wafer was coated with photoresist, light was shone through the mask to harden parts of the resist, the unhardened photoresist was removed, and then an acid etch was performed to remove the unwanted silicon, leaving the bits under the hardened photoresist.

Edit: Rinse and repeat.

The early masks were made by using black tape on glass slides.

Taping out was the process of forming the masks.

Nowadays, the patterns of the layers are encoded in digital files, and they use short wavelength UV or X-rays to cure the photoresist.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 36 users

Murphy

Life is not a dress rehearsal!
In the olden days, ICs were made by using photomasks to define the patterns of each layer of the silicon (doped to be semiconductive, positive or negative, or insulative). The wafer was coated with photoresist, light was shone through the mask to harden parts of the resist, the unhardened photoresist was removed, and then an acid etch was performed to remove the unwanted silicon, leaving the bits under the hardened photoresist.

The early masks were made by using black tape on glass slides.

Taping out was the process of forming the masks.

Nowadays, the patterns of the layers are encoded in digital files, and they use short wavelength UV or X-rays to cure the photoresist.
Thank you sir.

If you don't have dreams, you can't have dreams come true!
 
  • Like
  • Love
Reactions: 7 users

GrandRhino

Founding Member
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725

Hey @TechGirl, good find!
It's not password protected for me, maybe they just unlocked it?
 
  • Love
  • Like
  • Fire
Reactions: 13 users

DK6161

Regular
Looking at the increasing Buy and Sell numbers, I can't help but think that something big is about to happen.
Definitely a lot of traders queueing up.
Either a huge announcement is coming (maybe with the next 4C) or another meme coming from me 🤣 😅.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

BaconLover

Founding Member
Total Divas GIF by E!

Especially when you're a BrainChip investor.
Extreme edge is where things are at.
 
  • Like
  • Haha
  • Love
Reactions: 17 users

GrandRhino

Founding Member
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725

4 Bits Are Enough​


Peter AJ van der Made
Traditional convolutional neural networks (CNNs) use 32-bit floating point parameters and activations. They require extensive computing and memory resources. Early convolutional neural networks such as AlexNet had 62 million parameters.
Over time, CNNs have increased in size and capability. GPT-3 is a transformer network with 175 billion parameters. The compute used in the largest AI training runs has increased exponentially, with an average doubling period of 3.4 months. Millions or billions of multiply-accumulate (MAC) operations must be executed for each inference. These operations are performed in batches of data on large servers with stacks of Graphics Processing Units (GPUs) or on costly cloud services, and these requirements keep accelerating.
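To get a feel for where those per-inference MAC counts come from, here is a rough back-of-the-envelope sketch. It uses the commonly cited AlexNet first-layer dimensions, which are an assumption for illustration, not a figure from this blog:

```python
# Back-of-the-envelope MAC count for a single convolutional layer:
# one multiply-accumulate per kernel weight per output element.

def conv_macs(out_h: int, out_w: int, out_c: int, k_h: int, k_w: int, in_c: int) -> int:
    return out_h * out_w * out_c * k_h * k_w * in_c

# AlexNet conv1 (commonly cited shapes): 227x227x3 input, 96 filters of
# 11x11x3 at stride 4, giving a 55x55x96 output map.
macs = conv_macs(out_h=55, out_w=55, out_c=96, k_h=11, k_w=11, in_c=3)
print(f"conv1 MACs per image: {macs:,}")  # ~105 million, for one layer alone
```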
At the other end, the increasing popularity of Deep Learning networks in small electronic devices demands energy-efficient, low-latency implementations of, in some cases, similarly capable models. Deep Learning networks are seen in smartphones, industrial IoT, home appliances, and security devices. Many of these devices are subject to stringent power and security requirements. Security issues can be mitigated by eliminating the uploading of raw data to the internet and performing all or most of the processing on the device itself. However, given the constraints at the edge, models running on these devices must be much more compact in every dimension, without compromising accuracy.
A new architectural approach, such as event-based processing with at-memory compute, is fundamental to addressing the efficiency challenge. Such approaches draw inspiration from neuromorphic principles, mimicking the brain to minimize operations and hence energy consumption. However, energy efficiency is the cumulative effect not just of the architecture but also of model size, including the bit-width of weights and activation parameters. In particular, support for 32-bit floating point requires complex, large-footprint hardware. Reducing the size of these parameters and weights can provide a substantial benefit in performance and in reducing the hardware needed to compute. However, this must be done judiciously and innovatively to keep the outcomes and accuracy similar to the larger models. With the process of quantization, activation parameters and weights can be converted to low bit-width values. Several sources have reported that lower-precision computation can provide similar classification accuracy at lower power consumption and better latency. This enables a smaller-footprint hardware implementation that reduces development, silicon, and packaging costs, enabling on-device processing in handheld, portable, and edge devices.
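To see how directly the bit-width of the weights translates into memory and bandwidth savings, here is a quick illustrative calculation for a 62-million-parameter network (the AlexNet-scale figure quoted above); a sketch, not a benchmark:

```python
# Rough weight-storage comparison for a 62-million-parameter network.
params = 62_000_000

fp32_mb = params * 32 / 8 / 1e6  # 32-bit floats -> bytes -> megabytes
int4_mb = params * 4 / 8 / 1e6   # 4-bit weights -> bytes -> megabytes

print(f"32-bit weights: {fp32_mb:.0f} MB")  # ~248 MB
print(f" 4-bit weights: {int4_mb:.0f} MB")  # ~31 MB, an 8x reduction
```

The same factor of eight applies to the weight data moved between memory and the compute units for every inference, which is where much of the energy goes.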

To make the development process easier, BrainChip has developed the MetaTF™ software, which integrates with TensorFlow™ (and other edge AI development flows) and includes APIs for 4-bit processing and quantization, enabling retraining and optimization.
Developers can therefore seamlessly build and optimize for the Akida Neural Processor and benefit from executing neural networks entirely on-chip, efficiently and with low latency.
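As a self-contained sketch of what such a quantize-then-retrain flow looks like conceptually (this is not the MetaTF/cnn2snn API; the helper below is a simplified stand-in that fake-quantizes Keras weights in place):

```python
# Simplified stand-in for a 4-bit quantize-and-retrain flow (not the MetaTF API).
import numpy as np
import tensorflow as tf

def fake_quantize_weights_4bit(model: tf.keras.Model) -> tf.keras.Model:
    """Round Conv/Dense parameters to 16 symmetric levels, one simple
    per-tensor 4-bit scheme chosen here purely for illustration."""
    for layer in model.layers:
        if not isinstance(layer, (tf.keras.layers.Conv2D, tf.keras.layers.Dense)):
            continue  # leave BatchNorm and other layers untouched
        quantized = []
        for w in layer.get_weights():
            max_abs = np.abs(w).max()
            if max_abs == 0:
                quantized.append(w)
                continue
            scale = max_abs / 7.0  # integer levels -7..7
            quantized.append(np.clip(np.round(w / scale), -8, 7) * scale)
        layer.set_weights(quantized)
    return model

# 1. Build (or load) an ordinary floating-point Keras model - a toy example here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# 2. Quantize the weights to 4-bit levels, then fine-tune briefly so accuracy
#    recovers close to the float baseline (the "retraining" step above).
fake_quantize_weights_4bit(model)
# model.fit(train_images, train_labels, epochs=2)  # fine-tune on your own data
```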
Quantization is the process of mapping continuous, infinite values to a discrete, finite set; in the case of modern AI, it maps larger floating-point values to a discrete set of smaller numbers. Quantization provides an efficient representation, manipulation, and communication of numeric values in Machine Learning (ML) applications. 32-bit floating-point numbers are mapped onto a discrete set of 4-bit values (0 to 15) to minimize the number of bits required while maintaining classification accuracy. Remarkable performance is achieved in 4-bit quantized models for diverse tasks such as object classification, face recognition, segmentation, object detection, and keyword recognition.
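A minimal sketch of that mapping, assuming plain uniform (affine) quantization: floats are mapped onto the integer levels 0 to 15 with a scale and zero-point, then reconstructed to check how much precision is lost.

```python
# Uniform 4-bit quantization of a float tensor onto the levels 0..15.
import numpy as np

def quantize_4bit(x: np.ndarray):
    """Map float values to integers 0..15 plus a scale and zero-point."""
    scale = (x.max() - x.min()) / 15.0
    zero_point = x.min()
    q = np.clip(np.round((x - zero_point) / scale), 0, 15).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    return q.astype(np.float32) * scale + zero_point

weights = np.random.normal(0.0, 0.2, size=1000).astype(np.float32)
q, scale, zp = quantize_4bit(weights)
err = np.abs(weights - dequantize(q, scale, zp)).max()
print(f"worst-case reconstruction error: {err:.4f} (bounded by scale/2 = {scale / 2:.4f})")
```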
The Brainchip Akida neural processor performs all the operations needed to execute a low bit-width Convolutional Neural Network, thereby offloading the entire task from the central processor or microcontroller. The design is optimized for high-performance Machine Learning applications, resulting in efficient, low power consumption while performing thousands of operations simultaneously on each phase of the 300 MHz clock cycle. A unique feature of the Akida neural processor is the ability to learn in real time, allowing products to be conveniently configured in the field without cloud access. The technology is available as a chip or a small IP block to integrate into an ASIC.
Table 1 provides the accuracy of several 4-bit CNN networks, which is comparable to their floating-point accuracies. For example, AkidaNet is a version of MobileNet optimized for 4-bit classification, and many other example networks can be downloaded from the BrainChip website. In the quantization column below, the scheme is written as 'a/b/c', where 'a' is the weight bit-width of the first layer, 'b' is the weight bit-width of subsequent layers, and 'c' is the activation-map bit-width of every layer.

Table 1. Accuracy of inference.
AkidaNet is a feed-forward network optimized to work with 4-bit weights and activations. AkidaNet 0.5 has half the parameters of AkidaNet 1.0. The Akida hardware supports Yolo, DeviceNet, VGG, and other feed-forward networks. Recurrent networks and transformer networks are supported with minimal CPU participation. An example recurrent network implemented on the AKD1000 chip required just 3% CPU participation with 97% of the network running on Akida.
4-bit network resolution is not unique to BrainChip. BrainChip pioneered this Machine Learning technology as early as 2015 and, through multiple silicon implementations, tested and delivered a commercial offering to the market. Others, such as IBM, Stanford University, and MIT, have recently published papers on its advantages.

Akida is based on a neuromorphic, event-based, fully digital design with additional convolutional features. The combination of spiking, event-based neurons and convolutional functions is unique. It offers many advantages, including on-chip learning, small size, sparsity, and power consumption in the microwatt/milliwatt ranges. The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1, 2, or 4-bit synapses. Akida’s extremely efficient event-based neural processor IP is commercially available as a device (AKD1000) and as an IP offering that can be integrated into partner Systems on Chips (SoCs). The hardware can be configured through the MetaTF software, which integrates into TensorFlow layers and supports up to 5 million filters, thereby simplifying model development, tuning, and optimization through popular development platforms like TensorFlow/Keras and Edge Impulse. A fast-growing number of models are available through the Akida model zoo and the BrainChip ecosystem.
To dive a little deeper into the value of 4-bit, in its 2020 NeurIPS paper IBM described the various pieces that are already present and how they come together. They prove the readiness and the benefit through several experiments simulating 4-bit training for a variety of deep-learning models in computer vision, speech, and natural language processing. The results show a minimal loss of accuracy in the models’ overall performance compared with 16-bit deep learning, while being more than seven times faster and seven times more energy efficient. Boris Murmann, a professor at Stanford who was not involved in the research, calls the results exciting. “This advancement opens the door for training in resource-constrained environments,” he says. It would not necessarily make new applications possible, but it would make existing ones faster and less battery-draining “by a good margin.”
With the focus on edge AI solutions that are extremely energy-sensitive and thermally constrained and require efficient real-time response, this advantage of 4-bit weights and activations is compelling and shows a strong trend in the coming years. Brainchip has pioneered this path since 2016 and invested in a simplified flow and ecosystem to enable developers. BrainChip’s MetaTF compilation and tooling are integrated into TensorFlow™ and Edge Impulse. TensorFlow/Keras is a familiar environment to most data scientists, while Edge Impulse is a strong emerging platform for Edge AI and TinyML. MetaTF, many application examples, and source code are available free from the Brainchip website: https://doc.brainchipinc.com/examples/index.html
Brainchip continues to invest in advanced machine-learning technologies to further its market leadership.
Source: IBM NeurIPS proceedings 2020: https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf
Source: MIT Technology Review. https://www.technologyreview.com/2020/12/11/1014102/ai-trains-on-4-bit-computers/
 
  • Like
  • Fire
  • Love
Reactions: 53 users

Damo4

Regular
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725

Not sure if it was only temporarily protected, but it's open now!

https://brainchip.com/4-bits-are-enough/
 
  • Like
  • Love
  • Fire
Reactions: 9 users

Tezza

Regular
A question from the not too bright! If and when a product is put on the shelf with Akida in it, let's say a Samsung fridge, wouldn't it stand to reason that competitors would grab one, pull it down and see what makes it tick? If the answer to this is yes, then wouldn't the NDA become obsolete, and said company could shout from the rooftops, "We are using Akida!"?
 
  • Like
  • Fire
Reactions: 4 users

equanimous

Norse clairvoyant shapeshifter goddess
  • Haha
  • Like
  • Fire
Reactions: 22 users

AARONASX

Holding onto what I've got
A question from the not too bright! If and when a product is put on the shelf with Akida in it, let's say a Samsung fridge, wouldn't it stand to reason that competitors would grab one, pull it down and see what makes it tick? If the answer to this is yes, then wouldn't the NDA become obsolete, and said company could shout from the rooftops, "We are using Akida!"?
I think from memory someone has posted before that it's designed in such a way that it cannot be reverse engineered... however I'm not 100% sure.
 
Last edited:
  • Like
Reactions: 8 users

stuart888

Regular
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725

Wow, @TechGirl has magic: once she spoke up, it opened. It took 59 minutes.

My total guess is @Fact Finder shot off an email and that is all it took.
 
  • Like
  • Love
Reactions: 9 users

equanimous

Norse clairvoyant shapeshifter goddess
Looking at the increasing Buy and Sell numbers, I can't help but think that something big is about to happen.
Definitely a lot of traders queueing up.
Either a huge announcement is coming (maybe with the next 4C) or another meme coming from me 🤣 😅.
I'm tempted to buy more.
 
  • Like
  • Haha
  • Fire
Reactions: 9 users

SERA2g

Founding Member
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725

This is no longer password protected and the date has been updated to the 10th.

 
  • Like
  • Fire
  • Love
Reactions: 19 users

AC/DC

Member

SERA2g

Founding Member
  • Like
  • Haha
Reactions: 5 users

equanimous

Norse clairvoyant shapeshifter goddess
 
  • Like
  • Love
  • Fire
Reactions: 42 users

GazDix

Regular
 
  • Like
  • Fire
  • Love
Reactions: 26 users
H, everyone does their own stuff. BRN has been a trader's wet dream for the last couple of years. Good luck to them. We're all in the game to make money. I've been in the market now for nearly 30 yrs. I've learnt lessons.
1/ I'm really bad at trading. No matter how confident I am, it always seems to turn to shit.
2/ I'm good at picking, and holding long term. I've done well.
Everyone here has their portfolio. We all have our favourites. I only hold about 8 stocks. 3 excite me: PNV, CHN and BRN. I've done very well out of all 3. None excite me more than BRN. It was the only loser in my portfolio today. Ironic. In 30 yrs of investing, I've never been more excited, more confident, nor more interested in a stock I hold. I just find the potential for this co to be mind-blowing. For those that have the courage, I think the next 2 yrs will be very rewarding.

F/F, do u ever have a down day in relation to BRN? I love ur enthusiasm. Today was truly shit for all holders. Took us back to s/p levels pre the Mercedes ann of 2 yrs ago. Very, very frustrating, and disappointing. No doubt this stock tests the courage, belief and patience of all holders.
GLTA
Hi hamilton66,

Good to hear from you. Been a while. I have about 2/3 of my Super in PNV and BRN, so I'm with you on that. Don't follow PNV much anymore as I won't go on HC and no one really talks about PNV on Tsex. Doesn't matter, happy with where I am at. They are long term and well worth the wait I reckon. Hope you had a good NY.

SC
 
  • Like
  • Love
Reactions: 4 users

TECH

Regular
Great to see Peter write a new article.

He's the right staff member to speak about all this, just as Anil is the right one to speak about chip design.

As I have mentioned before, this "business marriage" has always been a real blessing; they are both great blokes who 100% complement each other. It's the "real glue" that will hold our company together over the long run.

Get the Strawberries and Ice Cream out, I just wonder how many of the remaining "household names" will finally reveal
themselves over the coming year, one can but hope.

Have a positive day.

Tech x
 
  • Like
  • Love
  • Fire
Reactions: 28 users

Diogenese

Top 20
Just browsing BrainChip website & came across a new blog from 7th Jan titled "4 Bits Are Enough" but it's password protected & we can't read it. Wonder if it will be about benchmarking? Wonder when it will be available?

View attachment 26725



In the interest of full disclosure:
Some time after the 4-bit Akida was announced, I asked the company (can't recall if it was Tony or PvdM) by email if the move to 4-bits would require the use of MACs, and received the reply "No".

Now from "4 Bits Are Enough":

"The underlying technology is not the usual matrix multiplier, but up to a million digital neurons with either 1, 2, or 4-bit synapses."

The other bit I found interesting is that now there is some (minimal) involvement of the CPU in the operation of Akida:

"AkidaNet is a feed-forward network optimized to work with 4-bit weights and activations. AkidaNet 0.5 has half the parameters of AkidaNet 1.0. The Akida hardware supports Yolo, DeviceNet, VGG, and other feed-forward networks. Recurrent networks and transformer networks are supported with minimal CPU participation. An example recurrent network implemented on the AKD1000 chip required just 3% CPU participation with 97% of the network running on Akida."

As I said yesterday, I think that transformers will require a significant increase in memory and some additional logic. Possibly at least some of the logic will be provided by the CPU software and some in the silicon.

PS: The presence of MACs in a NN is one of the factors I take into account in eliminating Akida as a suspect.
 
Last edited:
  • Like
  • Love
  • Fire
Reactions: 40 users