BRN Discussion Ongoing

MDhere

Top 20
Bring on 27th June @Pom down under :)
 
  • Like
Reactions: 2 users

7für7

Top 20
You are turning the question around!

If anyone is interested in buying BRN, that is not the same as "not for sale", is it??

My house is not currently for sale, but if someone offered me $2 million, it's gone on the spot.

So this topic will keep coming up whether you like it or not.

No, it would likely not be good for us shareholders if the company were sold now, but we were discussing IF ANYONE IS INTERESTED!!!!!!

An offer has to be approved by the shareholders.

Do you understand now, 007???
I think you don’t understand… and I would sell my bicycle for 2.5 million if someone made me an offer… what kind of statement is that? IF ANYONE IS INTERESTED… if a dog had wheels, it would be a bicycle…
 
  • Haha
Reactions: 3 users

jtardif999

Regular
Did the Optimus satellite have Beacon on board? That is the big question..

ANT61 are encouraging organisations to use it, so it must be ready for implementation?

View attachment 64963

You would think they would've taken the opportunity to use the Optimus satellite as a first use case, and it wouldn't look good if they couldn't use it to establish a connection..

Or maybe it's all part of their plan..

"Hey, we just established contact with Optimus, using Beacon. Good thing we had that particular piece of hardware onboard!"..
..the ultimate in proving a technology: demonstrate the use case you are trying to sell.
 
  • Like
Reactions: 2 users

Quatrojos

Regular
I think you don’t understand… and I would sell my bicycle for 2.5 million if someone made me an offer… what kind of statement is that? IF ANYONE IS INTERESTED… if a dog had wheels, it would be a bicycle…
My dog has four legs.
 
  • Haha
  • Like
Reactions: 9 users

KKFoo

Regular
I asked Ant61 whether the Optimus satellite carries the Ant61 Beacon, and the answer is no. Not sure whether this has been posted already; I never read all the messages in this forum because there is too much noise. Have a nice day..
 
  • Like
Reactions: 10 users
Bring on 27th June @Pom down under :)
May I ask why, as the only thing of interest on the 27th is

IMG_0630.png
 
Last edited:
  • Haha
  • Like
  • Thinking
Reactions: 11 users

7für7

Top 20
Yr super contribution :)
It takes a while to clear so maybe a week or so after 😆 but I’ve got a few $$$$ at the ready and holding off for a bit 🙏
 
  • Like
Reactions: 3 users

em1389

Member
  • Like
Reactions: 1 users

Frangipani

Top 20
While it is perfectly conceivable that Meta could be exploring BrainChip’s offerings behind an NDA wall of silence, I don’t see any conclusive evidence at present that we are indeed engaged with them.

D58E16BD-29F5-4898-A4FC-2BC3034DF3A6.jpeg



In my opinion, FF is once again jumping to conclusions.
He is basing his reasoning for adding Meta to his personal list of BrainChip’s engagements on a supposed fact, namely that Chris Jones was introduced to BrainChip and TENNs while working at Meta, even though this premise has not been verified - it is merely FF’s interpretation of the following quote (from 3:18 min, my transcript):

“So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”


It is a hasty judgement to draw the conclusion that the above quote necessarily expresses simultaneity rather than considering the alternative - posteriority.

Chris Jones himself does not explicitly state that he was introduced to BrainChip and TENNs while working for Meta (picture BrainChip staff giving a PowerPoint presentation at Meta’s premises).
Rather, this is what FF read into his words.

But there is another way to interpret that quote; one, which makes more sense to me:

When I started watching the video of Chris Jones’s presentation on TENNs, my immediate thoughts were that a) BrainChip had sent an excellent speaker to the 2024 Embedded Vision Summit in May to replace Nandan Nayampally (who had originally been scheduled to give that talk), and b) how job loss can turn out to be a blessing in disguise.

What FF doesn’t seem to be aware of is that Chris Jones had been affected by one of Meta’s 2023 mass layoffs. It must have been the April 2023 round, as in his LinkedIn post below he mentions over 4000 other employees sharing his fate as well as the fact that he had been with the company for only 11 months. This aligns with his employment history on LinkedIn, according to which he started as a Senior Product Technical Program Manager at Meta in June 2022.

115847CC-9330-45C2-A96F-B659C223B2ED.jpeg



21B6AEA0-9E91-49F6-BF80-E53D44F67217.jpeg



Under California law (the so-called WARN Act) companies over a certain size need to give affected employees at least 60 days advance notice in case of significant workforce reductions, which in combination with a severance pay package would account for Chris Jones’s LinkedIn profile stating he was with Meta until July 2023, even though he appears to have been laid off in April 2023.


E662A15D-3468-499D-9690-824DA99393EA.jpeg



Judging from the practice observed with other US tech giants handling domestic layoffs, it is however highly likely that from the day the layoff was communicated, the affected employees would - with immediate effect - no longer have had any access to their company emails, computer files and confidential information, despite remaining on the company payroll for another two to four months (depending on their respective severance pay packages).

And unless Meta was an EAP customer of BrainChip at the time (for which there has never been any indication whatsoever), Meta employees would not have been privy to any details about TENNs prior to BrainChip’s white paper publication on June 9, 2023 - weeks after Chris Jones had found out about being laid off and had since presumably been cut off from the company’s internal communication channels and flow of information.


31dee624-a87d-4bea-8043-c1c136c3b032-jpeg.65034


So chances are he did not develop his enthusiasm for BrainChip and TENNs in his role as Senior Product Technical Program Manager at Meta - but rather while job hunting post-layoff!

Luckily for both Chris Jones and BrainChip, getting introduced to each other seems to have turned out to be a win-win situation.
Chris Jones is clearly an asset to our company, judging from his public presentations, and hopefully he will one day be able to proudly tell his daughter that when she was a toddler, he turned a job loss into an immense gain by serendipitously discovering what BrainChip had to offer.

And who knows - maybe Chris Jones has been/is/will be the one introducing BrainChip to his former Meta colleagues. But as for now, Meta stays off my personal (virtual) list of BrainChip’s engagements.

DYOR.
 

Attachments

  • D34F9FCF-F9EA-4EBC-9D2B-8467608885C7.jpeg
    D34F9FCF-F9EA-4EBC-9D2B-8467608885C7.jpeg
    474.9 KB · Views: 68
  • 31DEE624-A87D-4BEA-8043-C1C136C3B032.jpeg
    31DEE624-A87D-4BEA-8043-C1C136C3B032.jpeg
    499.2 KB · Views: 970
  • Like
  • Fire
  • Love
Reactions: 25 users

DK6161

Regular
@DK6161 ... did you run out of steam champ? You had so much to say before. Are you waiting for @Iseki's next motley fool article before you have something to say that "fits the narrative"?

In any case, I hope you have been reflecting on your choices and are feeling more open to accepting your shortcomings. It's important that you respect yourself.

🥰🥰🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡🤡
Yeah mate, busy writing for MF at the moment. Sorry didn't have time to clown you here.
Anyway, keep an eye out for my and @Iseki's new articles tomorrow about an IR team that has done SFA and directors laughing all the way to the bank.
Really appreciated your baiting. Do keep going, champ!
Seriously got nothing more to say really. Waiting for some positive news before thinking of buying back in. Have some cash ready to go at maybe around 10 or 15c.
Not advice
 
  • Like
  • Fire
Reactions: 2 users

Diogenese

Top 20
While it is perfectly conceivable that Meta could be exploring BrainChip’s offerings behind an NDA wall of silence, I don’t see any conclusive evidence at present that we are indeed engaged with them.

View attachment 65036


In my opinion, FF is once again jumping to conclusions.
He is basing his reasoning for adding Meta to his personal list of BrainChip’s engagements on a supposed fact, namely that Chris Jones was introduced to BrainChip and TENNs while working at Meta, even though this premise has not been verified - it is merely FF’s interpretation of the following quote (from 3:18 min, my transcript):

“So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”


It is a hasty judgement to draw the conclusion that the above quote necessarily expresses simultaneity rather than considering the alternative - posteriority.

Chris Jones himself does not explicitly state that he was introduced to BrainChip and TENNs while working for Meta (picture BrainChip staff giving a PowerPoint presentation at Meta’s premises).
Rather, this is what FF read into his words.

But there is another way to interpret that quote; one, which makes more sense to me:

When I started watching the video of Chris Jones’s presentation on TENNs, my immediate thoughts were that a) BrainChip had sent an excellent speaker to the 2024 Embedded Vision Summit in May to replace Nandan Nayampally (who had originally been scheduled to give that talk), and b) how job loss can turn out to be a blessing in disguise.

What FF doesn’t seem to be aware of is that Chris Jones had been affected by one of Meta’s 2023 mass layoffs. It must have been the April 2023 round, as in his LinkedIn post below he mentions over 4000 other employees sharing his fate as well as the fact that he had been with the company for only 11 months. This aligns with his employment history on LinkedIn, according to which he started as a Senior Product Technical Program Manager at Meta in June 2022.

View attachment 65030


View attachment 65031


Under California law (the so-called WARN Act) companies over a certain size need to give affected employees at least 60 days advance notice in case of significant workforce reductions, which in combination with a severance pay package would account for Chris Jones’s LinkedIn profile stating he was with Meta until July 2023, even though he appears to have been laid off in April 2023.


View attachment 65033


Judging from the practice observed with other US tech giants handling domestic layoffs, it is however highly likely that from the day the layoff was communicated, the affected employees would - with immediate effect - no longer have had any access to their company emails, computer files and confidential information, despite remaining on the company payroll for another two to four months (depending on their respective severance pay packages).

And unless Meta was an EAP customer of BrainChip at the time (for which there has never been any indication whatsoever), Meta employees would not have been privy to any details about TENNs prior to BrainChip’s white paper publication on June 9, 2023 - weeks after Chris Jones had found out about being laid off and had since presumably been cut off from the company’s internal communication channels and flow of information.


31dee624-a87d-4bea-8043-c1c136c3b032-jpeg.65034


So chances are he did not develop his enthusiasm for BrainChip and TENNs in his role as Senior Product Technical Program Manager at Meta - but rather while job hunting post-layoff!

Luckily for both Chris Jones and BrainChip, getting introduced to each other seems to have turned out to be a win-win situation.
Chris Jones is clearly an asset to our company, judging from his public presentations, and hopefully he will one day be able to proudly tell his daughter that when she was a toddler, he turned a job loss into an immense gain by serendipitously discovering what BrainChip had to offer.

And who knows - maybe Chris Jones has been/is/will be the one introducing BrainChip to his former Meta colleagues. But as for now, Meta stays off my personal (virtual) list of BrainChip’s engagements.

DYOR.
That is a tendentious reading of Mr Jones' comments about when he found out about Akida. The normal reading of the following is that he saw several NNs while he worked at Meta and, following on from that statement, he says "So, when I was introduced to BrainChip ... they told me about TeNNs."

"So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”

It's about Attention and LSTM. You need to take the whole context into consideration. The statements would normally be linked by the man on the Clapham omnibus. This is in normal English speech, not a statutory declaration or Kantian transcendentalism.

The following argument is a case of pulling the trigger before removing the pistol from its holster:

What FF doesn’t seem to be aware of is that Chris Jones had been affected by one of Meta’s 2023 mass layoffs. It must have been the April 2023 round, as in his LinkedIn post below he mentions over 4000 other employees sharing his fate as well as the fact that he had been with the company for only 11 months. This aligns with his employment history on LinkedIn, according to which he started as a Senior Product Technical Program Manager at Meta in June 2022.

The interesting thing is that the TeNNs patent was filed in mid-2022, so BrainChip would have only started talking to EAPs about TeNNs after the patent was filed, although public discussion of TeNNs did not take place until much later. He worked for Meta for 11 months from April 2022. Given that the patent filing preceded or coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta. There was a nine-month period when Mr Jones was working at Meta during which BrainChip was free to discuss TeNNs with EAPs under NDA.
 
  • Like
  • Love
  • Fire
Reactions: 57 users

Frangipani

Top 20
Those of you who have taken a closer look at the global neuromorphic research community will likely have come across the annual Telluride Neuromorphic Cognition Engineering Workshop, a three week project-based meeting in eponymous Telluride, a charming former Victorian mining town in the Rocky Mountain high country of southwestern Colorado. Nestled in a deep glacial valley, Telluride sits at an elevation of 8750 ft (2667 m) and is surrounded by majestic rugged peaks. Truly a scenic location for a workshop.

The National Science Foundation (NSF), which has continuously supported the Telluride Workshop since its beginnings in the 1990s, described it in a 2023 announcement as follows: It “will bring together an interdisciplinary group of researchers from academia and industry, including engineers, computer scientists, neuroscientists, behavioral and cognitive scientists (…) The annual three-week hands-on, project-based meeting is organized around specific topic areas to explore organizing principles of neural cognition that can inspire implementation in artificial systems. Each topic area is guided by a group of experts who will provide tutorials, lectures and hands-on project guidance.”

https://new.nsf.gov/funding/opportu...ng-augmented-intelligence/announcements/95341

View attachment 59073

View attachment 59075



The topic areas for the 2024 Telluride Neuromorphic Workshop are now online. As every year, the list of topic leaders and invited speakers includes the crème de la crème of neuromorphic researchers from all over the world. While no one from Brainchip has made the invited speakers’ list (at least not to date), I was extremely pleased to notice that Akida will be featured nevertheless! It has taken the academic neuromorphic community ages to take Brainchip seriously (cf my previous post on Open Neuromorphic: https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-404235), but here we are, finally getting acknowledged alongside the usual suspects:

View attachment 59076
View attachment 59077
Some readers will now presumably shrug their shoulders and consider this mention of Brainchip in a workshop programme as being insignificant as opposed to those coveted commercial announcements. To me, however, the inclusion of Brainchip at Telluride marks a milestone.

Also keep in mind what NSF Program Director Soo-Siang Lim said about Telluride (see link above): “This workshop has a long and successful track-record of advancing and integrating our understanding of biological and artificial systems of learning. Many collaborations catalyzed by the workshop have led to significant technology innovations, and the training of future industry and academic leaders.”

I’d just love to know which of the four topic leaders and/or co-organisers suggested including Brainchip in their hands-on project “Processing space-based data using neuromorphic computing hardware” (and whether this was readily agreed on or not):

Was it one of the two colleagues from Western Sydney University’s International Centre for Neuromorphic Systems (ICNS)? Gregory Cohen (who is responsible for Astrosite, WSU’s containerised neuromorphic-inspired mobile telescope observatory, as well as for the modification of the two neuromorphic cameras on the ISS as part of the USAFA Falcon Neuro project) or Alexandre Marcireau?

Or was it Gregor Lenz, who left Synsense in mid-2023 to co-found Neurobus (“At Neurobus we’re harnessing the power of neuromorphic computing to transform space technology”) and is also one of the co-founders of the Open Neuromorphic community? He was one of the few live viewers of Cristian Axenie’s January 15 online presentation on the TinyML Vision Zero San Jose Competition (where his TH Nürnberg team, utilising Akida for their event-based visual motion detection and tracking of pedestrians, had come runner-up), and asked a number of intriguing questions about Akida during the live broadcast.

Or was it possibly Jens Egholm Pedersen, the Danish doctoral student at Stockholm’s KTH Royal Institute of Technology, Sweden’s largest technical university, who hosted said presentation by Cristian Axenie on the Open Neuromorphic YouTube channel and appeared to be genuinely impressed about Akida (and the Edge Impulse platform), too?

Oh, and last, but not least:
Our CTO Anthony M Lewis aka Tony Lewis has been to Telluride numerous times: the workshop website lists him as one of the early participants back in 1996 (when he was with UCLA’s Computer Science Department). Tony Lewis is subsequently listed as a guest speaker for the 1999, 2000, 2001, 2002, 2003 and 2004 workshops (in his then capacity as the founder of Iguana Robotics) - information on the participants between 2006 and 2009 as well as for the year 2011 is marked as “lost”. In 2019, Tony Lewis had once again been invited as either topic leader or guest speaker, but according to the website could not attend.

So I guess there is a good chance we will see him return to Telluride one day, this time as CTO of Brainchip, catching up with a lot of old friends and acquaintances, many of whom he also keeps in touch with via his extensive LinkedIn network, so they’d definitely know what he’s been up to.

As I said in another post six weeks ago:

Anyone interested in registering online for remote participation in the upcoming hybrid Telluride Neuromorphic Workshop (June 30 - July 19)?




0FAEC2C7-79D6-4F18-B1AC-0E98365B2C2A.jpeg

Meanwhile, two more speakers have been invited by the organisers of the topic area SPA24: Neuromorphic systems for space applications, which is the one that will provide on-site participants with the opportunity to get hands-on experience with neuromorphic hardware including Akida:

874DD3F7-6E62-4A50-BAE0-188318ADCB8A.jpeg


Dr Damien Joubert from Prophesee and … 🥁 🥁 🥁 Laurent Hili, our friend from ESA!



2A9DA16A-94B7-43D4-9C9C-54BC42EA85CA.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 16 users
Latest paper just released today from Gregor Lenz at Neurobus and Doug McLelland at Brainchip.


Low-power Ship Detection in Satellite Images Using Neuromorphic Hardware​

Gregor Lenz (corresponding author, gregor@neurobus.space), Neurobus, Toulouse, France; Douglas McLelland, BrainChip, Toulouse, France

Abstract​

Transmitting Earth observation image data from satellites to ground stations incurs significant costs in terms of power and bandwidth. For maritime ship detection, on-board data processing can identify ships and reduce the amount of data sent to the ground. However, most images captured on board contain only bodies of water or land, with the Airbus Ship Detection dataset showing only 22.1% of images containing ships. We designed a low-power, two-stage system to optimize performance instead of relying on a single complex model. The first stage is a lightweight binary classifier that acts as a gating mechanism to detect the presence of ships. This stage runs on Brainchip’s Akida 1.0, which leverages activation sparsity to minimize dynamic power consumption. The second stage employs a YOLOv5 object detection model to identify the location and size of ships. This approach achieves a mean Average Precision (mAP) of 76.9%, which increases to 79.3% when evaluated solely on images containing ships, by reducing false positives. Additionally, we calculated that evaluating the full validation set on an NVIDIA Jetson Nano device requires 111.4 kJ of energy. Our two-stage system reduces this energy consumption to 27.3 kJ, which is less than a fourth, demonstrating the efficiency of a heterogeneous computing system.

1 Introduction

Ship detection from satellite imagery is a critical application within the field of remote sensing, offering significant benefits for maritime safety, traffic monitoring, and environmental protection. The vast amount of data generated by satellite imagery cannot all be processed on the ground in data centers, as downlinking image data from a satellite is a costly process in terms of power and bandwidth.
Figure 1: Data flow chart of our system.
To help satellites identify the most relevant data to downlink and alleviate processing on the ground, recent years have seen the emergence of edge artificial intelligence (AI) applications for Earth observation [xu2022lite, zhang2020ls, xu2021board, ghosh2021board, yao2019board, alghazo2021maritime, vstepec2019automated]. By sifting through the data on-board the satellite, we can discard a large number of irrelevant images and focus on the relevant information. Because satellites are subject to extreme constraints in size, weight and power, energy-efficient AI systems are crucial. In response to these demands, our research focuses on using low-power neuromorphic chips for ship detection tasks in satellite images. Neuromorphic computing, inspired by the neural structure of the human brain, offers a promising avenue for processing data with remarkable energy efficiency.

The Airbus Ship Detection challenge [al2021airbus] on Kaggle aimed to identify the best object detection models. A post-challenge analysis [faudi2023detecting] revealed that a binary classification pre-processing stage was crucial in winning the challenge, as it reduced the rates of false positives and therefore boosted the relevant segmentation score.

We introduce a ship detection system that combines a binary classifier with a powerful downstream object detection model. The first stage is implemented on a state-of-the-art neuromorphic chip and determines the presence of ships. Images identified as containing ships are then processed by a more complex detection model in the second stage, which can be run on more flexible hardware. Our work showcases a heterogeneous computing pipeline for a complex real-world task, combining the low-power efficiency of neuromorphic computing with the increased accuracy of a more complex model.
Figure 2: The first two images are examples of the 22% of annotated samples. The second two images are examples of the majority of images that do not contain ships but only clouds, water or land.
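As a rough illustration of the gating logic described in the introduction, here is a minimal Python sketch. The `classify_ship` and `detect_ships` callables are hypothetical stand-ins for the Akida binary classifier and the YOLOv5 detector, not functions from the paper's code.

```python
# Minimal sketch of the two-stage gating pipeline described above.
# classify_ship() and detect_ships() are hypothetical wrappers around the
# Akida binary classifier and the YOLOv5 detector; the names are illustrative.

def process_tile(image, classify_ship, detect_ships, threshold=0.1):
    """Run the cheap classifier first; only invoke the detector on hits."""
    ship_probability = classify_ship(image)   # stage 1: Akida, low power
    if ship_probability < threshold:
        return []                              # discard: no ship detected
    return detect_ships(image)                 # stage 2: YOLOv5 bounding boxes
```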

2 Dataset

The Airbus Ship Detection dataset [airbus_ship_detection_2018] contains 192k satellite images, of which 22.1% contain annotated bounding boxes for a single ship class. Key metrics of the dataset are described in Table 1. As can be seen in the sample images in Figure 2, a large part of the overall pixel space captures relatively homogeneous regions such as open water or clouds. We chose this dataset as it is part of the European Space Agency’s (ESA) On-Board Processing Benchmark suite for machine learning applications [obpmark], with the goal of testing and comparing a variety of edge computing hardware platforms for the most common ML tasks related to space applications. The annotated ship bounding boxes have diagonals that vary from 1 to 380 pixels in length, and 48.3% of bounding boxes have diagonals of 40 pixels or shorter. Given that the images are 768×768px in size, this makes it a challenging dataset, as the model needs to be able to detect ships of a large variety of sizes. Since annotations are only available on Kaggle for the training set, we used a random 80/20 split for training and validation, similarly to Huang et al. [huang2020fast]. For our binary classifier, we downsized all images to 256×256px, to be compatible with the input resolution of Akida 1.0, and labeled the images as 1 if they contained at least one bounding box of any size, otherwise 0. For our detection model, we downsized all images to 640×640px.
RGB image size: 768×768
Total number of images: 192,555
Number of training images: 154,044
Percentage of images that contain ships: 22.1%
Total number of bounding boxes: 81,723
Median diagonal of all bounding boxes: 43.19 px
Ratio of bounding box to image area: 0.3%

Table 1: Summary of image and bounding box data for the Airbus Ship Detection training dataset.
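For readers who want to reproduce the binary labels, a small pandas sketch is shown below. It assumes the Kaggle-style annotation file (train_ship_segmentations_v2.csv with ImageId and EncodedPixels columns, the latter empty for ship-free images); the file and column names are assumptions, not taken from the paper.

```python
import pandas as pd

# Sketch of deriving binary labels from the Airbus annotation file.
# Assumes the Kaggle-style CSV layout (ImageId, EncodedPixels), where
# images without ships have an empty EncodedPixels entry.
ann = pd.read_csv("train_ship_segmentations_v2.csv")

labels = (
    ann.groupby("ImageId")["EncodedPixels"]
       .apply(lambda masks: int(masks.notna().any()))   # 1 if any ship mask exists
)

print(f"{labels.mean():.1%} of images contain ships")    # paper reports 22.1%

# Random 80/20 split for training/validation, mirroring the paper
train = labels.sample(frac=0.8, random_state=0)
val = labels.drop(train.index)
```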

3 Models

For our binary classifier, we used an 866k-parameter model named AkidaNet 0.5, which is loosely inspired by MobileNet [howard2017mobilenets] with alpha = 0.5. It consists of standard convolutional, separable convolutional and linear layers, to reduce the number of parameters and to be compatible with Akida 1.0 hardware. To train the network, we used binary cross-entropy loss, the Adam optimizer, a cosine decay learning rate scheduler with an initial rate of 0.001 and lightweight L1 regularization on all model parameters over 10 epochs. For our detection model, we trained a YOLOv5 medium [ge2021yolox] model of 25M parameters with stochastic gradient descent, a learning rate of 0.01 and a momentum of 0.9, plus blurring and contrast augmentations over 25 epochs.
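A hedged Keras sketch of that classifier training configuration follows. The `build_akidanet_05` constructor below is a MobileNet-based stand-in for AkidaNet 0.5 (the actual model ships with BrainChip's tooling), and `steps_per_epoch` and the datasets are placeholders, not the authors' code.

```python
import tensorflow as tf

# Training configuration sketched from the paper's description; the model
# constructor and the datasets are placeholders, not the authors' code.
def build_akidanet_05(input_shape=(256, 256, 3)):
    # Stand-in for AkidaNet 0.5 (MobileNet-like backbone, alpha=0.5).
    base = tf.keras.applications.MobileNet(
        input_shape=input_shape, alpha=0.5, weights=None, include_top=False)
    return tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid",
                              kernel_regularizer=tf.keras.regularizers.l1(1e-5)),
    ])

model = build_akidanet_05()
steps_per_epoch = 1000          # placeholder; depends on batch size and split
lr = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10 * steps_per_epoch)
model.compile(optimizer=tf.keras.optimizers.Adam(lr),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Recall(),
                       tf.keras.metrics.Precision()])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```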

4 Akida hardware

Akida by Brainchip is an advanced artificial intelligence processor inspired by the neural architecture of the human brain, designed to provide high-performance AI capabilities at the edge with exceptional energy efficiency. Version 1.0 is available for purchase in a PCIe x1 form factor as shown in Figure 3, and supports convolutional neural network architectures. Version 2.0 adds support for a variety of neural network types including RNNs and transformer architectures, but is currently only available in simulation. The Akida processor operates in an event-based mode for intermediate layer activations, which only performs computations for non-zero inputs, significantly reducing operation counts and allowing direct, CPU-free communication between nodes. Akida 1.0 supports flexible activation and weight quantization schemes of 1, 2 or 4 bit. Models are trained in Brainchip’s MetaTF, which is a lightweight wrapper around TensorFlow. In March 2024, Akida was also sent to space for the first time [brainchip2024launch].
Figure 3: AKD1000 chip on a PCB with PCIe x1 connector.
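The deployment flow onto the AKD1000 roughly corresponds to BrainChip's MetaTF tooling: quantize the trained Keras model, convert it to an Akida model, and map it onto an attached device. The sketch below assumes the cnn2snn-style quantize/convert API; treat the exact function names and signatures as assumptions, since they may differ between MetaTF releases.

```python
# Hedged sketch of the MetaTF deployment flow (quantize -> convert -> map).
# Function names follow the cnn2snn/akida packages as I understand them;
# exact signatures are assumptions.
from cnn2snn import quantize, convert
import akida

# 4-bit weights/activations, 8-bit input layer weights, as in the paper.
quantized = quantize(model,
                     input_weight_quantization=8,
                     weight_quantization=4,
                     activ_quantization=4)

# (Optionally: one epoch of quantization-aware training at lr/10 here.)

akida_model = convert(quantized)        # Keras -> Akida event-based model
device = akida.devices()[0]             # e.g. the AKD1000 PCIe reference card
akida_model.map(device)

outputs = akida_model.predict(images)   # images: uint8 batch, 256x256x3
```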

5 Results

5.1 Classification accuracy

The key metrics for our binary classification model are provided in Table 2. The trained floating-point model reaches an accuracy of 97.91%, which drops to 95.75% after quantizing the weights and activations to 4 bit and the input layer weights to 8 bit. After one epoch of quantization-aware training with a tenth of the normal learning rate, the model recovers nearly its floating-point accuracy, at 97.67%. Work by Alghazo et al. [alghazo2021maritime] reaches an accuracy of 89.7% in the same binary classification setting, albeit on a subset of the dataset and on images that are downscaled to 100 pixels. In addition, the corresponding recall and precision metrics for our model are shown in the table. In our system we prioritize recall, because false negatives (missing a ship) have a higher cost than false positives (detecting ships where there are none), as the downstream detection model can correct for mistakes of the classifier stage. By default we obtain a recall of 94.40% and a precision of 95.07%, but by adjusting the decision threshold on the output, we bias the model to include more images at the cost of precision, obtaining a recall of 97.64% for a precision of 89.73%.
Table 2: Model performance comparison in percent. FP is the floating-point model; 4 bit is the quantized model with 8-bit inputs and 4-bit activations and weights; QAT is quantization-aware training for 1 epoch with reduced learning rate. Recall and precision values after QAT are given for decision thresholds of 0.5 and 0.1.

                                 FP       4 bit    After QAT
Accuracy                         97.91    95.75    97.67
Accuracy [alghazo2021maritime]   89.70    -        -
Recall                           95.23    85.12    94.40 / 97.64
Precision                        95.38    95.32    95.07 / 89.73
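The second pair of recall/precision values comes from lowering the decision threshold on the classifier output rather than retraining. A quick scikit-learn sketch (with placeholder arrays standing in for the validation labels and sigmoid outputs) of how those two operating points can be computed:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# y_true: ground-truth binary labels, y_prob: sigmoid outputs of the classifier.
# Placeholder arrays; in practice these come from the validation split.
y_true = np.random.randint(0, 2, size=1000)
y_prob = np.random.rand(1000)

for threshold in (0.5, 0.1):              # the paper reports both operating points
    y_pred = (y_prob >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"recall={recall_score(y_true, y_pred):.4f}, "
          f"precision={precision_score(y_true, y_pred):.4f}")
```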

5.2 Performance on Akida 1.0

The model that underwent QAT is then deployed to the Akida 1.0 reference chip, AKD1000, where the same accuracy, recall and precision are observed as in simulation. As detailed in Table 3, feeding a batch of 100 input images takes 1.168 s and consumes 440 mW of dynamic power. The dynamic energy used to process the whole batch is therefore 515 mJ, which translates to 5.15 mJ per image. The network is distributed across 51 of the 78 available neuromorphic processing cores. During our experiments, we measured 921 mW of static power usage on the AKD1000 reference chip. We note that this value is considerably reduced in later chip generations.

Table 3: Summary of performance metrics on Akida 1.0 for a batch size of 100.

Total duration (ms): 1167.95
Duration per sample (ms): 11.7
Throughput (fps): 85.7
Total dynamic power (mW): 440.8
Energy per batch (mJ): 514.84
Energy per sample (mJ): 5.15
Total neuromorphic processing cores: 51
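The energy figures in Table 3 follow directly from the measured batch duration and dynamic power (energy = power × time); a few lines reproducing them:

```python
# Reproducing the Table 3 energy figures: energy = dynamic power x duration.
batch_size = 100
duration_s = 1.16795          # total duration for the batch, seconds
dynamic_power_w = 0.4408      # measured dynamic power, watts

energy_per_batch_j = dynamic_power_w * duration_s          # ~0.515 J
energy_per_sample_mj = energy_per_batch_j / batch_size * 1e3
print(f"{energy_per_batch_j * 1e3:.1f} mJ per batch, "
      f"{energy_per_sample_mj:.2f} mJ per image")           # ~514.8 mJ, ~5.15 mJ
```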
We can further break down performance across the different layers in the model. The top plot in Figure 4 shows the latency per frame: it increases as layers are added up to layer 7, but beyond that, the later layers make almost no difference. As each layer is added, we can measure energy consumption, and estimate the per-layer contribution as the difference from the previous measurement, shown in the middle plot of Figure 4. We observe that most of the energy during inference is spent on earlier layers, even though the work required per layer is expected to be relatively constant as spatial input sizes are halved, but the number of filters doubled throughout the model. The very low energy measurements of later layers are explained by the fact that Akida is an event-based processor that exploits sparsity. When measuring the density of input events per layer as shown in the bottom plot of Figure 4, we observe that energy per layer correlates well with the event density. The very first layer receives dense images, but subsequent layers have much sparser inputs, presumably due to a lot of input pixels that are not ships, which in turn reduces the activation of filters that encode ship features. We observe an average event density of 29.3% over all layers including input, reaching less than 5% in later layers. This level of sparsity is achieved through the combination of ReLU activation functions and L1 regularization on activations during training.
Figure 4: Layer-wise statistics per sample image for inference of the binary classifier on Akida v1, measured for a batch of 100 images over 10 repeats.

5.3 Detection model performance

In the subsequent stage, our 25M-parameter YOLOv5 model achieves 76.9% mAP when evaluated on the full validation set containing both ship and non-ship images. When we evaluate the same model on the subset of the validation set that just contains ships, the mAP jumps to 79.3%, as the false positives are reduced considerably. That means that our classifier stage already has a beneficial influence on the detection performance of the downstream model.

Table 4 provides an overview of detection performance in the literature. Machado et al. [machado2022estimating] provide measurements for different YOLOv5 models on the NVIDIA Jetson Nano series, a hardware platform designed for edge computing. For the YOLOv5 medium model, the authors report an energy consumption of 0.804 mWh per frame and a throughput of 2.7 frames per second at an input resolution of 640 pixels, which translates to a power consumption of 7.81 W. The energy necessary to process the full validation dataset of 38,511 images on a Jetson is therefore 38,511 × 7.81 / 2.7 ≈ 111.4 kJ.

For our proposed two-stage system, we calculate the total energy as the sum of processing the full validation set on Akida plus processing the identified ship images on the Jetson device. Akida has a power consumption of 0.921 + 0.440 = 1.361 W at a throughput of 85.7 images/s. With a recall of 97.64% and a precision of 89.73%, 9,243 images, equal to 24.03% of the validation data, are classified as containing ships, in contrast to the actual 22.1%. We therefore obtain an overall energy consumption of 38,511 × 1.361 / 85.7 + 9,243 × 7.81 / 2.7 ≈ 27.3 kJ. Our proposed system uses 4.07 times less energy to evaluate this specific dataset.
Model                        mAP (%)    Energy (kJ)
YOLOv3 [patel2022deep]       49         -
YOLOv4 [patel2022deep]       61         -
YOLOv5 [patel2022deep]       65         -
Faster RCNN [al2021airbus]   80         -
YOLOv5                       76.9       111.4
AkidaNet + YOLOv5            79.3       27.3

Table 4: Mean average precision and energy consumption evaluated on the Airbus Ship Detection dataset.
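The headline energy comparison can be reproduced from the throughput and power figures quoted above (Jetson Nano: 7.81 W at 2.7 fps; Akida: 0.921 W static plus 0.440 W dynamic at 85.7 fps); a short sketch of that arithmetic:

```python
# Energy comparison between the single-stage Jetson pipeline and the
# two-stage Akida + Jetson pipeline, using the figures quoted in the paper.
n_val = 38_511                  # validation images
n_flagged = 9_243               # images the classifier flags as containing ships

jetson_power_w, jetson_fps = 7.81, 2.7
akida_power_w, akida_fps = 0.921 + 0.440, 85.7   # static + dynamic

single_stage_kj = n_val * jetson_power_w / jetson_fps / 1e3
two_stage_kj = (n_val * akida_power_w / akida_fps
                + n_flagged * jetson_power_w / jetson_fps) / 1e3

print(f"single stage: {single_stage_kj:.1f} kJ")                 # ~111.4 kJ
print(f"two stage:    {two_stage_kj:.1f} kJ")                    # ~27.3 kJ
print(f"reduction:    {single_stage_kj / two_stage_kj:.2f}x")    # ~4.07x
```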

6 Discussion

For edge computing tasks, it is common to have a small gating model which activates more costly downstream processing whenever necessary. As only 22.1% of images in the Airbus detection dataset contain ships, a two-stage processing pipeline can leverage different model and hardware architectures to optimize the overall system. We show that our classifier stage running on Akida benefits from a high degree of sparsity when processing the vast amounts of homogeneous bodies of water, clouds or land in satellite images, where only 0.3% of the pixels are objects of interest. We hypothesise that many filter maps that encode ship features are not activated most of the time. This has a direct impact on the dynamic power consumption and latency during inference due to the event-based nature of Akida’s processing.

In addition, we show that a two-stage system actually increases the mAP of the downstream model by reducing false positive rates, as is also mentioned in the post-challenge analysis of the Airbus Kaggle challenge [faudi2023detecting]. The energy consumption of the hybrid system is less than a fourth of that of running the detection model on the full dataset, with more room for improvement when using Akida v2, which is going to reduce both static and dynamic power consumption and allow the deployment of more complex models that likely achieve higher recall rates.

The main limitation of our system is its increased size, as it has to fit two different accelerators instead of a single one. But by combining the strengths of different hardware platforms, we can optimize the overall performance, which is critical for edge computing applications in space.
 
  • Like
  • Fire
  • Love
Reactions: 69 users

Frangipani

Top 20
The interesting thing is that the TeNNs patent was filed in mid-2022, so BrainChip would have only started talking to EAPs about TeNNs after the patent was filed, although public discussion of TeNNs did not take place until much later. He worked for Meta for 11 months from April 2022.

No, Chris Jones started working for Meta in June 2022, see my post and his LinkedIn profile.

Given that the patent filing preceded or coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta.


That’s exactly why I wrote the following:

And unless Meta was an EAP customer of BrainChip at the time (for which there has never been any indication whatsoever), Meta employees would not have been privy to any details about TENNs prior to BrainChip’s white paper publication on June 9, 2023 - weeks after Chris Jones had found out about being laid off and had since presumably been cut off from the company’s internal communication channels and flow of information.


31dee624-a87d-4bea-8043-c1c136c3b032-jpeg.65034


So chances are he did not develop his enthusiasm for BrainChip and TENNs in his role as Senior Product Technical Program Manager at Meta - but rather while job hunting post-layoff!

But has there ever been any indication that Meta was indeed an EAP customer at the time? Any announcement as with other EAP customers? If not, we shouldn’t simply assume so.

But for the sake of the discussion, let’s assume for a minute it was indeed the case.
Given that the patent filing preceded or coincided with the period of Mr Jones' employment with Meta, the inference is open that Meta was an EAP or in discussion with BrainChip at least before Mr Jones was outplaced/right-sized from Meta. There was a nine-month period when Mr Jones was working at Meta during which BrainChip was free to discuss TeNNs with EAPs under NDA.

In practice, though, the overlap between Chris Jones working for Meta and his introduction to BrainChip and TENNs would have been much shorter, given that Chris Jones said on May 23, 2024:

So, about a year ago, uh, I was at Meta, I was in their AI Infrastructure Group, and on an almost daily basis I would see new neural network architectures.

So, when I was introduced to BrainChip, I didn’t think I would really be impressed by anything a small team was gonna develop, erm. They told me about TENNs, I was a little bit skeptical to be honest at first. As I started getting to understand the benchmarks and a little bit more of the math and how it worked, I started to get pretty excited by what they had.”



Saying “about a year ago” on May 23, 2024 could mean July, June, May, April, possibly even March 2023. He certainly wouldn’t have put it that way if he and his colleagues had already been introduced to BrainChip in let’s say October 2022. And since he appears to have been laid off in mid-April, the potential time window shrinks to a maximum of six weeks, I’d say. That’s far from the nine months you claimed.

Your use of the participle “outplaced” instead of “laid off” implies that Meta would have helped him to find his current job? Again, there is no indication of that at all when you read his LinkedIn post, especially the last paragraph:

5069E8E9-DB90-439B-A5B2-A60C00F3E355.jpeg



Mind you, I did not say there is no way that Chris Jones could have found out about TENNs while still working for Meta, but to me his words are certainly not conclusive evidence, the way FF presented them. They can very well be interpreted differently, especially with the background knowledge that he was laid off more than a year ago (which he didn’t mention in the video). I had already taken notice of that a while ago, when I had had a look at his LinkedIn profile after learning that he would be the one giving the talk Nandan Nayampally was supposed to have presented. (This was even before we found out from the Quarterly Investor podcast that Nandan and Rob had all of a sudden left the company.)

So no, mine is not a tendentious reading and I am not shooting myself in the foot either, if that is what you meant to say. My argument is well-founded. I don’t exclude the possibility that Chris Jones got introduced to BrainChip while still working for Meta, but I believe it is the unlikelier sequence of events for the reasons stated.

Also: Why would he have asked his LinkedIn network for assistance in finding a new job in his April 2023 LinkedIn post and only started working for BrainChip in October 2023? If he had already been that excited about our company prior to being laid off at Meta, they might even have been able to offer him a new position from August onwards, a smooth transition from Meta to BrainChip without a paycheck missing. Of course I have no idea whether it was possibly a deliberate decision of Chris Jones to pick October as the start date for his new job (maybe he wanted to spend quality time with his family between jobs, go on a long vacation, rest and recharge, renovate the house or perhaps he was suffering from an illness, was taking care of elderly relatives or was grieving for a loved one etc) or whether there was simply no earlier job vacancy for his position at BrainChip, but the two month gap between jobs could just as well signify that he didn’t yet know about BrainChip’s offerings by the time he started looking for a new job and that they were possibly not even his first choice.

Ultimately, everything - and that includes FF’s reading - is speculation, unless we hear it from the horse’s mouth. Can we at least agree on that?
 
  • Like
  • Fire
Reactions: 14 users

Frangipani

Top 20
  • Haha
Reactions: 3 users

Frangipani

Top 20
7D04C7C9-F891-4614-886A-C06DD06A9D8C.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 26 users
Top Bottom