BRN Discussion Ongoing

Diogenese

Top 20
Many of the long term holders have been here for going on 10 years, so it's not surprising that patience occasionally wears thin. However, it is only in the last 5 years that Akida has been available. Before that there was Brainchip Studio (software) and subsequently Brainchip Accelerator (hardware adapted to improve the function of Studio). Akida has gone through a number of iterations since then, and the business model has changed from IP plus ASIC to IP only, and now back to IP plus a somewhat limited hardware embodiment plus "software" in the form of models and MetaTF.

One of the significant developments in Akida 1 was on-chip learning, which removes the need to retrain the model when new items have to be added to it, a task which previously required retraining the entire model in the cloud. Cloud training is a major cost in compute and energy for the major AI engines.

Our tech has continued to evolve at a rapid rate under pressure of expanding market requirements. The commercial version of Akida 1 went from 1-bit weights and activations to 4-bit. Akida 1 has been incorporated in groundbreaking on-chip cybersecurity applications (QV, DoE), which is surely a precursor for Akida 2/TENNs in microDoppler radar for AFRL/RTX. Akida 2 is embodied in FPGA for demonstration purposes.

Akida 2 went to 8-bit weights and activations. It also introduced new features like skip connections, TENNs and state-space models.
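
For readers unfamiliar with what those bit widths mean in practice, here is a minimal sketch of uniform symmetric quantization (a generic textbook scheme, not BrainChip's actual method), showing how reconstruction error falls as weights move from 4-bit to 8-bit:

```python
import numpy as np

def quantize(x, bits):
    """Uniform symmetric quantization of an array to signed n-bit integers.
    A generic textbook scheme, not BrainChip's proprietary one."""
    qmax = 2 ** (bits - 1) - 1          # 7 for INT4, 127 for INT8
    scale = np.max(np.abs(x)) / qmax    # one scale factor per tensor
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)
for bits in (4, 8):
    q, scale = quantize(weights, bits)
    err = np.abs(weights - q * scale).mean()
    print(f"INT{bits}: mean reconstruction error = {err:.5f}")
```

Each extra bit halves the quantization step, so the move from 4-bit to 8-bit cuts the average error by roughly a factor of 16 while doubling the memory per weight.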

The Roadmap also included Akida GenAI with INT16, also in FPGA, and Akida 3 (INT16, FP32), to be incorporated in FPGA within a year.

There is a clear trend toward increased precision (INT16/FP32) in Akida 3. This indicates an improved ability to distinguish smaller features, although at the expense of greater power consumption. To at least partially compensate for the increased power consumption, Jonathan Tapson did mention a couple of under-wraps patent applications to improve the efficiency of data movement in memory. So I see Akida 3 as a highly specialized chip adapted for applications needing extremely high accuracy, while I think Akida 2/TENNs will be the basis for our general IP/SoC business.

The fact that Akida has evolved so rapidly in such large steps is a tribute to the flexibility of its basic design.

https://brainchip.com/brainchip-sho...essing-ip-and-device-at-tinyml-summit-2020-2/

BrainChip Showcases Vision and Learning Capabilities of its Akida Neural Processing IP and Device at tinyML Summit 07.02.2020​

SAN FRANCISCO–(BUSINESS WIRE)–
BrainChip Holdings Ltd. (ASX: BRN), a leading provider of ultra-low power, high-performance edge AI technology, today announced that it will present its revolutionary new breed of neuromorphic processing IP and Device in two sessions at the tinyML Summit at the Samsung Strategy & Innovation Center in San Jose, California February 12-13.

In the Poster Session, “Bio-Inspired Edge Learning on the Akida Event-Based Neural Processor,” representatives from BrainChip will explain to attendees how the company’s Akida™ Neuromorphic System-on-Chip processes standard vision CNNs using industry standard flows and distinguishes itself from traditional deep-learning accelerators through key design choices and a bio-inspired learning algorithm. These features allow Akida to require 40 to 60 percent fewer computations to process a given CNN when compared to a DLA, as well as allowing it to perform learning directly on the chip.

BrainChip will also demonstrate “On-Chip Learning with Akida” in a presentation by Senior Field Applications Engineer Chris Anastasi. The demonstration will involve capturing a few hand gestures and hand positions from the audience using a Dynamic Vision Sensor camera and performing live learning and classification using the Akida neuromorphic platform. This will showcase the fast and lightweight unsupervised live learning capability of the spiking neural network (SNN) and the Akida neuromorphic chip, which requires much less data than a traditional deep neural network (DNN) counterpart and consumes much less power during training.

“We look forward to having the opportunity to share the advancements we have made with our flexible neural processing technology in our Poster Session and Demonstration at the tinyML Summit,” said Louis DiNardo, CEO of BrainChip. “We recognize the growing need for low-power machine learning for emerging applications and architectures and have worked diligently to provide a solution that performs complex neural network training and inference for these systems. We believe that as a high-performance and ultra-low-power neural processor, Akida is ideally suited to be implemented in Edge and IoT applications.”

Akida is available as a licensable IP technology that can be integrated into ASIC devices and will be available as an integrated SoC, both suitable for applications such as surveillance, advanced driver assistance systems (ADAS), autonomous vehicles (AV), vision guided robotics, drones, augmented and virtual reality (AR/VR), acoustic analysis, and Industrial Internet-of-Things (IoT). Akida performs neural processing on the edge, which vastly reduces the computing resources required of the system host CPU. This unprecedented efficiency not only delivers faster results, it consumes only a tiny fraction of the power resources of traditional AI processing while enabling customers to develop solutions with industry standard flows, such as Tensorflow/Keras. Functions like training, learning, and inferencing are orders of magnitude more efficient with Akida.

Tiny machine learning is broadly defined as a fast growing field of machine learning technologies and applications including hardware (dedicated integrated circuits), algorithms and software capable of performing on-device sensor (vision, audio, IMU, biomedical, etc.) data analytics at extremely low power, typically in the mW range and below, and hence enabling a variety of always-on use-cases and targeting battery operated devices. tinyML Summit 2020 will continue the tradition of high-quality invited talks, poster and demo presentations, open and stimulating discussions, and significant networking opportunities. It will cover the whole stack of technologies (Systems-Hardware-Algorithms-Software-Applications) at the deep technical levels, a unique feature of the tinyML Summits. Additional information about the event is available at
https://tinymlsummit.org/
An application requiring INT16 accuracy (65,536 levels) would be far beyond the capability of an analog SNN.
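
A back-of-envelope calculation supports that claim. An ideal N-bit quantizer needs a signal-to-noise ratio of roughly 6.02N + 1.76 dB, and analog circuits are generally limited by device mismatch and noise to far less effective precision than the ~98 dB that 16 bits implies (the exact analog ceiling varies by design; treat these numbers as rough):

```python
# Rough dynamic-range requirement for INT16 precision.
# Rule of thumb for an ideal N-bit quantizer: SNR ~= 6.02*N + 1.76 dB.
bits = 16
levels = 2 ** bits                 # 65,536 distinct values
snr_db = 6.02 * bits + 1.76        # ~98 dB of noise-free dynamic range
print(levels, round(snr_db, 1))    # 65536 98.1
```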
 

manny100

Top 20
@Frangipani

Great research and logic as usual.

When we go back to the BH & Navair transition program paper and roadmap, we can see that part of the commercialisation project will be a crossover to other DoD areas incl AFRL.

Given the web interconnecting all the players, incl BRN, there is always the possibility, imo, that the card is far enough along that AFRL could currently be looking at the BH card as well.



View attachment 88803
Good pick up FMF. In the paragraph earlier, BH say they intend to be an industry leader. I have no doubt they would have spruiked their 'card' to all and sundry.
No doubt RTX and Lockheed Martin have a card as well.
 

Yoda

Regular
I can’t really explain why they’re not doing it either…

But one possible reason could be that they want to avoid situations like the one with Mercedes happening again.

Things like that attract the attention of regulatory authorities, who might take action that could further damage BrainChip’s reputation.

Extreme ups and downs are a red flag for regulators…

We all know how fast they put a speeding ticket out of the 🎩 after a nice rise! 🙄
But they didn't announce anything regarding Mercedes - Mercedes did.

They are certainly concerned about the regulators but as a director you need to do your job, you can't be so afraid of ASIC and the ASX that you don't actually fulfill your continuous disclosure obligations and director's duties properly and according to law (and that is what Fact Finder was trying to explain to them, quite correctly in my opinion, at the AGM).

Furthermore, I still can't understand why the partnerships can't at least be announced as non-price sensitive. If they are worthy enough to be announced on X and LinkedIn then they surely merit at least a non-price sensitive announcement. I don't see how a non-price sensitive announcement can be taken to be ramping the stock.
 

jrp173

Regular

Yoda, totally agree with what you have said. The fact remains that BrainChip CAN make price sensitive and non price sensitive announcements around deals and partnerships, but Antonio made it crystal clear at the AGM, they don't because he doesn't want to (he actually said he did not want to get in trouble with the ASX). What a load of nonsense.

But additionally, even if BrainChip did make an ASX announcement regarding Mercedes (which we know they did not), so what if the share price skyrocketed? If that's how the market reacts to news, then that's how the market reacts to news.

It's not ramping... it's just the market's legitimate reaction to news flow! Our ridiculous Chairman needs to understand the difference between the two!
 

HopalongPetrovski

I'm Spartacus!
The potential saving grace, and perhaps even the strategy, of being so tight-lipped regarding ASX announcements is that once something commercially significant is announced, years of pent-up coiled-spring momentum, and capital that has been kept in reserve for just such a run, may propel our share price in a similar manner to when we had the MB announcement.

Shortly after that I would expect the redomicile to be refloated with perhaps the incentive of a cornerstone investor or stakeholder willing to come aboard, providing we shift to a USA market. We'll all be so giddy and drunk with happiness at holding for so long and finally getting a result, we'll give them whatever they want. 🤣
 

TECH

Regular


Great post Dio.... right on point......Tech... (I 💓AKD) ..........we both agree, AKD will succeed.
 

manny100

Top 20

BrainChip’s Runway: Longer Haul, Not Instant Failure​

BrainChip’s near-term revenue lag isn’t an omen of doom but a hallmark of deep-tech ventures: cutting-edge R&D takes time to commercialize. While SynSense and Innatera captured early inference-only wins, BrainChip bet on the harder path—on-chip learning—which demands more silicon complexity, partner integration and validation before scale sales kick in.

Why It’s Likely to Succeed—Eventually​

  • Strategic Pilot Programs: Military (AFRL), aerospace (Airbus, European Space Agency) and automotive partners are already evaluating Akida’s on-chip learning. Converting these pilots into production licences can unlock substantial royalties over time.
  • Unique Technological Differentiator: Incremental and one-shot learning directly on device remains rare. As edge AI markets grow, on-chip learning will command premium adoption in privacy-sensitive, low-latency systems (wearables, smart cameras, industrial sensors).
  • Strong Cash Runway and Funding: BrainChip’s recent capital raises, combined with modest burn rates relative to its R&D outlay, give it the financial cushion to refine Akida IP and expand silicon tape-outs with foundry partners (e.g., GlobalFoundries, Intel Foundry Services).
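
The "incremental and one-shot learning" differentiator can be illustrated with a toy nearest-prototype classifier, where adding a new class is a single vector write rather than a gradient-descent retrain. This is only a loose analogue; Akida's actual edge-learning algorithm is different and not public:

```python
import numpy as np

class PrototypeClassifier:
    """Toy nearest-prototype classifier: learning a new class is one vector
    write, with no retraining of existing classes -- a loose analogue of
    on-chip one-shot learning (Akida's real algorithm is proprietary)."""
    def __init__(self):
        self.prototypes = {}            # label -> stored feature vector

    def learn(self, label, feature):
        self.prototypes[label] = np.asarray(feature, dtype=float)

    def classify(self, feature):
        feature = np.asarray(feature, dtype=float)
        return min(self.prototypes,
                   key=lambda l: np.linalg.norm(self.prototypes[l] - feature))

clf = PrototypeClassifier()
clf.learn("fist", [1.0, 0.0, 0.0])
clf.learn("palm", [0.0, 1.0, 0.0])
print(clf.classify([0.9, 0.1, 0.0]))   # a noisy "fist" sample -> fist
# Adding one new gesture is a single call, no cloud retraining:
clf.learn("wave", [0.0, 0.0, 1.0])
print(clf.classify([0.1, 0.0, 0.95]))  # -> wave
```

The feature vectors here stand in for whatever embedding the sensor pipeline produces; the point is only that class addition costs one write, not a training run.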
 

7für7

Top 20
Read this
 

TECH

Regular
Regarding Edward Lien moving on, Brainchip does have a requirement in the job description of an evaluation after 1 year, with regard to an IP signing (1 or more) or at least solid traction shown by a potential customer.

Very happy to be corrected by the company, not a forum poster. Whether Edward decided to leave for his own personal reasons can't be disclosed, for obvious privacy reasons, but we move forward. So many top-class Taiwanese technology brains to choose from; we just need the right individual to connect with the right entities .......... our technology is proven, it's evolving to the point of explosion .... time will eventually be our mate!!!

Thanks for your efforts, Edward..........kind regards Tech (y)
 

Diogenese

Top 20
I should have prefaced this discussion by saying this is all just my conjecture.

Akida 3 steps up from Akida 2's INT8 to INT16/FP32, which means it will have a significantly larger silicon footprint (fewer chips per wafer) and hence be much more expensive to manufacture, so it will be going into Rolls-Royce applications (not literally (as far as I know)), where accuracy is the determining feature.

This would require a high information density sensor.

Thinking about that, DVS (Prophesee) is not a highly detailed sensor. Nor is Lidar. They are both inherently sparse. On the other hand, cameras are high density sensors. I would include microDoppler in the high density/small signal variation basket because there is a need to detect small variations in the frequency of the reflected signal. We know of one application for microDoppler radar. I can't think of an application for such high discrimination for images, but that does not exclude the possibility, eg, forged paintings?
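
To put numbers on "small variations in frequency": the two-way Doppler shift is f_d = 2v/λ. With an assumed X-band carrier of 10 GHz (illustrative only, not from any AFRL/RTX document), limb or rotor motions of a few cm/s produce shifts of only a few hertz riding on a bulk return of hundreds of hertz:

```python
# Back-of-envelope scale of the frequency shifts a micro-Doppler
# classifier must resolve. Carrier frequency is an assumption.
c = 3e8                      # speed of light, m/s
f_radar = 10e9               # X-band radar carrier, 10 GHz (assumed)
wavelength = c / f_radar     # 0.03 m

def doppler_shift(v):
    """Two-way Doppler shift (Hz) for radial velocity v (m/s)."""
    return 2 * v / wavelength

bulk = doppler_shift(10.0)       # whole target moving at 10 m/s
micro = doppler_shift(0.05)      # 5 cm/s rotor/limb modulation
print(f"bulk: {bulk:.0f} Hz, micro: {micro:.2f} Hz")
```

Resolving a few hertz against a carrier of 10 GHz is exactly the high-density/small-signal-variation regime described above.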

Changing tack, it recently occurred to me that DNA analysis would be right up Akida's alley.

.
 
I don't think Edward is leaving. IMO this is an additional role in the region......
 
By DNA analysis do you mean like in DNA sequencers?
 

Diogenese

Top 20
Just like in CSI, where you line up all the A, C, G, & Ts. (Above my paygrade really).

Akida does pattern recognition ...
 