BRN Discussion Ongoing

FJ-215

Regular
A short interview with ARM CEO Rene Haas. Worth a look as we are copying their business model.



 

manny100

Top 20
Good morning,

Look, some criticism is justified in my opinion during different stages of our development: poor communication, poor management of our funds, and poor business planning, as in not addressing traction issues early enough while funds kept being pumped into sales teams who clearly couldn't sell the idea (which I believe was a management issue), plus possibly internal staff accountability issues and so on. But I would suggest that many young start-ups battle similar growing pains.

I have been on board for just shy of 10 years, and I currently have a sense of calm surrounding this investment. Are we in the best position we have been in for years? Yes, I believe so. A lot of it has been down to education: not only of the semiconductor industry and the companies genuinely wanting to understand the real benefits of SNN technologies, but also of our company as a whole. It's been a real learning experience, and all groups appear to be coming out the other side of it into clear air.

Peter's years of hard work and knock-backs clearly show just how far advanced his research was; hence we appear to be coming to that inflection point. The bridge that was once a bridge too far now feels close to completion as we, as a company, claim our share of this new frontier. There's plenty to share around, and our share of the pie will be huge moving forward, I'm convinced of that. My opening paragraph was fair, I believe, but I, like many long-termers, have moved on, learnt from those early errors (growing pains) and can finally see a horizon with a bright light emerging. Our day is closing in, and BrainChip will be successful. Stay the journey with me and many others; we all deserve to rise up as one!

Love our company.... Tech. 💘
Thanks Tech, my take below,
I think a 'prime' reason was that PVM's Neuromorphic invention was way ahead of its time. There were no products it could slip into.
No one had any use for it at all. So it was never going to sell.
In late 2021, Sean, who had a mountain of experience, was employed to commercialise the product. The ASX announcement that we were fully commercial was issued in Jan '22.
He had a 5-year plan approved by the BOD - now 18 months shy of expiring.
An ecosystem has been built, and it's a living system given it's always growing. It's based on the premise that in the semiconductor industry 'no one stands alone' - if you do, you perish. So well done.
I have no doubt that even the experienced Sean and the BOD were surprised at the time it has taken for industry to take up neuromorphic AI at the Edge, for which we are the standout leader. The latest tech roadmap shows that we will remain the leader.
Humans change slowly unless it becomes necessary. The conflicts in Europe and the Middle East, and loose war talk, have prompted defense build-ups worldwide.
That urgent, deemed-necessary DoD beef-up is our opportunity.
Sean mentioned at the AGM that we are submitting proposals to the DoD every month - US AFRL, and the Navy's transition to the Edge via Bascom Hunter.
Industry will start to follow the DoD in earnest, and of course the space program's publicly useful tech.
IMO we get judged from here. The past has passed.
Times 'they are a-changing', and we must take full advantage.
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 37 users

White Horse

Regular
It appears from a recent preprint that Basharat Ali has been running Akida, alongside other platforms, as part of his/her work on cybersecurity.

Akida gives some pretty good results.

Kinda craps on NVIDIA.


Neuromorphic Quantum Adversarial Learning
(NQAL): A Bio-Inspired Paradigm for DNS over HTTPS Threat Detection
Basharat Ali
Nanjing University

Research Article
Keywords: Network Security, NQAL in Network Security, Network Protocols, Enhancing Network Security,
Enhancing DoH Protocol Security, Threats Detection in Encrypted Network, Cyber Attacks Detections
Posted Date: April 30th, 2025


Abstract Excerpt:

To overcome these complex issues, this work proposes a new architecture—Neuromorphic Quantum Adversarial Learning (NQAL)—a bio-inspired, zero-knowledge-supported detection
mechanism combining spiking neural networks (SNNs), quantum noise injection (QNI), and federated swarm intelligence to immunize, rather than detect, DoH-based attacks.

The method relies on a neuromorphic model employing Dynamic Spiking Graph Attention (DSGAT) and Spike-Timing-Dependent Plasticity (STDP) to encode encrypted traffic as dynamic spike trains, enabling ultra-fast, energy-efficient inference on processors such as Intel Loihi and BrainChip Akida.
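For anyone unfamiliar with what "encoding traffic as spike trains" means in practice, here's a bare-bones toy sketch of rate coding, my own illustration only - the paper's actual DSGAT/STDP pipeline is far more involved, and the feature names and values below are invented:

```python
import numpy as np

def rate_encode(features, n_steps=100, rng=None):
    """Toy rate coding: turn normalized flow features into binary spike trains.

    Each feature fires at each timestep with probability equal to its (0..1)
    value, so larger feature values produce denser spike trains for the SNN.
    """
    rng = rng or np.random.default_rng(0)
    f = np.clip(np.asarray(features, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps, f.size)) < f).astype(np.uint8)

# Made-up, already-normalized DoH flow features (e.g. packet size, rate, entropy)
flow = [0.12, 0.85, 0.40]
spikes = rate_encode(flow)
print(spikes.shape)          # (100, 3) - one spike train per feature
print(spikes.mean(axis=0))   # empirical firing rates, roughly the feature values
```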

Experiment setup excerpt:

Experiments were carried out on neuromorphic hardware platforms such as Intel Loihi 2 and BrainChip Akida that provide sub-millisecond latency with low-power event-driven processing characteristics.

Akida-related results excerpt:

Table 5: Hardware Deployment Metrics
Platform | Accuracy | Latency | Power | Throughput
GPU (NVIDIA V100) | 89.2% | 3.1 ms | 45 W | 1,200 QPS
TPUv4 | 91.5% | 2.8 ms | 32 W | 1,500 QPS
Loihi 2 | 98.7% | 0.9 ms | 4 W | 9,800 QPS
Akida | 99.1% | 0.7 ms | 3 W | 12,400 QPS

Outcome of Table 5:

Hardware Deployment Metrics presents the excellent performance of our neuromorphic hardware solutions towards accomplishing peak performance for DoH security systems. Comparing Loihi 2 and Akida to GPU and TPU platforms shows clearly how moving to neuromorphic chips brings important gains in both accuracy and efficiency. Both the GPU (NVIDIA V100) and TPUv4 started with lower performance at 89.2% and 91.5% accuracy, respectively, but when executed on Loihi 2, accuracy jumped dramatically to 98.7%, and improved further to 99.1% on Akida.

This increase in accuracy is accompanied by a drastic reduction in latency, from 3.1 ms for the GPU to 0.7 ms for Akida, illustrating the real-time processing capability of the neuromorphic hardware.

Besides this, the power usage of the Loihi 2 and Akida platforms (4 W and 3 W respectively) represents brilliant power efficiency compared with traditional GPU-based systems consuming 45 W. Throughput is also dramatically increased, with Akida able to support 12,400 QPS, in strong contrast to the GPU's 1,200 QPS.


Such results confirm the unique value of neuromorphic hardware as an approach for energy-efficient, high-performance DoH anomaly detection, and demonstrate how our new approach beats current systems and becomes the future standard for real-time protection of encrypted traffic [14].

Full paper HERE
BrainChip is doing business in Korea, Japan and Taiwan. We have been working with international companies like Renesas. It is hard to imagine that China hasn't had a good look at Akida. Being an Australian company, we are not restricted by US sanctions from dealing directly with China. If we have stepped back from dealing with China to please the US, then I hope it proves worth it.
Hi Slade, remember this from 2022.
https://www.ex3.simula.no/resources

Posted by FMF.
https://thestockexchange.com.au/thr...rver-kunpeng-920-processor-using-akida.29899/

Well, if anybody was doubting interest from China.
I'd say the answer is YES.!!!
 
  • Like
  • Fire
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Lockheed Martin! Cybersecurity!

Let's get a wriggle on and get this CyberNeuro-RT technology out into the market! 🤞

Blog published 3 days ago.



Pioneering Innovation:

How Lockheed Martin is using AI to transform cybersecurity​

May 05, 2025



In the unseen battlespaces of the digital age, where cyber threats hide around every corner, the U.S. military and its allies are in an unrelenting battle to safeguard their most critical assets: the networks, systems, and secrets that power national defense.
As the cyber landscape rapidly evolves, industry leaders like Lockheed Martin are driving an unprecedented shift in operations by incorporating artificial intelligence (AI) and machine learning (ML) to jump ahead of emerging cyber threats.

Advancing Cybersecurity​

When looking at the future of defense, cyber capabilities are at the frontlines. It is where conflict starts and deterrence is tested. With modern military operations relying heavily on software and data, cyber-attacks can have devastating effects. With networks down, you cannot operate, communicate, or make informed decisions. Because of this, increasing cybersecurity and resiliency is a critical aspect of Lockheed Martin's operations. The Lockheed Martin Artificial Intelligence Center (LAIC) is at the heart of this innovation, driving the development and implementation of cutting-edge AI and ML solutions to enhance cyber defenses for the warfighter.
“The current pace of change in AI research and the ever-increasing level of investments means that the state of the art (SOTA) AI five years ago is commonplace now,” explained Dan Reese, Lockheed Martin Associate Fellow. “Lockheed Martin continues to mature and invest in cyber advancements and new capabilities using AI/ML that will allow our customers to maintain an edge on the battlefield."

Here are three ways it’s being done:​

Threat Detection
The LAIC is developing AI-powered threat detection systems that can identify and respond to emerging threats in real-time, reducing the risk of cyber-attacks and data breaches before they occur.
These systems use advanced analytics and ML algorithms to analyze network traffic and system data to identify patterns and anomalies that may indicate a potential threat. This in turn enables swift and effective response, reducing the risk of cyber-attacks, protecting critical systems and infrastructure.
Vulnerability Assessment
Lockheed Martin has developed a new approach to identifying cyber vulnerabilities, exposing exploits and potential attack vectors. By leveraging a multi-agent reinforcement learning (MARL) framework, cyber-attack sequences can be prioritized based on adversarial mission objectives. By understanding how adversaries can potentially compromise our mission systems/platforms, our defenders can now develop countermeasures and effective mitigation techniques with AI at a fraction of the previous cost.
Cloud Transformation
To further enhance cybersecurity, Lockheed Martin is leading the way in cloud transformation, recognizing the immense benefits it offers in terms of scalability, flexibility, and cost savings. By migrating its systems and applications to the cloud, the company is able to enhance cybersecurity, reduce latency, and improve data processing speeds. The LAIC is playing a critical role in this effort, leveraging its expertise in AI and ML to develop cloud-based solutions that enable real-time data analysis, predictive maintenance, and advanced threat detection.
One notable example of Lockheed Martin's cloud transformation is its work on the Department of Defense's (DoD) Joint All-Domain Command and Control (JADC2) program. The LAIC is collaborating with the DoD to develop a cloud-based architecture that enables seamless communication and data sharing across different domains, including air, land, sea, space, and cyber. This innovative approach will enhance situational awareness, improve decision-making, and increase the speed of response to emerging threats.

Warfighting Solutions​

Pioneering Innovation


By embracing cloud computing and AI/ML technologies, Lockheed Martin is enhancing cybersecurity, improving efficiency, and unlocking new insights and capabilities. These cutting-edge technologies are also increasing the effectiveness of military operations and driving the development of a more agile, adaptive, and responsive military force, capable of addressing the complex and evolving threats of the modern battlefield.
“What's exciting about AI is its ability to transform not only how we do work within Lockheed, but ultimately how we transform and provide new capabilities to the warfighter,” stated Greg Forrest, AI Foundations Director. “At the end of the day, that's why we're here. We're here to support our customers. We're here to support our service members. And I think there's tremendous opportunity to utilize AI across the board to enable a more safe, secure world for our warfighters.”




 
  • Like
  • Fire
  • Love
Reactions: 34 users

manny100

Top 20
Lockheed Martin! Cybersecurity!

Let's get a wriggle on and get this CyberNeuro-RT technology out into the market! 🤞

Blog published 3 days ago.




Hi Bravo, thanks. After the AGM it's only a matter of time before we drop a Quantum Ventura CyberNeuro-RT-type game-changer on Pico.
I am sure it's being planned as we speak.
If we look at the cybersecurity white paper, it's the small devices that connect to the main network that are the weak/vulnerable points.
A Pico or Pico Plus running a TENNs/QV cybersecurity SSM could close those loops.
The possibilities are huge.
Personal mobiles, Health small devices, all DOD small devices etc.
 
  • Like
  • Fire
  • Love
Reactions: 16 users

Diogenese

Top 20
Thanks FK,

The roadmap is chock full of groundbreaking advances. I felt that question time could have been better utilized by addressing the new opportunities these advances provide.
Akida GenAI & Akida 3 have been adapted to handle 16-bit integer and 32-bit FP. This, in addition to the malleable architecture, enables these two chips to be flexibly configured to handle all types of models and to be adapted for future applications.

The provision of a LUT in place of an activation function seems like a patentable idea if original. We are also told by JT that a patent application is in the pipeline for a new technique for retrieving data from memory. This is the most energy-intensive operation, so the invention should further improve power efficiency and probably latency as well.
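As a rough illustration of the general idea (my own sketch, not BrainChip's design; the 256-entry size and quantization parameters are assumptions), an INT8 lookup-table activation can stand in for an explicit non-linearity like this:

```python
import numpy as np

def build_activation_lut(fn, scale, zero_point):
    """Precompute fn() for every possible INT8 code, re-quantized back to INT8.

    Illustrative only: assumes the usual affine scheme real = scale * (q - zero_point)
    and the same scale/zero_point on input and output.
    """
    codes = np.arange(-128, 128, dtype=np.int32)
    real_in = scale * (codes - zero_point)
    real_out = fn(real_in)
    q_out = np.clip(np.round(real_out / scale) + zero_point, -128, 127)
    return q_out.astype(np.int8)

def lut_activation(q_x, lut):
    """Apply the activation as a single table lookup per element."""
    return lut[q_x.astype(np.int32) + 128]

# Usage sketch: a 256-entry table standing in for sigmoid
lut = build_activation_lut(lambda x: 1.0 / (1.0 + np.exp(-x)), scale=0.05, zero_point=0)
q_in = np.random.randint(-128, 128, size=(4, 8), dtype=np.int8)
q_out = lut_activation(q_in, lut)  # no exponential evaluated at inference time
```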

Akida 2 is 8 times more efficient than Akida 1, and presumably that also applies to GenAI & Akida 3 for equivalent Akida 1 tasks. However, 16-bit integer and 32-bit FP seem to provide excessive capabilities for an edge device. Does Nvidia need to look over its shoulder "like one that on a lonesome road doth walk in fear and dread, and having once turned round, walks on, and turns no more his head, because he knows that close behind a frightful fiend doth tread"?
 
Last edited:
  • Like
  • Fire
  • Love
Reactions: 63 users
  • Like
  • Fire
Reactions: 28 users

TECH

Regular
Akida GenAI & Akida 3 have been adapted to handle 16-bit integer and 32-bit FP. This, in addition to the malleable architecture, enables these two chips to be flexibly configured to handle all types of models and to be adapted for future applications.

The provision of a LUT in place of an activation function seems like a patentable idea if original. We are also told by JT that a patent application is in the pipeline for a new technique for retrieving data from memory. This is the most energy-intensive operation, so the invention should further improve power efficiency and probably latency as well.

Akida 2 is 8 times more efficient than Akida 1, and presumably that also applies to GenAI & Akida 3 for equivalent Akida 1 tasks. However, 16-bit integer and 32-bit FP seem to provide excessive capabilities for an edge device. Does Nvidia need to look over its shoulder "like one that on a lonesome road doth walk in fear and dread, and having once turned round, walks on and turns no more his head, because he knows that close behind a frightful fiend doth tread"?

Hi Dio,

We, Brainchip are at an amazing point in our development, it's super exciting!

You, being a retired engineer, could appreciate the amount of work that's been quietly going on behind the scenes over the last 12 months. We all thought no more doors could possibly open up to this technology, but this latest news out of the engineering department dispels that idea. Is it just me, or do you and others think that Nvidia are just plain stubborn in their approach to our architecture? (If that makes sense.)

Is the gap going to potentially widen after the release of our advancement of the Akida suite of offerings?

Our solid roadmap will definitely get the attention of all our potential competition. Like many, I'd suggest, I'm hoping to see us succeed on our own two feet for a few years before considering an offer, if one happened to surface at some future point.

Regards....Tech
 
  • Like
  • Fire
  • Thinking
Reactions: 26 users

MDhere

Top 20
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders - all shareholders who experienced the Mercedes hype - to feel better.

I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so encouraged by Sean acknowledging that he needs to deliver on the promise of "watch the financials".
 
  • Like
  • Fire
  • Thinking
Reactions: 23 users
Something’s wrong we are green on a Friday 😂
 
  • Like
  • Haha
  • Fire
Reactions: 16 users
For those who missed the board meeting, here’s an important update:


In the past, BrainChip focused on pursuing large deals with major corporations—essentially targeting "big whale" contracts that promised substantial immediate revenue. However, this approach has faced challenges due to the slow decision-making processes typical of large enterprises. (That's what they said; my personal opinion is that BrainChip couldn't offer enough features with the old Akida.)


The company has now shifted its strategy to focus on signing deals of all sizes, with an emphasis on faster execution. This new approach aligns with market demand, as many clients today are willing to pay quickly in order to get working solutions delivered without delay. This has already proven to be the right move, as seen with recent deals involving Onsor, Frontgrade Gaisler, and others.

With the integration of state space model use cases, BrainChip is well-positioned to see a significant uptick in deal volume this year. While individual deal values may be smaller, the ability to deliver repeatable solutions across a niche can generate strong cumulative returns.


Additionally, since many modern products already use state space models—often implicitly—combining them with Akida's spiking neural network (SNN) and TENNs technology enables ultra-low power consumption. This gives BrainChip a major competitive edge and positions it to quickly dominate the edge AI and IoT device markets.
 
  • Like
  • Love
  • Fire
Reactions: 46 users

Diogenese

Top 20
Hi Dio,

We, Brainchip are at an amazing point in our development, it's super exciting!

You, being a retired engineer, could appreciate the amount of work that's been quietly going on behind the scenes over the last 12 months. We all thought no more doors could possibly open up to this technology, but this latest news out of the engineering department dispels that idea. Is it just me, or do you and others think that Nvidia are just plain stubborn in their approach to our architecture? (If that makes sense.)

Is the gap going to potentially widen after the release of our advancement of the Akida suite of offerings?

Our solid roadmap will definitely get the attention of all our potential competition. Like many, I'd suggest, I'm hoping to see us succeed on our own two feet for a few years before considering an offer, if one happened to surface at some future point.

Regards....Tech
Hi tech,

I think that the Akida 3/GenAI adaptability, and the capability to handle any model or to pass incompatible models to the CPU, would be a major advantage in the cloud. I still see these as working as a coprocessor with a CPU/GPU, because there will still be a need to run software. However, using SSMs (state space models) like TENNs in the cloud has the potential to substantially reduce the cooling power requirements.
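For anyone wondering why SSMs are attractive on power, here's a bare-bones sketch of the linear state-space recurrence that models like TENNs build on (my own toy dimensions and numbers, not BrainChip's implementation): the state has a fixed size, so compute and memory per input step stay constant no matter how long the sequence gets, unlike a transformer's growing KV cache.

```python
import numpy as np

def ssm_step(A, B, C, x, u):
    """One step of a discrete linear state space model:
        x_{k+1} = A x_k + B u_k
        y_k     = C x_k
    The state x is fixed-size, so work per input is constant.
    """
    x_next = A @ x + B @ u
    y = C @ x
    return x_next, y

# Toy dimensions for illustration only
state_dim, in_dim, out_dim = 16, 4, 4
rng = np.random.default_rng(0)
A = 0.9 * np.eye(state_dim)                      # stable toy dynamics
B = rng.normal(size=(state_dim, in_dim)) * 0.1
C = rng.normal(size=(out_dim, state_dim)) * 0.1

x = np.zeros(state_dim)
for u in rng.normal(size=(1000, in_dim)):        # stream 1000 inputs
    x, y = ssm_step(A, B, C, x, u)               # constant work per step
```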

Qualcomm's Hexagon is designed to split AI workloads between the CPU, GPU, and NPU on the basis of workload size:

https://www.qualcomm.com/content/da...I-with-an-NPU-and-heterogeneous-computing.pdf

A personal assistant that offers a natural voice user interface (UI) to improve productivity and enhance user experiences is expected to be a popular generative AI application. The speech recognition, LLM, and speech models must all run with some concurrency, so it is desirable to split the models between the NPU, GPU, CPU, and the sensor processor. For PCs, agents are expected to run pervasively (always-on), so as much of it as possible should run on the NPU for performance and power efficiency.
...

...

7.1 The processors of the Qualcomm AI Engine

Our latest Hexagon NPU offers significant improvements for generative AI, delivering 98% faster performance and 40% improved performance per watt. It includes micro-architecture upgrades, enhanced micro-tile inferencing, reduced memory bandwidth, and a dedicated power rail for optimal performance and efficiency. These enhancements, along with INT4 hardware acceleration, make the Hexagon NPU the leading processor for on-device AI inferencing.

The Adreno GPU, besides being the powerhouse engine behind high-performance graphics and rich user experiences with low power consumption, is designed for parallel processing AI in high precision formats, supporting 32-bit floating point (FP32), 16-bit floating point (FP16), and 8-bit integer (INT8). The upgraded Adreno GPU in Snapdragon 8 Gen 3 yields 25% improved GPU power efficiency, enhanced AI, gaming, and streaming. Llama 2-7B can generate more than 13 tokens per second on the Adreno GPU.

...

As previously mentioned, most generative AI use cases can be categorized into on-demand, sustained, or pervasive. For on-demand applications, latency is the KPI since users do not want to wait. When these applications use small models, the CPU is usually the right choice. When models get bigger (e.g., billions of parameters), the GPU and NPU tend to be more appropriate. For sustained and pervasive use cases, in which battery life is vital and power efficiency is the critical factor, the NPU is the best option.

...

As mentioned in the prior section, CPUs can perform well for low-compute AI workloads that require low latency.

...


Note that the Adreno GPU does 32-bit FP and INT8 for high precision. Akida 3 will also have 32-bit FP, but with INT16.

Qualcomm recommends CPU for low compute AI workloads!!???
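Read literally, that guidance boils down to a simple dispatch heuristic on model size and use case. A rough sketch of the idea, purely my own illustration (the thresholds and names are invented; this is not Qualcomm's API):

```python
from enum import Enum

class Processor(Enum):
    CPU = "CPU"
    GPU = "GPU"
    NPU = "NPU"

def pick_processor(param_count: int, use_case: str) -> Processor:
    """Toy dispatcher following the heuristic described in the Qualcomm paper:
    - sustained / pervasive (always-on) workloads -> NPU for power efficiency
    - on-demand workloads: small models -> CPU (low latency to first result),
      large models (billions of parameters) -> GPU or NPU.
    The 100M-parameter cutoff below is made up for illustration.
    """
    if use_case in ("sustained", "pervasive"):
        return Processor.NPU
    if param_count < 100_000_000:
        return Processor.CPU
    return Processor.GPU

# Usage sketch
print(pick_processor(10_000_000, "on-demand"))     # Processor.CPU
print(pick_processor(7_000_000_000, "on-demand"))  # Processor.GPU
print(pick_processor(7_000_000_000, "pervasive"))  # Processor.NPU
```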

Looking forward to the Hexagon/Akida 3 High Noon!
 
  • Like
  • Fire
  • Love
Reactions: 26 users
 
  • Like
  • Fire
  • Love
Reactions: 13 users

jrp173

Regular
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders - all shareholders who experienced the Mercedes hype - to feel better.

I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so encouraged by Sean acknowledging that he needs to deliver on the promise of "watch the financials".

MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
 
  • Haha
Reactions: 1 users
Talks a fair bit about AX45MP

 
  • Like
Reactions: 2 users
MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
The latter sounds the better 👍
 
  • Fire
  • Haha
Reactions: 3 users

Drewski

Regular
Today's price of .24c x 10 would be where we need to be, at the very minimum, before we enter any redomicile.

.24 x 10 would be where we need to be for shareholders - all shareholders who experienced the Mercedes hype - to feel better.

I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected.

Bring on zero-shot learning after that and we are on a winner for the long run!

But bring on the .24 x 10 first, then we can get back to where we were.

I'm encouraged by the roadmap, but more so encouraged by Sean acknowledging that he needs to deliver on the promise of "watch the financials".
One might say, the prescription and the medicine.
 
  • Like
Reactions: 1 users

MDhere

Top 20
MDhere, what do you mean by "I am hoping 1 or two million bookings brings what a mountain of revenue and positive onflow which is expected"?

Are you hoping that one or two million dollars of bookings is going to bring in a mountain of revenue, or are you hoping we are going to get one or two million individual bookings?
I look at the lowest point of the needle, and then that needle moves up - as Sean mentioned, 9 mil USD.
 

Rach2512

Regular
Mentions neuromorphic processors at the 55-second mark.

 
  • Like
  • Love
  • Fire
Reactions: 19 users