BRN Discussion Ongoing

Just because you cannot think of any doesn’t mean there aren’t any… 😉

Not all of BrainChip’s EAP customers have been disclosed to date.
Nevertheless, a few of them were officially announced (which obviously reduces the possible number of yet undisclosed EAP customers and hence the likelihood of Meta being one of them).




View attachment 65127



View attachment 65128




View attachment 65129



It also sounds as if BrainChip considered Valeo an EAP customer:




View attachment 65131


And then there was also the ASX announcement of an agreement with a Tier-1 Automotive Manufacturer (which was revealed the next day to be Ford, after a “please explain” from the ASX forced BrainChip to disclose it).

View attachment 65132



View attachment 65133
Yeah, but apart from Ford (which was never actually announced as an EAP, though it's safe to assume it was one), the EAPs that have been announced have either been obscure or relatively small fry. The Ford situation was not a good one for the Company to have been put in, and it's a major reason why the Company now keeps its lips tightly zipped.

NASA is a huge name, but not really a business.

"Not all of BrainChip’s EAP customers have been disclosed to date"

Your statement implies that most have, which would suggest you think we only have a handful. I really hope that's not the case, and I don't think it is.

Mercedes can also be safely assumed, but was never announced as an EAP (or I'm sure you would have dug it up).

So my point still stands: it's a very weak argument to say that Meta not being announced as an EAP carries any weight against the possibility 😛
 
  • Like
Reactions: 16 users

manny100

Top 20
(quoting the post above in full)
It's likely almost all the relevant big companies would have at least had a look at AKIDA.
It's the quick and the dead, and no one wants to be left behind, including holders, because once some of the window shoppers who have been trialling the product become customers, the SP will fly.
Not all will buy, but we do not need all of them.
 
  • Like
Reactions: 12 users

IloveLamp

Top 20
  • Wow
  • Like
Reactions: 3 users
(quoting manny100's post above)
It is now well over two years since Rob Telson's interview (March 2022) with Al Martin of "Making Data Simple".



In it, Al asked, and Rob answered, the following:

Al: "Who is your biggest competitor?"

Rob Telson: "There are a lot of great companies that are designing and have developed applications and devices to support AI in the future. I think that when you look at the company that has really seen some success incorporating AI into active working products it’s the big guy that’s developed GPU’S and that is NVIDIA. But what they’re doing doesn’t support the edge devices of the future, and that’s where we strongly believe two things. Number one we don’t see companies like that as a competitor. We actually see them as a partner where we can complement what they have started and what they're doing, and our technology can work side by side in those environments or it can work independent in those environments."



This was when Nvidia was a company that few outside of gamers and crypto miners had heard of.

Can you imagine what would happen now, if BrainChip announced that NVIDIA, now the largest company on the planet by market cap, had become a partner or customer?

The mind boggles..



"Please God!"
 
  • Like
  • Love
  • Fire
Reactions: 48 users

jtardif999

Regular
Somewhat disappointingly, there is no image of Akida to be spotted in the pictures showing impressions of the first “Swedish SNN network seminar on industrial applications of SNN”.

View attachment 65066


Ericsson’s Ahsan Javed Awan continues to be enamoured with Intel, although I noticed a slight change in his slide, where “neuromorphic hardware” is no longer followed by “(Loihi 2)”. Interesting, given that “Lava is platform-agnostic, so that applications can be prototyped on conventional CPUs/GPUs and deployed to heterogeneous system architectures spanning both conventional processors as well as a range of neuromorphic chips such as Intel’s Loihi.”
(https://lava-nc.org/)
Could that signify he is also trying out other processors these days?
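
To picture what that platform-agnostic workflow looks like in practice, here is a minimal sketch loosely based on the public Lava tutorials (the module paths, parameters and run-config options are my assumptions and may differ between Lava releases): a tiny two-population spiking network is built once and executed with a CPU simulation run config, the idea being that the same process graph could later be pointed at neuromorphic hardware instead.

Code:
import numpy as np

# Lava building blocks, as shown in the public lava-nc tutorials
# (module paths are assumptions and may vary between releases)
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

# Tiny two-population spiking network: 4 LIF neurons feeding 2 LIF neurons
pre = LIF(shape=(4,), vth=10, du=0, dv=0, bias_mant=3)  # input population driven by a constant bias
dense = Dense(weights=np.ones((2, 4)))                  # all-to-all connectivity
post = LIF(shape=(2,), vth=10, du=0, dv=0)              # output population

# Route spikes from pre through the Dense connection into post
pre.s_out.connect(dense.s_in)
dense.a_out.connect(post.a_in)

# Prototype on a conventional CPU: Loihi1SimCfg selects a behavioural
# simulation model; in principle only this run config would change when
# targeting a physical neuromorphic backend.
post.run(condition=RunSteps(num_steps=100), run_cfg=Loihi1SimCfg(select_tag="floating_pt"))
print("output membrane voltages:", post.v.get())
post.stop()

If that portability works as advertised, only the run_cfg line changes between CPU prototyping and a neuromorphic target, which would make dropping the explicit "(Loihi 2)" from the slide an easy edit.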

June 2024
View attachment 65075

https://thestockexchange.com.au/threads/brn-discussion-ongoing.1/post-418864

compare to July 2023
View attachment 65074

Dylan Muir from SynSense gave a remote presentation on Speck:

View attachment 65067

And I also spotted the SpiNNaker and IBM logos on the opening slide of the online presentation by Jörg Conradt (KTH Stockholm) - no surprise here.

View attachment 65098

Hopefully, researchers in Jörg Conradt’s Neuro Computing Systems lab, which moved from Munich (TUM) to Stockholm (KTH), will give Akida another chance one of these days (after the not overly glorious assessment by two KTH Master’s students in their degree project Neuromorphic Medical Image Analysis at the Edge, which was shared here before: https://www.diva-portal.org/smash/get/diva2:1779206/FULLTEXT01.pdf), trusting the positive feedback from two more experienced researchers Jörg Conradt knows well, who have (or soon will have) first-hand experience with AKD1000:

When he was still at TUM, Jörg Conradt was the PhD supervisor of Cristian Axenie (now head of the SPICES lab at TH Nürnberg, whose team came runner-up in the 2023 tinyML Pedestrian Detection Hackathon utilising Akida) and co-authored a number of papers with him. Now in Stockholm, he is the PhD supervisor of Jens Egholm Pedersen, who is one of the co-organisers of the topic area Neuromorphic systems for space applications at the upcoming Telluride Neuromorphic Workshop, which will provide participants with neuromorphic hardware, including Akida. (I’d venture a guess that the name Jens on the slide refers to him.)



Let’s savour once again the above quote by Rasmus Lundqvist, who is a Senior Researcher in Autonomous Systems at RISE (Sweden’s state-owned research institute and innovation partner), with a focus on drones and innovative aerial mobility.


“And mark my words; there is no more suitable AI tech for low-power low-latency than SNNs and neuromorphic chips to run them.”


RISE’s ongoing project Visual Inspection of airspace for air traffic and SEcuRity (a collaboration with SAAB, https://www.saab.com/) sounds like a perfect use case for Akida:

View attachment 65118
View attachment 65119





View attachment 65121
We must be the elephant in the room 😉
 
  • Like
  • Haha
Reactions: 6 users
(quoting the Rob Telson / NVIDIA post above in full)
 
  • Haha
  • Like
Reactions: 15 users

7für7

Top 20
I think WE WILL BUY NVIDIA!!! THIS WILL MAKE A HUGE IMPACT ON THE INDUSTRY HAHAHAHAHAHA… HAHAHAHAHAAAAAAAAA mmmmUUUHHHaaaaahahahah

 
  • Haha
  • Like
Reactions: 14 users

Gazzafish

Regular
Why do I keep saying in my head …. Kodak: “We aren’t going to go digital because we are making so much money from film” ……….. Nvidia: “We aren’t going neuromorphic because we are making so much money from GPUs” ….. 🤔😁
 
  • Like
  • Haha
  • Fire
Reactions: 25 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
I wonder if we're involved in this? Is it mere coincidence that our latest patent covers anomaly detection? And when you click on the link at the very end of the article, it takes you to this Edge Impulse page mentioning "the latest neural accelerators" (see below).

At any rate, if I were Edge Impulse, I'd be trying to capitalise as much as possible on our relationship. If they bought a licence from us, couldn't they sell the low-power systems as a product? I suppose that's what I'd do, if I were smart.

And didn't we do something previously with them with FOMO and object detection?




Edge Impulse Brings Anomaly Detection to Any Edge Device

June 6, 2024
Edge Impulse, a platform for building, refining and deploying machine learning models to edge devices, has unveiled a novel technology for unlocking visual anomaly detection on any edge device, from NVIDIA GPUs to Arm MCUs, through the first model architecture of its kind: FOMO-AD (Faster Objects, More Objects – Anomaly Detection).
The demand for edge-capable AI software has increased as a path to innovating factories and production lines, with on-device computing allowing faster access to critical data insights, low latency, and more robust security and privacy compliance.
Visual anomaly detection in particular is an important use case for industrial AI, but is not widely used as it requires creating a library of known anomalous samples to train the model to spot deviations in industrial environments. Companies cannot collect real-world samples for every anomaly, especially for unanticipated defects, limiting detection capabilities.
Increased Productivity of Visual Inspection
Edge Impulse’s FOMO-AD architecture, two years in development, offers the first widely accessible platform for visual anomaly detection on any edge device, from GPUs to MCUs. It is also the first scalable system capable of training models on an optimal state to detect and catalog anything outside that baseline as an anomaly in video and image data. This dramatically increases the productivity of visual inspection systems that will no longer have to be manually trained on anomalous samples before they can start generating real-time insights on-device.
“Virtually every industrial customer that wants to deploy computer vision really needs to know when something out of the ordinary happens,” said Jan Jongboom, co-founder and CTO at Edge Impulse. “Traditionally that’s been challenging with machine learning, as classification algorithms need examples of every potential fault state. FOMO-AD uniquely allows customers to build machine learning models by only providing ‘normal’ data.”
Most industrial camera systems capable of computer vision are powered by GPUs and CPUs, with a high install cost that requires wiring and a power-hungry connection to mains electricity. Recent advancements from top-of-the-line silicon manufacturers, and novel edge model architectures from companies like Edge Impulse, enable computer vision AI models to operate in either high- or low-power systems, giving businesses more choice. The benefits of low-power systems include the possibility of building battery-powered visual inspection systems, and lower production costs from using cost-effective hardware that can reduce the overall product form factor.
In recent months, Edge Impulse has been testing FOMO-AD with customers, achieving proven results in industrial environments when proactively detecting irregularities in multiple production scenarios. Use of FOMO-AD has led to marked improvements in machine performance and production line efficiencies for customers.
There are many manufacturing use cases for visual anomaly detection, including:
Industrial: Production line inspection, quality control monitoring, defect detection
Automotive: Part assembly quality control, crack detection, leak detection, EV battery inspection, painting and surface defect detection
Silicon: IC inspection, PCB defect detection, soldering inspection
Medical: Medical device inspection, pill inspection, vial contamination inspection, seal inspection
For more information: www.edgeimpulse.com
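
As a rough illustration of the "train on normal data only" idea from the Jongboom quote above (this is not Edge Impulse's actual FOMO-AD architecture, whose internals are not public; it is just a generic one-class sketch using scikit-learn's IsolationForest on crude image features):

Code:
import numpy as np
from sklearn.ensemble import IsolationForest

# Stand-in feature extractor: a real pipeline would use embeddings from a
# small vision backbone; here we simply downsample and flatten the image.
def extract_features(image: np.ndarray) -> np.ndarray:
    small = image[::8, ::8]
    return small.reshape(-1).astype(np.float32) / 255.0

# 1) Collect features from "normal" (defect-free) images only
rng = np.random.default_rng(0)
normal_images = [rng.integers(100, 130, size=(96, 96), dtype=np.uint8) for _ in range(200)]
X_normal = np.stack([extract_features(img) for img in normal_images])

# 2) Fit a one-class model on the normal baseline; no anomalous samples needed
detector = IsolationForest(contamination="auto", random_state=0).fit(X_normal)

# 3) At inference time, anything deviating from the learned baseline gets a
#    low score and can be flagged as an anomaly
test_image = rng.integers(100, 130, size=(96, 96), dtype=np.uint8)
test_image[40:56, 40:56] = 255  # simulate a bright surface defect
score = detector.decision_function(extract_features(test_image)[None, :])[0]
print("anomaly" if score < 0 else "normal", f"(score={score:.3f})")

The point mirrors Jongboom's comment: only "normal" data is ever provided at training time, the model learns that baseline, and everything outside it is scored as anomalous.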



Here's what the link shows (screenshot attached).
 
  • Like
  • Fire
Reactions: 36 users

IloveLamp

Top 20
  • Like
  • Haha
  • Love
Reactions: 27 users

7für7

Top 20
(quoting Bravo's Edge Impulse post above in full)
We are wondering if you START TO RUN AGAIN SOON!!!!! Ma’am!!

By the way, we are wondering as well whether we are integrated into so many other things 😂👌
 
  • Haha
  • Like
Reactions: 3 users

IloveLamp

Top 20

At MediaTek, we are building a future of ubiquitous AI. This includes Generative AI that can be processed "at the edge".

Instead of relying on the cloud and risking unguaranteed internet connectivity, or expensive services that can see and even take control of your data, the generative content is processed right inside your smartphone, tablet, smart TV, or vehicle.

 
  • Like
  • Fire
  • Thinking
Reactions: 28 users

IloveLamp

Top 20
 

Attachments

  • Yu_EventPS_Real-Time_Photometric_Stereo_Using_an_Event_Camera_CVPR_2024_paper.pdf
    2.1 MB · Views: 148
  • Like
  • Love
  • Fire
Reactions: 26 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers ,

Good to see Accenture ticking away nicely.

😃.

For those new to BrainChip... Accenture has, from memory, three or four granted patents which include our tech.

Sweèeeet.



Regards,
Esq.
 
  • Like
  • Love
  • Fire
Reactions: 47 users

Frangipani

Top 20
Baya Systems, the company our former CMO Nandan Nayampally now works for as CCO, emerged from stealth mode today.

Chairman of the Board is Tenstorrent’s CEO Jim Keller.




Baya Systems emerges from stealth with chiplet interconnect

Technology News | June 20, 2024
By Nick Flaherty

Baya Systems has emerged from stealth with an interconnect tool for chiplet and complex system on chip (SoC) designs.

“What we feel we bring to the table is a more software driven development process,” Nandan Nayampally, chief commercial officer of Baya Systems tells eeNews Europe. “We consider this to be the first really focussed fabric for AI.”

The company has executives from Intel, Netspeed Systems, ARM, Brainchip, AMD and Nuvia and is chaired by microprocessor expert Jim Keller, who is also CEO of AI chip developer Tenstorrent. It is backed by Matrix and Intel Capital.

“We focus on the data driven generation of IP with both static and dynamic microarchitectural analysis at runtime,” said Nayampally. “When you do complex systems, the devil is in the detail and system fabric is not that easy.”

The WeaveIP provides components to build a unified fabric that has an efficient, scalable transport architecture that maximizes performance and throughput, while minimizing latency, silicon footprint and power. It supports standard protocols such as CHI, ACE5-Lite and AXI5, and is extendable to others including CXL.

The WeaverPro tool supports the SoC designer from initial specification all the way to post-silicon tuning. It uses the WeaveIP to produce a unified mesh fabric, generating layer protocols on top of the fabric without the need for gaskets that translate between protocols, avoiding the bottlenecks that come with them and slow down performance.

“The transport is common so you can customise that for the QoS or debug protocols and build protocols on top.”

This comes from the software-driven analysis of the design that can configure up to eight virtual channels across the chip at runtime.



“What we do with our software is very efficient cache memory analysis, get accurate partitioning and caching, and a fabric component that generates a correct by construction physical design. We have two main flavours, where you can define your own protocol and one with multicast for AI designs,” he said.

“We can optimise the bandwidth and reduce the wires and logic to use those wires efficiently for optimising for latency and bandwidth, with up to 3GHz in a 4nm process.”

The fabric is customisable at runtime with APIs so you can tune the parameters for efficiency. “In general the flexibility we have put in helps overall with the reduction in silicon footprint and power and that comes from having an accurate understanding of the data movement,” he said.

The fabric is designed for inside the chip and the chiplet, and Baya will work with interconnect schemes such as Eliyan’s Bunch of Wires and UCIe.

“We stop at the die boundaries but what we end up doing outside of that is the software understands chiplet boundaries and can optimise across chiplet boundaries. So it’s a chiplet-ready network, as various things need to be built into the network to make it easier to use in chiplets,” he said.

“Our first partner is Blue Cheetah. They do the link layers, and once we understand their wires we can optimise and allow API tuning in the future as well,” said Nayampally.

“The semiconductor industry is at an inflection point in how to overcome the widening gap between memory performance and the processing needs of AI,” said Dr. Sailesh Kumar, CEO at Baya Systems.

“These challenges are overwhelming the industry with design complexity, energy costs, and systems that are obsolete by the time they hit the market. Baya Systems is resolute in delivering foundational software, an industry-first, grounds-up fabric solution for future-proof multi-cluster and multi-chiplet designs, and a methodology that takes out the guesswork. I firmly believe that Baya unlocks the merchant chiplet market that is expected to grow to $107 billion by 2033.”

www.bayasystems.com
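
To make the "software-driven" claim a little more concrete, here is a toy sketch of what architecture exploration from a traffic specification might look like. It is purely illustrative and is not Baya's WeaverPro tool or any real API; the traffic-spec format, the XY-routed 2x2 mesh, the link capacity and all numbers are invented for the example.

Code:
# Hypothetical traffic specification: (source tile, destination tile, GB/s).
# In a real flow this would come from workload traces or a spec file.
TRAFFIC = [
    ("cpu0", "l2cache", 32.0),
    ("cpu1", "l2cache", 32.0),
    ("npu", "dram", 64.0),
    ("l2cache", "dram", 40.0),
]

# Invented 2x2 mesh placement of tiles and a per-link capacity in GB/s.
PLACEMENT = {"cpu0": (0, 0), "cpu1": (0, 1), "npu": (1, 0), "l2cache": (1, 1), "dram": (1, 1)}
LINK_CAPACITY = 64.0

def xy_route(src, dst):
    """Return the list of mesh links visited by simple X-then-Y routing."""
    (sx, sy), (dx, dy) = PLACEMENT[src], PLACEMENT[dst]
    hops, x, y = [], sx, sy
    while x != dx:
        nx = x + (1 if dx > x else -1)
        hops.append(((x, y), (nx, y)))
        x = nx
    while y != dy:
        ny = y + (1 if dy > y else -1)
        hops.append(((x, y), (x, ny)))
        y = ny
    return hops

# Accumulate offered bandwidth per link and report any that exceed capacity.
load = {}
for src, dst, gbps in TRAFFIC:
    for link in xy_route(src, dst):
        load[link] = load.get(link, 0.0) + gbps

for link, gbps in sorted(load.items()):
    status = "OVERLOADED" if gbps > LINK_CAPACITY else "ok"
    print(f"link {link[0]}->{link[1]}: {gbps:.1f} GB/s ({status})")

Real fabric IP obviously does far more (coherency, virtual channels, QoS, physical awareness), but the basic loop of feeding in a traffic spec, routing it over a candidate topology and checking where it breaks is the kind of analysis the article appears to describe being automated and fed back into the generated design.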









Baya Systems Introduces New Technology to Transform and Accelerate Intelligent Computing


Software and IP addresses system design complexity, performance, scalability and Time to Market of SoCs and chiplets for emerging applications.

Santa Clara, CALIF. – June 20, 2024 – Baya Systems today emerged from stealth mode to announce its software-driven IP technology portfolio designed to accelerate complex single-die and multi-die SoC designs. These innovations bolster the emerging chiplet economy and enable unprecedented scale for large-scale compute and AI processing.

Baya Systems technology simplifies the development process and reduces the risk, empowering designers to rapidly analyze, develop, optimize and deploy these complex systems. This enables highly energy-efficient data movement in single-die designs that can support over 4 terabyte/second throughput in a complex CPU cluster, culminating in multi-petabyte/second throughput in multi-chiplet designs for high-end AI installations.

With the exponential growth of computing requirements for artificial intelligence, best-in-class silicon vendors have consistently tried to scale performance efficiently by integrating various processors, including CPUs, GPUs, and neural network accelerators, into intelligent compute systems. This has led to a substantial challenge of efficient data movement across these different processors, and increasingly complex system and application software development. Baya Systems focuses on delivering hyper-efficient data movement that can be customized with efficient hardware-based coherency, correctness, and robustness to accelerate these platforms for applications across industries such as AI acceleration, data center, networking infrastructure, automotive and IoT.

“The rapid scaling in compute needed to support AI is bottlenecked by scaling silicon, memory, storage, and the increasingly huge amounts of data; requiring increasingly complex SoCs and chiplets,” said Kevin Krewell, Principal Analyst at TIRIAS Research. “The industry desperately needs a holistic way to design, analyze, and build intelligent fabrics to address this, and Baya seems to have the right ingredients to really drive the market forward.”

Baya Systems tackles the challenges of system design complexity, performance guarantees, high costs, and shrinking market windows in the SoC and chiplet industries. Its WeaverPro™ software platform supports the SoC designer from initial specification all the way to post-silicon tuning. Its WeaveIP™ provides components to build a unified fabric that has an extremely efficient, scalable transport architecture that maximizes performance and throughput, while minimizing latency, silicon footprint and power. Combined with advanced features for reliability and safety, this empowers designers to analyze, architect, customize, optimize and deploy complex SoCs and chiplets.

Baya’s solution is unique for the following reasons:
  • Software-driven architecture exploration helps optimize designs to achieve performance guarantees, based on a built-in simulator.
  • Engine to generate representative workloads from traffic specification.
  • Best-in-class, flexible network that can achieve 3GHz in a 4-nanometer process technology.
  • Algorithmic optimization that supports reuse and minimizes silicon and power footprints without compromising performance.
  • Industry’s first IP to offer multi-level cache coherency for single/multi-die systems, radically reducing costs of coherency across these large-scale systems.
  • Customizable protocol and multicast capabilities for advanced AI and CPU acceleration that support petabyte-level throughput.
  • Correct-by-construction design generation that radically reduces risk of failure.
  • WeaveIP supports standard protocols such as CHI, ACE5-Lite and AXI5, and is extendable to others including CXL.
  • Physically aware flow with modularity and tiling support for ease of implementation.

“The semiconductor industry is at an inflection point in how to overcome the widening gap between memory performance and the processing needs of AI,” said Dr. Sailesh Kumar, CEO at Baya Systems. “These challenges are overwhelming the industry with design complexity, energy costs, and systems that are obsolete by the time they hit the market. Baya Systems is resolute in delivering foundational software, an industry-first, grounds-up fabric solution for future-proof multi-cluster and multi-chiplet designs, and a methodology that takes out the guesswork. I firmly believe that Baya unlocks the merchant chiplet market that is expected to grow to $107 billion by 2033.”

Baya Systems was founded by Kumar, an ex-Intel Fellow and former founder of Netspeed Systems; Dr. Eric Norige, and Joji Philip, who were also key contributors at Netspeed Systems. It is backed by leading investors Matrix and Intel Capital, and is led by Silicon Valley semiconductor veterans with extensive experience in processing and systems, who have driven important initiatives at AMD, ARM, Apple, Intel, Meta, and other leading processor companies.

“When we invest, we look for key market gaps, disruptive technologies that address them, and teams that have a compelling vision and execution muscle and the hunger to achieve it,” said Stan Reiss, general partner, Matrix Partners. “Baya tops all of them for a market that is desperate for solving the scale and chiplet problem with a technology that not just fills the gap but unleashes disruptive innovation, and a proven team that has had a consistent out-sized success in entrepreneurial and high-growth settings.”

The company makes its public debut at DAC 2024, Booth No. 2446, at Moscone West in San Francisco June 23-27.

About Baya Systems
Baya Systems is accelerating the next wave of foundational chiplet-based, high-performance and modular semiconductor systems technologies to accelerate intelligent compute everywhere. Baya Systems was named for the Baya bird, aka the weaver, renowned for constructing cohesive nests from various materials. This approach mirrors Baya Systems’ integrated and efficient solutions from diverse components, and its mission to allow best-of-breed compute, communication and I/O components to be used together with the promise of improving performance, yield, reusability/composability and cost of development. Baya Systems is backed by leading investors Matrix Partners and Intel Capital. For more information visit https://bayasystems.com.
 
  • Like
  • Thinking
  • Sad
Reactions: 10 users

IloveLamp

Top 20
 
  • Like
  • Fire
Reactions: 9 users