BRN Discussion Ongoing

Boab

I wish I could paint like Vincent
The interesting bit
The passenger display supports gaming and streaming video via the internet. It can be operated on the run, but only for viewing by the front seat passenger. A camera continuously monitors the eye movement of the driver; when he or she attempts to view the passenger display, the content is hidden.
If you were not happy with your passenger you could easily make them very annoyed😂
 
  • Like
  • Haha
Reactions: 19 users

Tothemoon24

Top 20
  • Like
  • Fire
  • Love
Reactions: 5 users

zeeb0t

Administrator
Staff member
Hi all

I'd like to address some recent complaints and concerns regarding off-topic moderation.

I understand that friendly banter can be enjoyable, but it's important to keep in mind that our forum's rules exist to ensure that discussions remain focused and informative for everyone. I enforce these rules evenly for all members, even for those who are regulars or have good intentions, to ensure a fair and consistent experience for everyone. I do have an off-topic section called The Lounge, where members are free to chat about anything they like. Please feel free to continue your friendly banter there. Here's the link: https://thestockexchange.com.au/forums/the-lounge-where-anything-goes.15/. Thank you for your understanding.
 
  • Like
  • Love
  • Fire
Reactions: 47 users

TheDon

Regular
Hi all

I'd like to address some recent complaints and concerns regarding off-topic moderation.

I understand that friendly banter can be enjoyable, but it's important to keep in mind that our forum's rules exist to ensure that discussions remain focused and informative for everyone. I enforce these rules evenly for all members, even for those who are regulars or have good intentions, to ensure a fair and consistent experience for everyone. I do have an off-topic section called The Lounge, where members are free to chat about anything they like. Please feel free to continue your friendly banter there. Here's the link: https://thestockexchange.com.au/forums/the-lounge-where-anything-goes.15/. Thank you for your understanding.
Hi zeeb0t, my post was removed and I got a warning about it. I'm only a simple guy and not the sharpest tool in the shed. Please explain it to me in a simple way that I can understand.
Thank you

TheDon
 

zeeb0t

Administrator
Staff member
Hi zeeb0t, my post was removed and I got a warning about it. I'm only a simple guy and not the sharpest tool in the shed. Please explain it to me in a simple way that I can understand.
Thank you

TheDon
Please read the message the system sent you. It seems pretty straightforward, but if there is any concern or confusion, please send me a message.
 
  • Like
Reactions: 4 users

8C674926-CECE-4AB4-BF76-F716A0363AB2.jpeg
 
  • Like
  • Fire
  • Love
Reactions: 35 users

View attachment 33859

NVISO seems to be quite a bit ahead of these guys.
 
  • Like
Reactions: 1 users

Tothemoon24

Top 20

Announcing Support for Seeed Studio SenseCAP A1101 LoRaWAN Vision AI Sensor​

TINYML, MACHINE LEARNING
Edge Impulse
8 April 2023
Frame_1554_4_406f2d6f9e.png


Edge Impulse is partnering with Seeed Studio to launch official support for the SenseCAP A1101 LoRaWAN Vision AI Sensor, enabling users to use it to acquire, develop, and deploy vision-based ML applications with Edge Impulse Studio or the new Edge Impulse Python SDK.
The SenseCAP A1101 is a smart image sensor supporting a variety of AI models such as image recognition, people counting, target detection, and meter recognition. Equipped with an IP66 enclosure and industrial-grade design, it can be deployed in challenging conditions such as places with extreme temperatures or difficult accessibility. It combines tinyML and LoRaWAN® to enable local inferencing and long-range transmission, the two major needs of outdoor use cases.
What's more, the sensor is battery-powered. This means that data can be collected in remote locations and transmitted over long distances without requiring access to an AC power source. This makes it ideal for remote monitoring applications, such as agricultural automation or smart city projects, where power sources might not be easily accessible. The product is designed to be deployed widely in distributed monitoring systems without the user needing to worry about maintaining power sources across multiple sites. It is open for customization to meet your unique requirements, including camera, enclosure, transmission protocols, and more. You can also use the SenseCAP app for quick configuration in just three steps — scan, configure, done. Easy peasy.
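The local-inference-plus-LoRaWAN combination described above works because only a tiny inference result, not the image, goes over the air. A minimal Python sketch of that idea (the 4-byte field layout here is a made-up example, not the SenseCAP A1101's actual uplink format):

```python
import struct

def pack_uplink(person_count: int, confidence: float, battery_pct: int) -> bytes:
    """Pack a local inference result into a compact uplink payload.

    Sending a few-byte summary instead of a raw image is what makes
    tinyML inference plus long-range LoRa transmission practical.
    """
    # 1 byte count, 2 bytes confidence in basis points, 1 byte battery %
    return struct.pack(">BHB", person_count, int(confidence * 10000), battery_pct)

def unpack_uplink(payload: bytes):
    """Decode the payload on the receiving (network server) side."""
    count, conf_bp, battery = struct.unpack(">BHB", payload)
    return count, conf_bp / 10000, battery

payload = pack_uplink(person_count=3, confidence=0.87, battery_pct=92)
print(len(payload))           # 4 bytes on air instead of a full JPEG
print(unpack_uplink(payload))
```

The payload stays comfortably under typical LoRaWAN size limits, which is the point: the image never leaves the device.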

How do I get started?

You can purchase the SenseCAP A1101 here. Then, follow the SenseCAP A1101/Edge Impulse documentation page for instructions on how to quickly develop and deploy your first vision-based ML application!
SenseCAP A1101 with Edge Impulse in action
After connecting the SenseCAP A1101 with Edge Impulse Studio, the board will be listed as follows:
 
  • Like
Reactions: 6 users

Sirod69

bavarian girl ;-)
Imaging and Machine Vision Europe

In our latest article, Luca Verre, Co-founder and CEO of PROPHESEE, highlights how event-based vision is set to revolutionise mobile photography.

"By combining an event-based sensor with a frame-based system, effective deblurring can be achieved, making photography and video capture more precise and lifelike."

 
  • Like
  • Love
  • Fire
Reactions: 21 users

Tothemoon24

Top 20
Great read.
I couldn't copy the original article in its entirety; the link provides the full read 🐰





Montaseri said one factor driving intelligence at the edge is that connectivity is not a hundred percent available, which means delays getting information back from the cloud. Another is that microprocessors and microcontrollers are becoming more powerful. This enables the necessary data management at the edge, he said, and allows devices to quickly analyze and make sense of data.

ITTIA is focused on providing the software necessary for edge data streaming, analysis and management for embedded systems and IoT devices – robust data management is foundational for doing AI and machine learning in embedded systems at the edge, Montaseri said.

diagram

ITTIA provides software for edge data streaming, analysis and management for embedded and IoT for uses such as transportation when it's not feasible to depend on a central cloud. (ITTIA)
Reliability is critical for smart edge devices, he added, whether it’s for industrial robotics, medical or transportation applications. “You want to make sure that they don't go down.”

What’s also becoming apparent is that not all edges are created equal – some will be smarter sooner than others depending on the use case and industry, such as robotics and medical devices. Montaseri said today’s embedded systems that gain intelligence through IoT deployments will be doing the jobs needed for the next generation of computing. “The nature of everything is changing,” he said. “We are seeing more security, more safety, and more functionality, like the ingestion rate and the query rate. Our focus is safety, security, and reliability.”


Not all edges are created equal

What makes the smart edge murky is the definition of edge, which means different things to different people, Nandan Nayampally, CMO at BrainChip, said in an interview with Fierce Electronics. He was previously at ARM for more than 15 years when the edge was viewed as primarily sensor driven. “That's how IoT kind of cropped up,” he said. “IoT is a sensor plus connectivity plus processing.” While a Dell or an Intel might think of the smart edge as another giant box that’s now smaller, the right starting point to him is IoT with AI.

AI on the edge is a step forward from a sensor just doing one function, with devices now having more storage, memory, and processing power. Nayampally said this battle between cloud and the edge has been going on for a while, going back to the days of a terminal connected to a mainframe before the move to a client/server model. “What you realize is however much we think that latency to cloud or connectivity to cloud is guaranteed, and the bandwidth assured, it's never going to be the case,” he said. “You need that intelligence and computational power at the edge.”

diagram of chip

BrainChip's Akida processor can learn at the edge to address security and privacy while limiting network congestion. (BrainChip)
Having the smarts at the edge is beneficial for preventative maintenance in factories and patient monitoring, Nayampally said, both in terms of latency and privacy. “Anytime you send raw data or sensitive data out, you are obviously going to have challenges.” Privacy and security have become especially important to the general public, he added. BrainChip was started with the idea that edge computing was necessary and that any approach to AI at the edge had to be different from the cloud. “The cloud kind of assumes almost infinite resources and infinite compute.”

While compute resources at the edge are rather finite, more AI is possible due to advances in low-power hardware, including memory and systems on chip (SoC), which means not all training and machine learning need be shipped back to the cloud. Nayampally said it’s a matter of scaling, with neuromorphic computing offering inspiration for how to deliver low-power intelligence at the edge. “Let's try to emulate the efficiency of it and start from there.”

Machine learning will increasingly happen at the edge both because of inherent capability but also out of necessity. Nayampally said some applications that require a real-time response can’t afford the delay between the edge and the cloud, or the power. “Any time you use radio and connectivity, especially to cloud, that burns a lot of power,” he said. “Radios are the most expensive parts of devices.” Smaller, more cost-effective devices may not be able to afford to have connectivity and need to do more compute locally.

Nayampally said the neuromorphic nature of BrainChip’s Akida platform allows it to learn at the edge, which also addresses security and privacy and reduces network congestion – today’s autonomous vehicles can generate a terabyte of data per day, he noted, so it makes sense to be selective about how much data needs to travel the network.

For the smart edge, simple is better and BrainChip’s processor does that from a computational standpoint, as well as from a development and deployment standpoint, Nayampally said. “It's almost like a self-contained processor.” It is neuromorphic and event driven, so it only processes data when needed, and only communicates when needed, he said.
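The event-driven principle described above can be illustrated with a toy sketch (plain Python, not BrainChip's actual API or a neuromorphic model): downstream compute and communication happen only when the input changes by more than a threshold, so a mostly-static sensor stream triggers almost no work.

```python
def event_driven_filter(samples, threshold):
    """Yield (index, value) only when the input changes enough to matter.

    Mimics the event-driven idea: processing and communication occur
    on meaningful change, not on every sample.
    """
    last = None
    for i, value in enumerate(samples):
        if last is None or abs(value - last) >= threshold:
            last = value
            yield i, value

# A mostly-static stream: only 3 of 8 samples trigger any work.
stream = [20.0, 20.1, 20.0, 25.0, 25.1, 25.0, 20.0, 20.1]
events = list(event_driven_filter(stream, threshold=1.0))
print(events)  # [(0, 20.0), (3, 25.0), (6, 20.0)]
```

The power saving falls out directly: the radio and the processor stay idle for the five samples that carry no new information.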

Being event driven is an excellent example of how basic machine learning may express itself in a single device for the user or the environment, which is what Synaptics is calling the “edge of the edge,” said Elad Baram, director of product marketing for low-power edge AI. The company has a portfolio of low-power AI offerings operating at milliwatt scale, enabling machine learning with minimal processing and minimal memory – an initiative in line with the philosophy of the tinyML Foundation, he said. While an ADAS uses gigabytes of memory, Synaptics is using megabytes of memory.

Baram’s expertise is in computer vision, and Synaptics sees a lot of potential around any kind of sensing and doing the compute right next to where the data is coming from. Moving data requires power and increases latency and creates privacy issues. Organizations like tinyML are an indicator of how smart the edge could get. “We are at an inflection point within this year or next year,” he said. “This inflection point is where it's booming.”

diagram of a chip

Synaptics has a context aware Smart Home SoC with an AI accelerator for 'edge at the edge'. (Synaptics)
Context-aware edge remains application specific

Baram said just as the GPU boom occurred in the cloud five years ago, the same evolution is happening with TinyML. Workloads at the edge that previously required an Nvidia processor, such as detection and recognition, can now be done on a microcontroller. “There is a natural progression.”

Sound event detection is already relatively mature, Baram said, starting with Alexa and Siri and progressing to detecting glass breaking or a baby crying. “We are seeing a lot of use cases in smart home and home security around the audio space.” In the vision space, he said, Synaptics is supporting “context awareness” for laptops so they can detect whether a user is present or not, and to ensure privacy, any imaging stays on the on-camera module – it’s never stored on the computer’s main processor.

Power, of course, is important for context awareness applications, Baram said. “You don't want the battery to drain too fast.” But having this awareness actually extends battery life, he said, because now the system understands if the user is engaged with the device and its content and can respond accordingly. “You approach the laptop, it's turned on and you log in and it's like magic. The machine just knows what you want to do, what you are doing, and it can adapt itself.”

Similarly, an HVAC system could adapt based on the number of occupants in a room, or a dishwasher could let you know how full it is. Baram said a fridge could be smart enough so you can know whether or not you need to pick up milk on the way home. Aside from smart laptops and home appliances, there are many safety applications in construction, manufacturing and agriculture that could benefit from context awareness. “The amount of use cases out there in the world is pretty amazing.”

Baram said the hardware is coming together to enable the smart edge, including machine learning, while algorithms and networking are also improving significantly. “The neural networks are way more efficient than they were a decade ago.” As compute capabilities advance, devices will be able to have more general purposes, but for now processor and algorithm constraints mean smart edge devices will have targeted applications.

In the long run, making the edge smarter is ultimately contingent on successfully pulling these elements together, which requires an AI framework, Baram said, such as TensorFlow, an open-source ML platform. “Those frameworks make it much easier to deploy a neural network into edge devices.”
 
Last edited:
  • Like
  • Fire
Reactions: 26 users

Tothemoon24

Top 20
The Next SiFive Revolution

As SiFive announces its long-term plans to meet the rapidly changing needs of the automotive markets, this blog explores the many advantages the SiFive Automotive™ portfolio brings across a wide range of current and future applications, including electrification, cockpit, ADAS, safety, and others. Our power-efficient, flexible, and high-performance cores are ideally suited for the most critical applications, and are, of course, supported by the global RISC-V ecosystem.

Why SiFive?

Founded by the same engineers who invented the RISC-V ISA, SiFive is uniquely positioned to extend and broaden the reach of RISC-V-based processors in new and growing markets. The rate of SiFive innovation and product development in the last few years has been astounding. SiFive has grown rapidly, and today offers not only the most comprehensive portfolio of RISC-V IP, but is also rapidly approaching high-end performance that matches or exceeds that of any CPU IP vendor. SiFive’s growing portfolio of IP, ranging from small 32-bit real-time CPUs all the way up to high-performance 64-bit application processors, singles out SiFive as the only RISC-V IP supplier that can cater to the entire spectrum of automotive compute requirements as a one-stop shop. From our CEO, who grew Qualcomm’s automotive division, to our many automotive SoC experts, we are a team that uniquely understands the needs of today’s automotive manufacturer.

A Portfolio Approach

SiFive’s roadmap delivers a portfolio of products that meets the requirements of the vast majority of automotive CPU applications, including MCUs, MPUs and, soon, SoCs. We often read about macro trends in automotive electronic architecture design, including centralization of computing, increased compute at the sensing edge, increased SW complexity due to mixed criticality support, and a shift from domain to zonal controllers, to list but a few of today’s trends. The implications are a need for newer, more capable ECUs, and a higher degree of mixed criticality functional integration into fewer devices, at the price of increased software complexity. Despite the evolution in electronic architectures, the variety of automotive semiconductor products continues to remain broad, from small MCUs all the way to complex SoCs; however, there are significant commonalities across the many use cases including functional safety and security. Virtually every electronic device in a vehicle must comply with ISO26262, ISO21434, and additional local standards as applicable, such as UNECE WP.29 r155. SiFive Automotive products provide tailored levels of integrity, with area optimized products for both ASIL B and ASIL D, while in-field configurable integrity levels can be enabled through the available split-lock variants.

Introducing the SiFive Automotive Family of Products

SiFive has launched its new product portfolio and the first three SiFive Automotive product series, each with area- and performance-optimized variants for their intended use. SiFive is the only RISC-V IP supplier to offer multiple processor series that optimally meet automotive designers’ exact compute, integrity, and security requirements across a broad range of computing applications, from MCUs to complex SoCs. All of the product variants offer best-in-class levels of functional safety and security capabilities, with a high degree of configurability to different integrity levels. The E6-A, a 32-bit real-time core, is the first commercially available offering, with broad availability later in 2022. By the second half of 2023, two more product series will be added to the portfolio, the X280-A and S7-A. The X280-A is a vector-capable processor ideally suited for edge sensing in ADAS, sensor fusion applications, and any ML/AI accelerated functionality in the vehicle. The S7-A is an optimized 64-bit CPU tailored for safety islands as often used in ADAS, gateways, and domain controllers. A standout feature of the S7-A is native 64-bit support, meaning that the safety island can now access the same memory space as the application CPUs.

Later in 2023, a new high-end processor, configurable to up to 16-cores, will be added to the portfolio. Customers should expect best-in-class performance and ASIL support and a tailored automotive alternative for high performance CPU IP. And much more is on the way in the coming months as the roadmap expands to meet market needs, and our SW, tools and OS partners add new levels of support.

All of the above would not have been possible without SiFive strategically adding top talent from the industry, both in CPU design and the automotive market, including our CEO, who managed the growth of Qualcomm’s automotive business, and a renowned safety expert and co-founder of Yogitech, among others. The automotive team benefits from investments in processor design that are foundational to the underlying technology, while also leveraging expertise in automotive R&D, functional safety, and security. Our team is ready to answer your automotive questions.

Ecosystem and Next Steps

The automotive semiconductor market depends on advances from a broad range of SW, tool, and OS vendors, and their support for SiFive and the RISC-V ecosystem is growing at a phenomenal pace. The ecosystem supporting RISC-V today is fast maturing, with many announcements in the pipeline.

SiFive is collaborating with a wide range of automotive technology companies to create a robust and vibrant RISC-V community.

SiFive’s first automotive products are arriving in the market at a time of rapid growth and change and the company brings a modern architecture, supported by the broad global RISC-V ecosystem, with unique solutions for some of automotive’s most critical technologies.
 
  • Like
  • Fire
Reactions: 17 users

IloveLamp

Top 20
If you want to know more about RISC-V / SiFive.....




This bit was interesting too

Screenshot_20230408_074310_Chrome.jpg
Screenshot_20230408_075011_Chrome.jpg
 
Last edited:
  • Like
  • Fire
  • Thinking
Reactions: 22 users

cosors

👀
Hi all

I'd like to address some recent complaints and concerns regarding off-topic moderation.

I understand that friendly banter can be enjoyable, but it's important to keep in mind that our forum's rules exist to ensure that discussions remain focused and informative for everyone. I enforce these rules evenly for all members, even for those who are regulars or have good intentions, to ensure a fair and consistent experience for everyone. I do have an off-topic section called The Lounge, where members are free to chat about anything they like. Please feel free to continue your friendly banter there. Here's the link: https://thestockexchange.com.au/forums/the-lounge-where-anything-goes.15/. Thank you for your understanding.
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
 
Last edited:
  • Like
  • Love
  • Haha
Reactions: 12 users
D

Deleted member 118

Guest
  • Like
Reactions: 4 users

zeeb0t

Administrator
Staff member
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
That can of course work too.
 
  • Like
Reactions: 4 users

cosors

👀
That can of course work too.
Mostly it's about the people you already know. Many people feel 'safe' there. The lounge is nice but also 'cosmopolitan'. Maybe a place like your own bar could be the solution.
A reflection of the real environment.
 
Last edited:
  • Like
Reactions: 5 users
Around 120 million shares

About 3.38 million shares up for sale on CommSec. If something really goes BRN's way, well shit, I wouldn't be left holding any shorts...
I’d suspect most of those shorts are hedging positions by funds..
Most pros shorting individually wouldn’t hold BRN short borrow overnight as it always has a chance of a 50% move at the drop of one new IP deal or related Ann.
 
  • Like
  • Thinking
Reactions: 6 users
$3 SP = $5.4B MC. At a high PE of 100 that requires $54M NPAT, which grossed up for 30% ATO tax (/ 0.7) = $77.14M EBITDA, which at a 65% margin (/ 0.65) = $118.68M revenue.

To maintain around a $3 SP at a hot PE of 100 requires circa $120M revenue.

$5.4B MC / $120M revenue = 45x revenue multiple.

A Nasdaq listing could result in a PE of 150, similar to Nvidia, & would require $36M NPAT & $79.12M revenue.

$5.4B MC / $80M revenue = 67.5x revenue multiple.

PE100 & PE150 are quite high so would require high growth rate & high forward looking revenue.

So the question becomes, how long until BRN can generate $80-120M revenue?

Could be quite fast, such as via a 50c royalty from Qualcomm x 200M smartphones = $100M revenue.

Or slower, such as via Mercedes: premium cars have about 70 MCUs x 15% AI-equipped = 10 x 2M cars pa = 20M x $1 royalty = $20M revenue. If 50% of the 70 MCUs are AI-equipped in future due to extra sensors/lidar etc = 35 x 2M cars pa = 70M x $1 royalty = $70M revenue.

I allowed higher royalty for Mercedes as their products are more expensive & royalty was going to be higher for more expensive products.

Then there will also be washing machine, dryer, air conditioner etc. applications for MCUs as well. It will take a few years for mass adoption & there will be hundreds of millions of units per year. May start with 25-50M units in the first year x 50c royalty = $12.5-25M revenue & within 2-3 years reach $100M revenue.

Combination of the above should result in about $100M revenue by end of next year including more license agreements. Thus SP should maintain around $3 SP towards the end of 2024. But it will need to be at a high PE100-150.

However, BRN SP tends to spike due to market excitement, so it would not surprise me to see the SP at $3 on the back of a couple of big-name license agreements. If Qualcomm were to be announced it would create a frenzy & the $100M revenue would get priced in within 3-4 months of the announcement. A 50c SP x 6 run to a $3 SP would be highly likely, as we had a 39c SP x 6 run to a $2.34 intraday peak when Mercedes was announced without any revenue.

So $3 SP could come as early as next 6 months or as late as end of 2024. The earlier it gets to $3 SP without revenue the higher the probability of a 50% retrace to $1.50.

Happy Easter to all.
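The PE-to-revenue chain above can be written as a small helper (a back-of-envelope sketch using the post's assumed 30% tax and 65% EBITDA margin, not a valuation model):

```python
def required_revenue(market_cap, pe, tax_retention=0.7, ebitda_margin=0.65):
    """Back out the revenue needed to support a market cap at a given P/E.

    market_cap / pe        -> NPAT (net profit after tax)
    NPAT / tax_retention   -> EBITDA (grossing up for 30% tax)
    EBITDA / ebitda_margin -> revenue
    The retention and margin figures are the post's assumptions.
    """
    npat = market_cap / pe
    ebitda = npat / tax_retention
    return ebitda / ebitda_margin

mc = 5.4e9  # $3 share price at ~1.8B shares on issue
print(round(required_revenue(mc, pe=100) / 1e6, 2))  # 118.68 ($M)
print(round(required_revenue(mc, pe=150) / 1e6, 2))  # 79.12 ($M)
```

Both outputs match the figures quoted in the post, so the ~$120M and ~$80M revenue targets follow directly from the chosen PE and margin assumptions.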
Thanks for that nice share price breakdown on future revenue possibilities. Would love to see 3 bucks in the next year. And happy Easter 🐰 to you also..
 
  • Like
Reactions: 11 users

Steve10

Regular
Chart with BRN shorts & SP. Make your own decision on the effects.

There have been squeezes up & declines correlated to market sentiment as well, i.e. shorting during rising sentiment results in a squeeze up & shorting during declining sentiment results in a big decline.

Shorts act like amplifiers during rises & declines: the effect of the move up or down is magnified.


1680909094481.png
 
  • Like
  • Sad
  • Fire
Reactions: 29 users

JoMo68

Regular
Our Talga Bar works perfectly. I can only recommend opening The Brainchip Bar here. Maybe a task for Rise?

Not everyone wants to go to the other end of town to find their way to their neighborhood bar. Some just like to walk across the street.
Done - we now have our own bar! Come, relax, and enjoy the chat.
 
  • Like
  • Love
Reactions: 10 users