BRN Discussion Ongoing

Guzzi62

Regular
My uneducated thoughts...
Back on the shorters' radar, tax-loss selling happening, or underperformance shares are about to be given out again.
US bombing of Iran's nuclear sites is creating uncertainty on the markets.

Oil and gold are going up currently.

Oil supply might be getting tight, and gold is still considered a safe haven.

Speculative stocks like BRN are not popular during uncertain times.

So not really surprising that we drop today.
 
  • Like
  • Fire
Reactions: 9 users

MrNick

Regular
No stock has truly recovered since the Ukraine invasion, either.

Screenshot 2025-06-23 at 9.56.46 am.png
 
  • Like
Reactions: 1 users

7für7

Top 20
My uneducated thoughts...
Back on the shorters' radar, tax-loss selling happening, or underperformance shares are about to be given out again.

It’s not or, it’s and. We’ve long become a target of every possible negative force … an easy victim exposed to endless attacks. Honestly, what’s happened to the share price is just ridiculous.

And don’t come at me with the ‘war in Iran’ excuse. There’s war everywhere. First it was Iraq that was going to spark WW3, then Afghanistan, then Syria, then scattered conflicts in Africa, then tensions between Greece and Turkey, then China and Taiwan… and then came Ukraine.

Oh, and now it’s Iran. Of course… I forgot… they control the strait all the global shipping passes through, right? Yeah, there’s always a reason why things could escalate.

Maybe it’s finally time for things to change over there. I mean, the U.S. has plenty of experience with regime changes. That always goes well, doesn’t it? Sounds like a win already…

BUY THE F…IN DIPP …..WOHOOoo 🤡
 
  • Like
Reactions: 2 users

Slade

Top 20
It’s not or, it’s and. We’ve long become a target of every possible negative force … an easy victim exposed to endless attacks. Honestly, what’s happened to the share price is just ridiculous.

And don’t come at me with the ‘war in Iran’ excuse. There’s war everywhere. First it was Iraq that was going to spark WW3, then Afghanistan, then Syria, then scattered conflicts in Africa, then tensions between Greece and Turkey, then China and Taiwan… and then came Ukraine.

Oh, and now it’s Iran. Of course… I forgot… they control the strait all the global shipping passes through, right? Yeah, there’s always a reason why things could escalate.

Maybe it’s finally time for things to change over there. I mean, the U.S. has plenty of experience with regime changes. That always goes well, doesn’t it? Sounds like a win already…

BUY THE F…IN DIPP …..WOHOOoo 🤡
It’s the end of the financial year. Have a beer.
 
  • Haha
  • Like
Reactions: 5 users

FiveBucks

Regular
It’s the end of the financial year. Have a beer.

I need something harder than beer.
 
  • Like
  • Haha
Reactions: 5 users

7für7

Top 20
The stock market is collapsing, inflation’s rising…..Help me, decentralized XRP-ledger. You’re my only hope.

IMG_4729.jpeg
 
  • Haha
  • Like
Reactions: 4 users

miaeffect

Oat latte lover
HELL-----O????!!
I am stuck down here
james-franco-hello.gif
 
  • Haha
  • Like
  • Love
Reactions: 8 users

Yoda

Regular
Why is our product not selling? :(
 
  • Like
  • Sad
  • Thinking
Reactions: 4 users

7für7

Top 20
Why is our product not selling? :(

Aren't you that guy who's like 🤞 with Obi? Just saying, man…

PS: probably because they all know they can use it for free… “partnering is the key”.
 

Rach2512

Regular

In the comments section, sorry if already posted.

Screenshot_20250623_124723_Samsung Internet.jpg
 
  • Like
  • Fire
  • Wow
Reactions: 29 users

Baneino

Regular

In the comments section, sorry if already posted.

View attachment 87485
This is very positive news:
Akida 2 will be practically usable even before it is built into production devices.
It is a signal to the industry: "Start developing with Akida 2!"
It could lead to partnerships or pilot tests in the next few weeks.
 
  • Like
  • Fire
  • Love
Reactions: 26 users
I am surprised that no one has yet come forward to challenge Bravo’s claim that TSMC are currently working on their own concept AR glasses, which - she then speculated - would likely incorporate our technology.

TSMC (Taiwan Semiconductor Manufacturing Company) is the world’s largest semiconductor manufacturer and contract chipmaker. Their business model is that of a “dedicated foundry”, which means they only produce chips for other companies rather than design and manufacture their own chips.

Q1/2025 saw TSMC’s market share of the global pure-play wafer foundry business rise to a whopping 67.6%, while its nearest competitor Samsung came in as a distant second with a market share of only 7.7% (https://focustaiwan.tw/business/202506140017).

Fabless companies (that design chips but lack their own manufacturing facilities) such as Apple, NVIDIA, AMD, Qualcomm, Broadcom, MediaTek etc. find TSMC’s advanced and continuously innovative technologies, quality (i.e. superior yield rates) and reliability unmatched, and use TSMC as their foundry of choice, either exclusively or at least in part. Even select IDMs (Integrated Device Manufacturers that design and produce their own chips) like Intel have outsourced some of their chips to be manufactured by their competitor TSMC.

Incidentally, Google announced recently that they will switch to TSMC for their Tensor G5 chip - after Samsung manufactured the first four Tensor chip generations between 2021 and 2024 - which reportedly came as a total shock to executives at the South Korean foundry.

With TSMC’s strategic 2024 rebranding as Foundry 2.0, TSMC Chairman and CEO C.C. Wei redefined the foundry industry as encompassing not merely chip production but also advanced packaging technologies, testing, mask making etc.

However, it struck me as very odd and improbable when I read in Bravo’s posts that TSMC had allegedly ventured into the AR wearables product design business overnight.
So I had a closer look at what she referred to as “recent articles” that “discuss TSMC’s concept glasses”.

Liz Allan, the author of the quoted Semiconductor Engineering article who wrote “In terms of custom products, TSMC recently showed a concept for AR glasses”, appears to have either poorly worded what she wanted to express or misunderstood the content of the TSMC slide she used as Fig. 2 of her article.

I checked out her LinkedIn profile and was astonished to find out that someone with a background in “journalism, copywriting, creative writing, and editing” but no formal technical education or practical work experience in the semiconductor industry could actually land a job as technology editor (!) with Semiconductor Engineering. GenAI to the rescue?! Anyway, not exactly a reliable source, I’d say.


View attachment 87462
View attachment 87463

And contrary to what Bravo claimed, neither the article by SemiVision Research nor the Reddit post mention anything about TSMC’s alleged “concept AR glasses”. They simply highlight the important role TSMC will play in manufacturing next-Gen chips for future AR wearables thanks to the innovative and highly advanced silicon technologies the world’s leading foundry is able to offer its clients:

View attachment 87454 View attachment 87455

Anyway, it’s always advisable to check the original source: As referenced, the slide “Seamless, Immersive and Stylish AR - Enabled by Adoption of More Advanced Si Technologies” [Si = Silicon] was part of a TSMC presentation at the 2025 North America Technology Symposium.
You can find the recording here: https://www.tsmc.com/english/node/223

Click on the picture of Kevin Zhang to start his ~17 min “Semiconductor Market Outlook” presentation, which he begins by stating that he has been in the semiconductor industry for three decades, but has never felt more excited about its future. He then goes on to talk about the “market trend” and the impact that AI has had since 2024 by “rapidly shaping the landscape of semiconductor industry [sic]”, showing a slide titled “AI Fuels Exponential Data Center Growth - Strong AI Data Center Demand”.

For edge devices (smartphones, PCs and IoT), Zhang predicts mild growth in 2025.

Here are some of the slides from that April 2025 presentation:

View attachment 87458

View attachment 87461
View attachment 87457

View attachment 87459

View attachment 87456


It is in this market outlook context that the slide about the number and kinds of chips required for future AR glasses needs to be viewed.

The slide directly preceding it is that of a person wearing bulky XR Goggles:

View attachment 87460

Kevin Zhang comments (from 14:09 min):

"Can you wear this devices [sic] for 8 hours?

It's bulky, it's heavy. So I think we need [to] achieve a 10x improvement. Literally, we need a 10x improvement, in term of battery life, in term of the weight, in term of all the form factor we need a 10x improvement.

[At this point, the presentation moves to its last slide.]

To achieve that goal, to go really, to drive that device into so-called "seemless, immersive and stylish AR glasses" you need a lot of innovation. Underneath of this innovation powered by [??? logic??? unintelligible to me] from processor, energy-efficient processor to energy-efficient connectivity to all kind of sensor.
So again, this is another example to show we have so many opportunity ahead of us continue to drive the silicon application."



To sum it up:

In this presentation, TSMC let the world know that they are extremely confident the future will continue to be bright for them due to the projected stellar growth of the semiconductor market over the coming years, fuelled by the insatiable demand for advanced silicon technology chips across diverse applications, especially HPC (high-performance computing).

So, basically, a gold mine for the foreseeable future for TSMC as the world’s leading dedicated foundry.

It is in this “market trend” context that the slide depicting a pair of generic AR glasses was shown, to give the audience a better idea of how many different kinds of advanced technology chips will be required to power future "Seamless, Immersive and Stylish AR" glasses that will vastly improve on today's SWaP (size, weight and power) options.

At the same time, that presentation slide signals to companies developing such AR wearables that they can continue to count on TSMC’s expertise, innovation and reliability to turn their present and future concepts into reality.

What the slide didn’t do was reveal that TSMC had suddenly ventured into a totally new field of business, namely product design of wearables, and were now working on their own brand of AR glasses.
Besides, we can safely assume THAT unexpected disclosure would have been all over the (semiconductor industry) news and in addition would have been preceded by rumours and relevant job ads…

Maybe she should use her editing skills to change her Top Skills profile of "Creative Writing" to "Very Creative Writing". Just sayin. 😂

SC
 
  • Like
  • Haha
Reactions: 5 users

CHIPS

Regular
There is still hope! BrainChip is going to show it to them all!!!


Very Funny Puppy GIF
 
  • Haha
Reactions: 11 users

itsol4605

Regular
Nice !!

 
  • Like
  • Love
  • Fire
Reactions: 13 users

Frangipani

Top 20


View attachment 86785

At yesterday’s 2025 Andes RISC-V CON in Hsinchu, Edward Lien, our Regional Sales Manager in Taiwan, gave a presentation on “RISC-V and AI Acceleration” and how BrainChip’s IP fits into it:

View attachment 86786




View attachment 86787

Some more pictures featuring Edward Lien representing BrainChip at the recent 2025 Andes RISC-V CON in Hsinchu, Taiwan:


BD7747E6-5E3F-4237-89B4-C3A891889561.jpeg
EC1FA323-3059-4E9D-86F0-95A49F6B37F3.jpeg
40D3A1A0-4268-48A7-B33E-34CB8273811D.jpeg
DA2E9839-05E4-4C4D-8EA0-6350F62605D5.jpeg
AD9254CD-4656-484D-8ADF-94996EED85FB.jpeg
 
  • Like
  • Love
  • Fire
Reactions: 37 users
Released May 2025… limited to 200 copies… someone wants some extra cash for shares :ROFLMAO::LOL::oops::unsure:


Screenshot_2025-06-23-18-36-52-50_4641ebc0df1485bf6b47ebd018b5ee76.jpg


 
  • Haha
  • Love
  • Like
Reactions: 12 users

DK6161

Regular
This week can go get farrcckd already.
In fact bring on 2026. Keen for the next AGM
 
  • Haha
Reactions: 1 users

Frangipani

Top 20

Here is a recent interview with Florian Corgnou, CEO of BrainChip partner Neurobus, which was conducted in the run-up to the 12 June INPI* Pitch Contest at Viva Technology 2025, during which five start-ups competed against each other. Neurobus ended up winning the pitch contest by “showcasing our vision for low-power, bio-inspired edge AI for autonomous systems” (see above post by @itsol4605).
*INPI France is the Institut National de la Propriété Industrielle, France’s National Intellectual Property Office.

In this interview, Florian Corgnou mentions NeurOS, an embedded operating system that Neurobus is developing internally. Interesting…

“CF: Traditional AI, based on the deep learning [2], is computationally, data-intensive and energy-intensive. However, the equipment we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are rare, or even absent.

So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.

This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.”




I found another reference to NeurOS here: https://dealroom.launchvic.org/companies/neurobus/

MORE ABOUT NEUROBUS

Neurobus is pioneering a new era of ultra-efficient, autonomous intelligence for drones and satellites. Leveraging neuromorphic computing, an AI inspired by the brain’s structure and energy efficiency, our edge AI systems empower aerial and orbital platforms to perceive, decide, and act in real-time, with minimal power consumption and maximum autonomy.

Traditional AI architectures struggle in constrained environments, such as low-Earth orbit or on-board UAVs, where power, weight, and bandwidth are critical limitations. Neurobus addresses this with a disruptive approach: combining event-based sensors with neuromorphic processors that mimic biological neural networks. This unique integration enables fast, asynchronous data processing, up to 100 times more power-efficient than conventional methods, while preserving situational awareness in extreme or dynamic conditions.

Our embedded AI systems are designed to meet the needs of next-generation autonomous platforms in aerospace, defense, and space. From precision drone navigation in GPS-denied environments to on-orbit space surveillance and threat detection, Neurobus technology supports missions where latency, energy, and reliability matter most.

We offer a modular technology stack that includes hardware integration, a proprietary neuromorphic operating system (NeurOS), and real-time perception algorithms. This enables end-users and integrators to accelerate the deployment of innovative, autonomous capabilities at the edge without compromising performance or efficiency.


Backed by deeptech expertise, partnerships with leading sensor manufacturers, and strategic collaborations in the aerospace sector, Neurobus is building the foundation for intelligent autonomy across air and space.

Our mission is to unlock the full potential of edge autonomy with brain-inspired AI, starting with drones and satellites, and scaling to all autonomous systems.
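
For anyone wondering what “event-based sensors with neuromorphic processors” means in practice, here is a minimal, purely illustrative Python sketch of the idea described above: an event camera only produces data where brightness changes, and the downstream step then consumes that sparse, asynchronous event stream instead of full frames. To be clear, none of the names or numbers below come from Neurobus, NeurOS or BrainChip’s Akida tooling; it is just a toy model of the concept.

```python
# Toy sketch of event-based sensing + asynchronous processing.
# Illustrative only: not NeurOS and not any Akida API.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # pixel column
    y: int          # pixel row
    t: float        # timestamp in seconds
    polarity: int   # +1 = brightness increased, -1 = brightness decreased

def frames_to_events(prev, curr, t, threshold=0.1):
    """Emit events only for pixels whose brightness changed by more than the threshold."""
    diff = curr.astype(float) - prev.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    return [Event(int(x), int(y), t, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]

def process_events(events, shape=(8, 8), decay=0.9):
    """Toy asynchronous update: each incoming event nudges a decaying per-pixel activity map."""
    activity = np.zeros(shape)
    for ev in events:              # handled one by one, as they arrive
        activity *= decay
        activity[ev.y, ev.x] += ev.polarity
    return activity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prev = rng.random((8, 8))
    curr = prev.copy()
    curr[2, 3] += 0.5              # only one pixel gets brighter ...
    events = frames_to_events(prev, curr, t=0.001)
    print(f"{len(events)} event(s) instead of {curr.size} pixel values per frame")
    print(process_events(events)[2, 3])
```

The point of the toy: if nothing moves in the scene, (almost) nothing gets computed, which is where the power savings of the event-based approach come from. A real pipeline would feed such events into a spiking neural network on neuromorphic silicon rather than into a NumPy array.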







French original:


NEUROBUS : une IA embarquée qui consomme très peu d’énergie​

Grâce à une intelligence artificielle sobre et efficiente, Neurobus réinvente, entre Toulouse et Paris, la façon dont les machines perçoivent et interagissent avec leur environnement, ouvrant ainsi la voie vers la conquête des milieux hostiles. Florian Corgnou, son dirigeant et fondateur, nous en dit un peu plus sur cette start-up de la Deeptech qu’il présentera à Viva Technology, lors du Pitch Contest INPI organisé en partenariat avec HEC Paris.​


PC-Neurobus-portrait%20seul-1024x683.jpg.webp


Pouvez-vous vous présenter en quelques mots ?

Florian Corgnou :
Je m’appelle Florian Corgnou, fondateur et CEO de Neurobus, une start-up deeptech que j’ai créée en 2023, entre Paris et Toulouse. Diplômé d’HEC, j’ai fondé une première entreprise dans le secteur du logiciel financier avant de rejoindre le siège européen de Tesla aux Pays-Bas. J’y ai travaillé sur des problématiques d’innovation et de stratégie produit.

Avec Neurobus, je me consacre à une mission : concevoir des systèmes embarqués d’intelligence artificielle neuromorphique, une technologie bio-inspirée qui réinvente la façon dont les machines perçoivent et interagissent avec leur environnement. Cette approche radicalement sobre et efficiente de l’IA ouvre des perspectives inédites pour les applications critiques dans la défense, le spatial, et la robotique autonome.

Notre conviction, c’est que l’autonomie embarquée ne peut émerger qu’en conciliant performance, sobriété énergétique et intelligence contextuelle, même dans les environnements les plus contraints, comme l’espace, les drones légers ou les missions en zones isolées.

Qu’est-ce qui rend votre entreprise innovante ?

F.C. :
Neurobus se distingue par l’intégration de technologies neuromorphiques, c’est-à-dire une IA capable de fonctionner en temps réel avec une consommation énergétique ultra-faible, à l’image du cerveau humain.
Nous combinons des caméras événementielles[1] avec des processeurs neuromorphiques pour traiter directement à la source des signaux complexes, sans avoir besoin d’envoyer toutes les données dans le cloud.

Ce changement de paradigme permet une autonomie décisionnelle embarquée inédite, essentielle dans les applications critiques comme la détection de missiles ou la surveillance orbitale.

Florian Corgnou, fondateur et CEO de Neurobus

Vous avez choisi une IA sobre, adaptée aux contraintes de son environnement. Pourquoi ce choix et en quoi cela change-t-il la façon de concevoir vos solutions ?

F.C. :
L’IA traditionnelle, basée sur le deep learning [2], est gourmande en calcul, en données et en énergie. Or, les matériels que nous équipons — satellites, micro-drones, robots en autonomie complète — évoluent dans des environnements où ces ressources sont rares, voire absentes.

Nous avons donc adopté une IA frugale, conçue dès le départ pour fonctionner avec peu : peu de données (grâce aux caméras événementielles), peu d’énergie (grâce aux puces neuromorphiques), et peu de mémoire.

Cela nous force à repenser toute la chaîne de conception : du matériel jusqu’aux algorithmes, en passant par le système d’exploitation embarqué que nous développons en interne, le NeurOS.



Quel est le plus gros défi auquel vous avez dû faire face au cours du montage de votre projet ?

F.C. :
L’un des plus grands défis a été de convaincre nos premiers partenaires et financeurs que notre technologie, bien qu’encore émergente, pouvait surpasser les approches conventionnelles.

Cela impliquait de créer de la confiance sans produit final, de prouver la valeur de notre approche avec des démonstrateurs très en amont, et de naviguer dans des écosystèmes exigeants comme le spatial ou la défense, où la crédibilité technologique et la propriété intellectuelle sont clés.


Votre prise en compte de la propriété industrielle a-t-elle été naturelle ? Quel rôle a joué l’INPI ?

F.C. :
Dès le début, nous avons compris que la propriété industrielle serait un levier stratégique essentiel pour valoriser notre R&D et protéger notre avantage technologique.

Cela a été naturel, car notre innovation se situe à l’intersection du hardware, du software et des algorithmes.

L’INPI nous a accompagnés dans cette démarche, en nous aidant à structurer notre propriété industrielle — brevets, marques, enveloppes Soleau… — et à mieux comprendre les enjeux liés à la valorisation de l’innovation dans un contexte européen.

[1] Afin d’éviter des opérations inutilement coûteuses en temps comme en énergie, ce type de caméra n’enregistre une donnée qu’en cas de changement de luminosité.
[2] Le Deep learning est un type d'apprentissage automatique, utilisé dans le cadre de l’élaboration d’intelligence artificielle, basé sur des réseaux neuronaux artificiels, c’est-à-dire des algorithmes reproduisant le fonctionnement du cerveau humain pour apprendre à partir de grandes quantités de données.


Données clés :
  • Date de création : avril 2023
  • Secteur d’activité : Deeptech - IA neuromorphique embarquée (spatial, défense, robotique)
  • Effectif : 6
  • Chiffre d’affaires : 600 k€ (2024)
  • Part du CA consacrée à la R&D : 70 % (estimé)
  • Part du CA à l’export : 20 %
  • Site web : https://neurobus.space/

Propriété industrielle :
Enveloppe(s) Soleau : 1




English translation provided on the INPI website:


NEUROBUS: an on-board AI that consumes very little energy​

Using a simple and efficient artificial intelligence (AI), Neurobus is reinventing the way machines perceive and interact with their environment between Toulouse and Paris, paving the way for conquering hostile environments. Florian Corgnou, its director and founder, tells us a little more about this Deeptech startup, which he will present at Viva Technology during the INPI Pitch Contest organized in partnership with HEC Paris.​

PC-Neurobus-portrait%20seul-1024x683.jpg.webp


Can you introduce yourself in a few words?

Florian Corgnou:
My name is Florian Corgnou, founder and CEO of Neurobus, a start-up deeptech which I created in 2023, between Paris and Toulouse. A graduate of HEC, I founded my first company in the financial software sector before joining Tesla's European headquarters in the Netherlands. There, I worked on innovation and product strategy issues.

With Neurobus, I'm dedicated to a mission: to design embedded neuromorphic artificial intelligence systems, a bio-inspired technology that reinvents the way machines perceive and interact with their environment. This radically sober and efficient approach to AI opens up unprecedented perspectives for critical applications in defense, space, and autonomous robotics.

Our belief is that on-board autonomy can only emerge by reconciling performance, energy efficiency and contextual intelligence, even in the most constrained environments, such as space, light drones or missions in isolated areas.

What makes your company innovative?

CF:
Neurobus stands out for its integration of neuromorphic technologies, i.e., an AI capable of operating in real time with ultra-low energy consumption, like the human brain.

We combine event cameras[1] with neuromorphic processors to process complex signals directly at the source, without needing to send all the data into the cloud.

This paradigm shift enables unprecedented on-board decision-making autonomy, essential in critical applications such as missile detection or orbital surveillance.

Florian Corgnou, founder and CEO of Neurobus

You've chosen a simple AI, adapted to the constraints of its environment. Why this choice, and how does it change the way you design your solutions?

CF:
Traditional AI, based on the deep learning [2], is computationally, data-intensive and energy-intensive. However, the equipment we equip – satellites, micro-drones, fully autonomous robots – operates in environments where these resources are rare, or even absent.

So we adopted a frugal AI, designed from the ground up to work with little: little data (thanks to event cameras), little energy (thanks to neuromorphic chips), and little memory.

This forces us to rethink the entire design chain: from hardware to algorithms, including the embedded operating system that we develop internally, NeurOS.


What was the biggest challenge you faced while setting up your project?

CF:
One of the biggest challenges was convincing our early partners and funders that our technology, while still emerging, could outperform conventional approaches.

This involved building trust without a final product, proving the value of our approach with early demonstrators, and navigating demanding ecosystems like space or defense, where technological credibility and intellectual property are key.


Was your consideration of industrial property a natural one? What role did the INPI play?

CF:
From the outset, we understood that industrial property would be an essential strategic lever to enhance our R&D and protect our technological advantage.

This was natural, because our innovation lies at the intersection of the hardware, with and algorithms. [There seems to be a translation error here, as the French original mentions hardware, software and algorithms: “l’intersection du hardware, du software et des algorithmes.”]

The INPI supported us in this process, helping us to structure our industrial property — patents, trademarks, Soleau envelopes, etc. — and to better understand the issues related to the promotion of innovation in a European context.

[1] To avoid unnecessarily costly operations in terms of time and energy, this type of camera only records data when there is a change in brightness.
[2] Le Deep learning is a type of machine learning, used in the development of artificial intelligence, based on artificial neural networks, that is, algorithms reproducing the functioning of the human brain to learn from large amounts of data.

Key data:
  • Date created: April 2023
  • Sector of activity: Deeptech - Embedded neuromorphic AI (space, defense, robotics)
  • Headcount: 6
  • Turnover: €600k (2024)
  • Share of turnover devoted to R&D: 70% (estimated)
  • Share of turnover from exports: 20%
  • Website: https://neurobus.space/

Industrial property:
Soleau envelope(s): 1



*Soleau envelope:


The Soleau envelope (French: Enveloppe Soleau), named after its French inventor, Eugène Soleau, is a sealed envelope serving as proof of priority for inventions valid in France, used exclusively to precisely ascertain the date of an invention, idea or creation of a work. It can be applied for at the French National Institute of Industrial Property (INPI). The working principles were defined in the ruling of May 9, 1986, published in the official gazette of June 6, 1986 (Journal officiel de la République française or JORF), although the institution of the Soleau envelope dates back to 1915.[1]

The envelope has two compartments which must each contain the identical version of the element for which registration is sought.[2] The INPI laser-marks some parts of the envelope for the sake of delivery date authentication and sends one of the compartments back to the original depositary who submitted the envelope.[2]

The originator must keep their part of the envelope sealed except in case of litigation.[3] The deposit can be made at the INPI, by airmail, or at the INPI's regional subsidiaries.[2] The envelope is kept for a period of five years, and the term can be renewed once.[3]

The envelope may not contain any hard element such as cardboard, rubber, computer disks, leather, staples, or pins. Each compartment can only contain up to seven A4-size paper sheets, with a maximum of 5 millimetres (0.2 in) thickness. If the envelope is deemed inadmissible, it is sent back to the depositary at their own expense.[2]

Unlike a patent or utility model, the Soleau envelope gives the depositor no exclusivity right over the claimed element. Compared to a later patent, it only allows continued use of the technique rather than ownership, and multiple people might submit envelopes to support separate similar uses before a patent is later granted to restrict the application.
 
  • Like
  • Fire
  • Love
Reactions: 35 users
Maybe we need to offer our expertise to the Govt of Viet Nam.

They just released an AI paper in April, and while it is only one small sentence in the scheme of the whole paper, they clearly believe promoting neuromorphic computing for its energy efficiency is a positive step.




April 11, 2025​



Artificial Intelligence Landscape Assessment (AILA): A Blueprint for Inclusive and Ethical AI in Viet Nam
Artificial Intelligence is no longer a distant horizon. It's here, reshaping how governments plan, how businesses grow, and how people live. In Viet Nam, AI holds real promise — but also real risks. The challenge now is not whether to use AI, but how to use it wisely, fairly, and in service of all.
To help answer that, UNDP Viet Nam, in collaboration with the Institute for Policy Studies and Media Development (IPS) and the UNDP Chief Digital Office, has launched the Artificial Intelligence Landscape Assessment (AILA). This landmark report offers the first deep dive into how prepared Viet Nam is to develop and govern AI systems in ways that are responsible and aligned with national values.
The report explores the government’s role as an AI user, ecosystem enabler, and ethical guardian. It maps current capacities, highlights gaps — such as limited data access and weak safeguards — and charts strategic actions: a national AI strategy, digital skills development, ethical frameworks, and inclusive investment in AI for public good.
Viet Nam is at a turning point. With smart, inclusive policies and long-term investment, the country has a real chance to shape an AI future that works for people — not just around them. Done right, AI can help advance equity, strengthen institutions, and accelerate progress toward the Sustainable Development Goals (SDGs).

Screenshot_2025-06-23-21-32-30-28_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
Screenshot_2025-06-23-21-33-18-43_e2d5b3f32b79de1d45acd1fad96fbb0f.jpg
 
  • Like
  • Fire
  • Love
Reactions: 15 users