BRN Discussion Ongoing

yogi

Regular
Hi FMF,

Looks like your suspicions were correct!

"Arquimea has deployed Akida with a Prophesee camera on a drone to detect distressed swimmers and surfers in the ocean helping lifeguards scale their services for large beach areas, opting for an event-based computing solution for its superior efficiency and consistently high-quality results."

I wonder if this has anything to do with the recent hiring of Finn Ryder as a Development Representative at BrainChip, since he was a Senior Lifeguard and First Responder for the City of Huntington Beach for five years prior to joining us?



Wow, BrainChip removed it from their website. Wonder why?
 
  • Thinking
  • Like
  • Wow
Reactions: 16 users

MDhere

Top 20
Wow, BrainChip removed it from their website. Wonder why?
Maybe there was a typo, or maybe, just maybe, it will be part of a price-sensitive announcement?
 
  • Like
  • Thinking
Reactions: 7 users

jtardif999

Regular
ADVISOR UPSIDE

Nasdaq to Open New Office on ‘Y’All Street

Davy Crockett famously said “You may all go to hell, and I will go to Texas.” Nasdaq seems to agree (at least with that last part).

The exchange operator announced Tuesday that it will set up a new regional headquarters in Dallas, with an expected opening by year-end, and is planning additional investments in Texas. The new site isn’t just about winning listings; it will also include part of Nasdaq’s corporate solutions and financial crime management technology businesses.

The move is just the latest development for the Lone Star State’s burgeoning “Y’all Street,” which is already set to become home to the New York Stock Exchange Texas (formerly NYSE Chicago), and the Texas Stock Exchange — an upstart exchange financed by names including BlackRock, Charles Schwab and Citadel.

Yippee Ki-Yay

Though a second US headquarters in Dallas is a new chapter for the New York-based Nasdaq, it’s had a presence in Texas for more than a decade, establishing an office in Irving in 2013. The decision to double down on the largest state in the US mainland comes as a result of Nasdaq’s growing reach in the South as well as the region’s economic success:


Today, Nasdaq generates more than $750 million in Texas and the Southeast region, and has about 800 clients in the state, including corporate issuers, financial institutions, and asset managers, according to the exchange operator.


Texas is also home to more than 200 companies listed on the Nasdaq Composite Index, representing nearly $2 trillion in market cap as of December.
“Nasdaq is deeply ingrained in the fabric of the Texas economy,” Adena Friedman, chair and CEO of Nasdaq, said in a statement.

What’s So Great About Texas?

Just like New York and California, Texas hosts businesses both big and small, and it has the benefits of a large and growing labor force, no corporate or personal income tax, and being a “right-to-work” state, meaning workers can’t be required to join a union as a condition of employment. Because of its business-friendly environment, Texas contains more Fortune 500 companies than any other state, including Hewlett Packard, Tesla, and Charles Schwab.

And if any of those names don’t impress you, even Chuck E. Cheese is based in Texas.
 
  • Like
  • Love
  • Fire
Reactions: 6 users

jtardif999

Regular
  • Haha
Reactions: 2 users


The Robots Are Coming – Physical AI and the Edge Opportunity



By Pete Bernard
CEO, EDGE AI FOUNDATION


We have imagined “robots” for thousands of years, dating back to 3000 B.C., when Egyptian water clocks used human figurines to strike hour bells. They have infused our culture through movies like Metropolis in 1927, C3PO and R2D2 in Star Wars, and more.

Practically speaking, today’s working robots are much less glamorous. They have been developed over the past decades to handle dangerous and repetitive tasks and look nothing like humans. They roll through warehouses and mines and deposit fertilizer on our farms. They also extend our perceptual reach through aerial and ground-based inspection systems, using visual and other sensor input.

Now that edge AI technology has evolved and is maturing, the notion of physical AI is taking hold, and it promises to be a critical platform fundamentally enabled by edge AI technologies. A generally agreed definition of physical AI is:

A combination of AI workloads running on autonomous robotic systems that include physical actuators.

This is truly “AI in the real world” in that these systems physically interact with the real world through motion, touch, vision, and physical control mechanisms including grasping, carrying and more. It can combine a full suite of edge AI technologies in a single machine. Executing AI workloads where the data is created will be critical for the low-latency and low-power needs of these platforms. These workloads could range from:

  • tinyML workloads running in their sensor networks and handling sensor fusion
  • Neuromorphic computing for high-performance/ultra-low-power, low-latency and wide dynamic range scenarios
  • CNN/RNN/DNN models running AI vision on image feeds, LIDAR or other “seeing” and “perceiving” platforms
  • Transformer-based generative AI models (including reasoning) performing context, understanding and human-machine interface functions
These are all designed into one system, with the complex orchestration, safety/security and controls needed for enterprise-grade deployment, management and servicing, as sketched below. In addition, as higher-TOPS/watt, lower-power/higher-performance edge AI platforms come to market, they will positively impact the mobility, cost and battery life of these systems.
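To make the orchestration idea concrete, here is a minimal, hypothetical sketch (Python) of how such workloads might be gated and dispatched on a single robot. Every function, model and threshold below is a placeholder invented for illustration, not any vendor’s actual API.

```python
import random  # stand-in for real sensor drivers and model runtimes

# --- Placeholder workloads; in a real system each would call a dedicated runtime ---
def tinyml_wake_check(audio_frame):
    """tinyML-style always-on trigger on a sensor node (stubbed as a threshold)."""
    return audio_frame > 0.8

def neuromorphic_event_filter(events):
    """Ultra-low-power event filtering, as a neuromorphic stage might perform."""
    return [e for e in events if e["magnitude"] > 0.5]

def vision_model(frame):
    """CNN/DNN-style perception on an image feed (stubbed)."""
    return {"object": "person", "confidence": random.random()}

def language_model(prompt):
    """Transformer-based HMI / task-planning step (stubbed)."""
    return f"Plan for: {prompt}"

def control_loop():
    """One pass of the orchestration: cheap checks gate the expensive models."""
    audio_frame = random.random()
    if not tinyml_wake_check(audio_frame):
        return "idle"                                   # stay in low-power mode

    events = [{"magnitude": random.random()} for _ in range(10)]
    if not neuromorphic_event_filter(events):
        return "idle"

    detection = vision_model(frame=None)                # heavyweight perception
    if detection["confidence"] < 0.6:
        return "monitoring"

    return language_model(f"approach {detection['object']}")  # generative HMI/planning

if __name__ == "__main__":
    print(control_loop())
```

The point of the structure is the gating: the cheapest, always-on workloads run continuously, and the expensive perception and generative models are only invoked when the earlier stages find something worth acting on.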



Robotics is where AI meets physics. Robots require sophisticated physical capabilities to move, grasp, extend, sense and perform a wide range of tasks, but they are also software platforms that require training and decision making, making them prime candidates for one of the most sophisticated combinations of AI capabilities. The advent of accelerated semiconductor platforms, advanced sensor networks, sophisticated middleware for orchestration, tuned AI models, emerging powerful SLMs, applications and high-performance communication networks is ushering in a new era of physical AI.

Let’s level set with a taxonomy of robots and a definition of terms. There are many ways to describe robots – they can be sliced by environment (warehouse), by function (payload) or even by mobility (unmanned aerial vehicles). Here is a sample of some types of robots in deployment today:

  • Pre-programmed robots
    • These can be heavy industrial robots, used in very controlled environments for repetitive and precise manufacturing tasks. These robots are typically fixed behind protective barriers and cost hundreds of thousands of dollars.
  • Tele-operated robots
    • These are used as “range extenders” for humans to perform inspections, observations, or repairs in challenging human environments – including drones or underwater robots for welding and repair. Perhaps the best-known tele-operated robots were the robots sent to Mars by NASA in the last few decades. There has also been a fish robot named SoFi designed to mimic propulsion via its tail and twin fins, swimming in the Pacific Ocean at depths of up to 18 meters. [1]
  • Autonomous robots
    • You probably have one of these in your house in the form of a robot vacuum cleaner, navigating without supervision and relying on its sensors. Recently we have seen a number of “lawnmower” robots introduced to take on this laborious task. In agriculture, robots are already inspecting and even harvesting crops in an industry with chronic labor shortages.[2] There is also a thriving industry for autonomous warehouse robots – including in Amazon warehouses.[3]
  • Augmenting robots
    • These are designed to aid or enhance human capabilities, such as prosthetic limbs or exoskeletons. You were probably first exposed to this category of robots when you watched “The Six Million Dollar Man” on TV, but on a more serious note, they are providing incredible capabilities for amputees and enabling safer work environments for physical labor.[4]
  • Humanoid robots
    • Here’s where it gets interesting. We have developed a bi-pedal world – why not develop robots that work in that world as it’s been designed? Humanoid robots resemble humans – bi-pedal (or quad-pedal in the case of Boston Dynamics) – can communicate in natural language and facial expressions, and perform a broad range of tasks using their limbs, hands and human-like appendages. Quad-pedal robots have only been deployed in the low thousands worldwide, and we are still in the very early stages of development, deployment, and reasonable cost. Companies like Enchanted Tools[5] are demonstrating humanoid robots that can move amongst humans to carry lighter loads, deliver items, and communicate in natural language. Although humanoid robots will catch the bulk of the media’s attention in coming years, and have the most “cultural impact,” the other robot categories will also benefit greatly from generative AI and drive significantly greater efficiencies across industries.


How Generative AI on the edge will impact Physical AI

It’s hard to overstate the impact that Generative AI will have on the field of robotics. Beyond enabling much more natural communication and understanding, Generative AI model architectures like Transformers will be combined with other model architectures like CNNs, Isolation Forests and others to provide context and human-machine interfaces for image recognition, anomaly detection and observational learning. It will be a “full stack” of edge AI from metal to cloud.
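As a small, concrete example of the “other model architectures” mentioned above, here is a minimal sketch of anomaly detection on robot sensor readings using an Isolation Forest (scikit-learn). The joint-torque data and parameter choices are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented example data: joint-torque readings from normal operation (100 samples, 6 joints)
rng = np.random.default_rng(0)
normal_torques = rng.normal(loc=0.0, scale=1.0, size=(100, 6))

# Fit the Isolation Forest on "known good" behaviour only
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_torques)

# New readings: one typical sample and one with an implausibly large torque spike
new_readings = np.array([
    [0.1, -0.2, 0.05, 0.3, -0.1, 0.0],
    [5.0,  4.8, 6.1, -5.5, 5.2, 4.9],
])
labels = detector.predict(new_readings)   # +1 = looks normal, -1 = anomaly
print(labels)                             # expected: [ 1 -1]
```

A lightweight detector like this can run continuously alongside heavier models, flagging readings that warrant closer inspection.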

Let’s take a look at the differences between traditional AI used in robotics and what Generative AI can bring:

Traditional AI
  • Rule-Based Approach: Traditional AI relies on strict rules set by programmers – like an actor following a precise script. These rules dictate how the AI system behaves, processes data, and makes decisions.
  • Focused Adaptability: ML models such as CNN/RNN/DNN are designed for focused tasks and operate based on predefined instructions. They run in very resource-constrained environments at very low power and cost.
  • Data Analysis and Prediction: Non-generative AI excels at data analysis, pattern recognition, and making predictions. However, there is no creation of new data; it merely processes existing information.

Generative AI
  • Learning from Data Examples: Generative AI learns from data examples – essentially “tokenized movement.” It adapts and evolves based on the patterns it recognizes in the training data – like a drummer who watches their teacher and keeps improving. This can be done in the physical world or in a simulated world for safer and more extensive “observational training.”
  • Creating New Data: Unlike traditional AI, generative AI can create new data based on experience and can adapt to new surroundings or conditions. However, this requires significantly more TOPS/W and RAM, which can drive up cost and limit battery-powered applicability.
  • Applications in Robotics: Generative AI can drive new designs and implementations in robotics that leverage its ability to generate new data, whether it’s new communication/conversational techniques (in multiple languages), new movement scenarios or other creative problem solving.


In summary, while many forms of edge AI are excellent and necessary for analyzing existing data and making predictions in resource-constrained and low-power environments, generative AI at the edge will now add the ability to create new data and adapt dynamically based on experience. The application of Generative AI to robotics will unlock observational learning, rich communication, and a much broader application of robots across our industries and our lives.



Safe and Ethical Robotics

Whenever robots are mentioned, the comparison to “evil robots” from our culture is not far behind: The Terminator, Ultron, or the Gunslinger from Westworld. And at the same time, we have enjoyed anthropomorphized robots like C3PO and R2D2, or Wall-E. And then there are the ones in between, like in the movie The Creator.

As attention has been paid to the prospect of Generative AI moving toward AGI, what guardrails, best practices and outright legislation exist to keep robotic efforts – paired with Generative AI – in the category of good or neutral?

Isaac Asimov famously penned his three laws of robotics in 1942 as part of his short story “Runaround”:[6]
  • A robot shall not harm a human, or by inaction allow a human to come to harm
  • A robot shall obey any instruction given to it by a human
  • A robot shall avoid actions or situations that could cause harm to itself
In 2021, Dr. Kate Darling – a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab – wrote an article in The Guardian proposing that we think about robots more like animals than as rivals to humans. Once we make that shift, we can better discuss who is responsible for robot actions and for the societal impacts that robots bring, such as transformations in the labor market.[7]

The European Union published “Civil law rules on robotics” back in 2017, which addressed the definition of a robot, where liability lies, the role of insurance and other key items. In 2023 a bill was introduced in Massachusetts in the US that would 1) ban the sale and use of weapons-mounted robotic devices, 2) ban the use of robotic devices to threaten or harass, and 3) ban the use of robotic devices to physically restrain an individual. It’s unclear how or when similar legislation will make it to the federal level.



Observational Learning Is a Game Changer

In the world of edge AI, training has happened in the cloud or in server-class GPU environments, and inferencing has happened on the light edge. With the introduction of reinforcement learning and new work in continuous learning, we will see the edge becoming a much more viable area for training.

However, in physical AI platforms, observational learning (sometimes referred to as behavior cloning) in AI allows robots to learn new skills simply by watching humans – in reality or in a simulated physical environment. Instead of being programmed step-by-step, robots can make connections in their neural networks based on observing human behavior and actions. This kind of unstructured training will enable robots to better understand the nuances of a given task and make their interaction with humans much more natural.
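As a rough illustration of what behaviour cloning reduces to in code, here is a minimal sketch using PyTorch, assuming a toy dataset of (observation, action) pairs recorded from human demonstrations; the dimensions and data are made up for illustration.

```python
import torch
import torch.nn as nn

# Toy demonstration data: 512 recorded (observation, action) pairs.
# In practice these would come from tele-operation or motion capture of a human.
obs_dim, act_dim = 12, 4
observations = torch.randn(512, obs_dim)
# Pretend the demonstrator's policy is some unknown smooth mapping:
expert_actions = torch.tanh(observations @ torch.randn(obs_dim, act_dim))

# A small policy network that maps observations to actions
policy = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, act_dim),
)

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Behaviour cloning = plain supervised learning on the demonstrations
for epoch in range(200):
    pred_actions = policy(observations)
    loss = loss_fn(pred_actions, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final imitation loss: {loss.item():.4f}")
```

Real systems add much more – sequence models or diffusion policies, data augmentation, and safety filters – but the core idea is the same supervised mapping from observed states to demonstrated actions.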


There have been a number of key advances in AI models for observational learning, starting with CNN model types and recently leveraging diffusion model types, such as the one presented in the Microsoft Research paper in 2023 – Imitating Human Behaviour with Diffusion Models.[8]

In March of 2024, NVIDIA introduced Gr00t[9], its own foundation model designed for observational learning on its ISAAC/JETSON robotics platforms. It was demonstrated at the NVIDIA GTC keynote by Jensen Huang and also leverages NVIDIA’s Omniverse “digital twin” environment to develop virtualized physical environments that can train robots via observational learning in a safe and flexible setting. This was updated in 2025 to Gr00t N1, alongside a new “Newton” physics engine. We’re now seeing foundation models tuned for robotics platforms[10] like Gr00t, but also RFM-1 by Covariant, among others. Expect this area to proliferate with options, much as foundation models for LLMs have in the cloud.

Robotics is increasingly framed as a “three computer problem”: an AI model is trained in the cloud using generative AI and LLMs, model execution and ROS run on the robotics platform itself, and a simulation/digital twin environment is used to develop and train safely and efficiently.
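A minimal sketch of the on-robot leg of that split, assuming a policy has already been trained in the cloud and exported to an ONNX file; the file name, input shape and sensor stubs below are assumptions for illustration.

```python
import numpy as np
import onnxruntime as ort

# Assumption: "policy.onnx" was trained/exported in the cloud (computer #1) and
# validated in the simulator (computer #3); this loop is the on-robot leg (computer #2).
session = ort.InferenceSession("policy.onnx")
input_name = session.get_inputs()[0].name

def read_sensors():
    # Placeholder for real sensor drivers; shape must match the exported model.
    return np.zeros((1, 12), dtype=np.float32)

def send_to_actuators(action):
    print("actuator command:", action)

for _ in range(10):                       # would be an endless control loop on the robot
    obs = read_sensors()
    action = session.run(None, {input_name: obs})[0]
    send_to_actuators(action)
```

The cloud and simulation legs produce and validate the exported model; the robot itself only ever runs inference inside its control loop.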



The Edge AI Opportunity for Robotics

“Everything That Moves Will Be Robotic” – Jensen Huang

The confluence of generative AI and robotics is swinging the robotic pendulum back into the spotlight. Although Boston Dynamics has only deployed around 1,500 Spot robots worldwide so far, expect many more, and in many more configurations, throughout our warehouses, our farms, and our manufacturing floors. Expect many more humanoid experiments, and expect a hype wave washing over us, with plenty of media coverage of every failure.

Running generative AI on these platforms will require significant TOPS horsepower and high-performance memory subsystems, in addition to advanced controls, actuators and sensors. We will see “datacenter” class semiconductors moving down into these platforms, but just as interesting will be edge-native semiconductor platforms moving up into this space, with the ruggedized thermal and physical properties, low power, and integrated communications needed. We will also see much new stand-alone AI acceleration silicon paired with traditional server-class silicon. Mainstream platforms like phones and AI PCs will help drive down costs with their market scale.

However, in addition to requiring top end semiconductors and plenty of RAM, robotic platforms – especially humanoid ones – will require very sophisticated sensors, actuators, and electro-mechanical equipment – costing tens of thousands of dollars for the foreseeable future.

To keep things in perspective, Goldman Sachs[11] forecasted a 2035 Humanoid Robot TAM of US$38bn with shipments reaching 1.4m units. That’s not a tremendous unit volume for humanoid robots (PCs ship around 250m units per year, smartphones north of a billion) – we can expect orders of magnitude more “functional form factor robots” in warehouses, vacuuming homes and doing other focused tasks.
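For rough context, dividing that forecast TAM by the forecast unit volume gives an implied average price per humanoid robot (back-of-envelope arithmetic on the figures above, not a number from the report): US$38bn ÷ 1.4m units ≈ US$27,000 per unit, roughly in line with the “tens of thousands of dollars” per platform mentioned earlier.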

These platforms – like the ones now available from Qualcomm, NVIDIA, NXP, Analog Devices and more – are attracting developers who are taking their server-class software skills and combining them with embedded computing expertise. Like mobility, robotics and physical AI are challenging developers and designers in new ways and provide a unique opportunity for workforce development, skill enhancement and career growth.

A key challenge here is to avoid the pitfalls of Industry 4.0 and IoT – how do we collaborate as an industry to standardize data sharing models, digital twin models, code portability and other elements of the robotics stack? If this area becomes more fractured and siloed, we could see significant delays in real deployments of more advanced genAI-driven robots.

Developers, designers and scientists are pushing the envelope and closing the gap between our imaginations and reality. As with cloud-based AI, the use of physical AI will require important guardrails and best practices, not only to keep us safe but to make this newfound expansion of physical AI capabilities accretive to our society.

We should not underestimate the impact that new robotics platforms will have on our culture, our labor force, and our existential mindset. We’re at a turning point: edge AI technologies like physical AI are combining traditional sensor AI and machine learning with generative AI, providing a call to action for all technology providers in the edge AI “stack,” from metal to cloud, as well as an opportunity for businesses across segments to rethink how these new platforms will leverage this new edge AI technology in ways that are still in our imagination.


[1] https://www.csail.mit.edu/research/sofi-soft-robotic-fish

[2] https://builtin.com/robotics/farming-agricultural-robots

[3] https://www.aboutamazon.com/news/operations/amazon-introduces-new-robotics-solutions

[4] https://www.automate.org/robotics/service-robots/service-robots-exoskeleton

[5] https://enchanted.tools/

[6] https://www.goodreads.com/en/book/show/48928553

[7] https://tdwi.org/articles/2021/06/1...drails-into-ai-driven-robotic-assistants.aspx

[8] https://www.microsoft.com/en-us/res...tating-human-behaviour-with-diffusion-models/

[9] https://nvidianews.nvidia.com/news/foundation-model-isaac-robotics-platform

[10] Foundation Models in Robotics: Applications, Challenges, and the Future – https://arxiv.org/html/2312.07843v1

[11] https://www.goldmansachs.com/intell...n-humanoid-robot-the-ai-accelerant/report.pdf
Very sexy post Frangipani!
This is where it's at!
Nudge nudge, wink wink, say no more, say no more..






2026 is going to be "Our" year!
 
  • Haha
Reactions: 9 users

Esq.111

Fascinatingly Intuitive.
Good Morning Chippers,

SoftBank dipping its toes in the water; there seem to be a few acquisitions happening of late in the tech sphere.



Regards,
Esq.
 
  • Like
  • Wow
  • Love
Reactions: 14 users

Rach2512

Regular

It states "This technology uses real-time machine-learning algorithms to calculate the best route .....


 
  • Like
Reactions: 10 users

7für7

Top 20
Yesterday evening I had time to watch the whole presentation. But guys, this looks so fake… I can’t believe people actually believe that this robot is not controlled by someone. 🤦🏻‍♂️

 
  • Like
Reactions: 1 users

Bravo

If ARM was an arm, BRN would be its biceps💪!

SoftBank to buy Arm-based AI chipmaker for $6.5bn

Nadine Hawkins

March 20, 2025 09:56 AM

SoftBank has expanded its AI portfolio with a $6.5 billion deal to buy Ampere Computing, a Silicon Valley startup developing Arm-based sustainable AI chips.

SoftBank believes Ampere’s chips will play an integral role in the future of AI, an area currently dominated by Nvidia’s graphics processing units (GPUs).
The technology, licensed from Arm Holdings which SoftBank acquired in 2016, is the same that powers nearly all smartphones globally.
Masayoshi Son, SoftBank’s chair and CEO said: "The future of AI requires breakthrough computing power. Ampere’s expertise in semiconductors and high-performance computing will help accelerate this vision and deepen our commitment to AI innovation in the US."
Ampere’s chips are designed to handle both general-purpose computing and AI tasks, which include critical workloads such as machine learning, natural language processing, and deep learning.
Companies are vying to build the next-generation chips to power growing AI training and inference workloads. SoftBank’s acquisition of Ampere will allow the company to integrate Ampere’s expertise into its existing Arm ecosystem, which could help the firm meet the growing demand for AI-driven computing power.

Earlier this year SoftBank unveiled the $500 billion Stargate project that aims to build state-of-the-art AI infrastructure for OpenAI over the next four years.
The initiative launched with an immediate investment of $100 billion. At the time of launch SoftBank stated it will create hundreds of thousands of jobs in the US, spurring economic growth and offering long-term benefits to the global economy.
At the time of the launch Son said: “The Stargate Project is not just an investment in AI but in the future of American industry and global security.”
“This initiative will drive innovation and economic prosperity, positioning the U.S. at the forefront of AI technology development.”
The partnership structure of the project includes SoftBank, OpenAI, Oracle, and MGX as the initial equity funders. SoftBank and OpenAI will serve as the lead partners, with SoftBank taking financial responsibility and OpenAI handling operational oversight. Masayoshi Son will serve as the chairman of the project, guiding its development and execution.
Among the project's key technology partners are some of the biggest names in the industry, including Arm, Microsoft, NVIDIA, Oracle, and OpenAI. These companies will collaborate closely to build and operate the advanced computing systems required to support the ambitious goals of Stargate. The collaboration builds on a longstanding relationship between OpenAI and NVIDIA, which began in 2016, and a more recent partnership between OpenAI and Oracle.
 
  • Like
  • Love
  • Thinking
Reactions: 20 users

7für7

Top 20

SoftBank to buy Arm-based AI chipmaker for $6.5bn
Not impressed because it’s not my company 😂
 
  • Haha
  • Like
Reactions: 6 users

Bravo

If ARM was an arm, BRN would be its biceps💪!
Not impressed because it’s not my company 😂

My interpretation is that this acquisition will help support Arm's ambitions to develop its own custom chips, which might in turn provide more of an incentive for Arm to integrate BrainChip's Akida neuromorphic processor to deliver advanced AI capabilities at the edge.
 
  • Like
  • Fire
  • Thinking
Reactions: 25 users

7für7

Top 20
My interpretation is that this acquisition will help support Arm's ambitions to develop its own custom chips, which might in turn provide more of an incentive for Arm to integrate BrainChip's Akida neuromorphic processor to deliver advanced AI capabilities at the edge.
In the past, we’ve speculated a lot about possible connections between our partners and the big players… While I do find it interesting in general, after so many misinterpretations, I’m no longer convinced that this is of any help to us, especially since BrainChip isn’t the only company in Arm’s portfolio. I don’t even pay attention to such news anymore. Only when something official comes from BrainChip, or when other companies mention BrainChip directly. Otherwise, it only leads to disappointment. And eventually to the dolci syndrome.

Don’t get me wrong, I’m still positive and keep buying more. However, these kinds of reports are not decisive for me. It’s like when you were a kid, and someone promised to buy you something or take you to a certain amusement park the next day, just to keep you calm. At some point, you get old enough to realize it was just empty words… The biggest letdown of all: Santa Claus.
 
  • Like
  • Sad
  • Fire
Reactions: 12 users

7für7

Top 20
Don’t be sad @Deadpool … An unexpected piece of top news can push us forward much faster than 1001 speculations. And I’m more convinced that out of nowhere, something will be announced that we never even imagined! 💪
 
  • Like
  • Fire
Reactions: 5 users

7für7

Top 20
"I don’t even pay attention to such news anymore."

Just ignore and scroll down to next one.
🤦🏻‍♂️

Just because I don’t chew on every bone that’s thrown my way doesn’t mean I don’t like the meat that surrounds it.

I find Bravo’s posts interesting, but I don’t immediately connect everything to BrainChip. So maybe slow down a bit before posting unnecessary comments.
 
Last edited:

Deadpool

Did someone say KFC
Don’t be sad @Deadpool … An unexpected piece of top news can push us forward much faster than 1001 speculations. And I’m more convinced that out of nowhere, something will be announced that we never even imagined! 💪
It's not that.
It was the Santa comment. Don't you dare tell me he's not real. :cry:
 
  • Haha
  • Like
  • Love
Reactions: 10 users

7für7

Top 20
  • Like
  • Haha
Reactions: 2 users

HopalongPetrovski

I'm Spartacus!
  • Haha
  • Like
Reactions: 6 users

Esq.111

Fascinatingly Intuitive.
[Request] How fast must the wind turbine spin to achieve this scenario? : r/theydidthemath


Hate to break it to everyone but Santa did not pull through.
 
  • Haha
  • Like
Reactions: 22 users

AARONASX

Holding onto what I've got
  • Haha
  • Like
  • Sad
Reactions: 10 users

overpup

Regular
[Request] How fast must the wind turbine spin to achieve this scenario? : r/theydidthemath


Hate to break it to everyone but Santa did not pull through.
oh deer
 
  • Haha
  • Like
Reactions: 11 users