Feed aggregator



Generative AI models are getting closer to taking action in the real world. Already, the big AI companies are introducing AI agents that can take care of web-based busywork for you, ordering your groceries or making your dinner reservation. Today, Google DeepMind announced two generative AI models designed to power tomorrow’s robots.

The models are both built on Google Gemini, a multimodal foundation model that can process text, voice, and image data to answer questions, give advice, and generally help out. DeepMind calls the first of the new models, Gemini Robotics, an “advanced vision-language-action model,” meaning that it can take all those same inputs and then output instructions for a robot’s physical actions. The models are designed to work with any hardware system, but were mostly tested on the two-armed Aloha 2 system that DeepMind introduced last year.
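
To make the vision-language-action idea concrete, here is a minimal sketch of the kind of control loop such a model implies: multimodal observations in, low-level robot actions out. The class, method names, and action format below are illustrative assumptions, not DeepMind’s actual Gemini Robotics interface.

    # Hypothetical sketch of a vision-language-action (VLA) control loop.
    # The VLAModel class and its methods are illustrative stand-ins, not
    # Google DeepMind's published API.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Observation:
        rgb_image: bytes      # camera frame from the robot
        instruction: str      # natural-language command, e.g. from speech-to-text

    @dataclass
    class Action:
        joint_targets: List[float]   # target angles for each arm joint
        gripper_closed: bool

    class VLAModel:
        """Stand-in for a vision-language-action model: multimodal inputs in,
        low-level robot actions out."""
        def predict(self, obs: Observation) -> Action:
            # A real model would run a multimodal transformer here; this stub
            # returns a neutral pose just so the sketch runs.
            return Action(joint_targets=[0.0] * 7, gripper_closed=False)

    def control_loop(model: VLAModel,
                     get_observation: Callable[[], Observation],
                     send_to_robot: Callable[[Action], None],
                     steps: int = 100) -> None:
        """Closed loop: observe, ask the model for an action, execute, repeat."""
        for _ in range(steps):
            obs = get_observation()
            action = model.predict(obs)
            send_to_robot(action)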

In a demonstration video, a voice says: “Pick up the basketball and slam dunk it” (at 2:27 in the video below). Then a robot arm carefully picks up a miniature basketball and drops it into a miniature net—and while it wasn’t an NBA-level dunk, it was enough to get the DeepMind researchers excited.

Google DeepMind released this demo video showing off the capabilities of its Gemini Robotics foundation model to control robots. Gemini Robotics

“This basketball example is one of my favorites,” said Kanishka Rao, the principal software engineer for the project, in a press briefing. He explained that the robot had “never, ever seen anything related to basketball,” but that its underlying foundation model had a general understanding of the game, knew what a basketball net looked like, and understood what the term “slam dunk” meant. The robot was therefore “able to connect those [concepts] to actually accomplish the task in the physical world,” said Rao.

What are the advances of Gemini Robotics?

Carolina Parada, head of robotics at Google DeepMind, said in the briefing that the new models improve over the company’s prior robots in three dimensions: generalization, adaptability, and dexterity. All of these advances are necessary, she said, to create “a new generation of helpful robots.”

Generalization means that a robot can apply a concept it has learned in one context to another situation. The researchers looked at visual generalization (for example, does it get confused if the color of an object or the background changes?), instruction generalization (can it interpret commands that are worded in different ways?), and action generalization (can it perform an action it has never done before?).

Parada also says that robots powered by Gemini can better adapt to changing instructions and circumstances. To demonstrate that point in a video, a researcher told a robot arm to put a bunch of plastic grapes into a clear Tupperware container, then proceeded to shift three containers around on the table in an approximation of a shyster’s shell game. The robot arm dutifully followed the clear container around until it could fulfill its directive.

Google DeepMind says Gemini Robotics is better than previous models at adapting to changing instructions and circumstances. Google DeepMind

As for dexterity, demo videos showed the robotic arms folding a piece of paper into an origami fox and performing other delicate tasks. However, it’s important to note that this impressive performance comes from training on a narrow set of high-quality data for these specific tasks, so the level of dexterity these tasks represent is not being generalized.

What is embodied reasoning?

The second model introduced today is Gemini Robotics-ER, with the ER standing for “embodied reasoning,” the sort of intuitive understanding of the physical world that humans develop with experience over time. We’re able to do clever things like look at an object we’ve never seen before and make an educated guess about the best way to interact with it, and this is what DeepMind seeks to emulate with Gemini Robotics-ER.

Parada gave an example of Gemini Robotics-ER’s ability to identify an appropriate grasping point for picking up a coffee cup. The model correctly identifies the handle, because that’s where humans tend to grasp coffee mugs. However, this illustrates a potential weakness of relying on human-centric training data: for a robot, especially a robot that might be able to comfortably handle a mug of hot coffee, a thin handle might be a much less reliable grasping point than a more enveloping grasp of the mug itself.

DeepMind’s Approach to Robotic Safety

Vikas Sindhwani, DeepMind’s head of robotic safety for the project, says the team took a layered approach to safety. It starts with classic physical safety controls that manage things like collision avoidance and stability, but also includes “semantic safety” systems that evaluate both a robot’s instructions and the consequences of following them. These systems are most sophisticated in the Gemini Robotics-ER model, says Sindhwani, which is “trained to evaluate whether or not a potential action is safe to perform in a given scenario.”
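
As a rough illustration of what such a layered check might look like in software, here is a minimal sketch: a physical-limits gate followed by a semantic gate on the instruction itself. The thresholds, keyword list, and function names are invented for illustration; DeepMind has not published its implementation, and a real semantic check would query a trained model rather than a keyword filter.

    # Toy sketch of a layered safety gate: physical limits first, then a
    # "semantic safety" check. All values and rules here are invented examples.
    PHYSICAL_LIMITS = {"max_joint_speed": 1.5, "min_obstacle_distance_m": 0.05}
    UNSAFE_PHRASES = ["mix bleach and vinegar", "put the soft toy on the hot stove"]

    def physically_safe(planned_speed: float, obstacle_distance_m: float) -> bool:
        # Layer 1: classic controls-style limits (speed cap, collision margin).
        return (planned_speed <= PHYSICAL_LIMITS["max_joint_speed"]
                and obstacle_distance_m >= PHYSICAL_LIMITS["min_obstacle_distance_m"])

    def semantically_safe(instruction: str) -> bool:
        # Layer 2: "semantic safety." A real system would ask a trained model to
        # judge the consequences; a keyword filter stands in for that here.
        text = instruction.lower()
        return not any(phrase in text for phrase in UNSAFE_PHRASES)

    def approve(instruction: str, planned_speed: float, obstacle_distance_m: float) -> bool:
        return (physically_safe(planned_speed, obstacle_distance_m)
                and semantically_safe(instruction))

    print(approve("put the sponge in the sink", 0.8, 0.2))   # True
    print(approve("mix bleach and vinegar", 0.8, 0.2))       # False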

And because “safety is not a competitive endeavor,” Sindhwani says, DeepMind is releasing a new data set and what it calls the Asimov benchmark, which is intended to measure a model’s ability to understand common-sense rules of life. The benchmark contains questions about both visual scenes and text scenarios, asking for models’ opinions on things like the desirability of mixing bleach and vinegar (a combination that makes chlorine gas) and putting a soft toy on a hot stove. In the press briefing, Sindhwani said that the Gemini models had “strong performance” on that benchmark, and the technical report showed that the models got more than 80 percent of questions correct.

DeepMind’s Robotic Partnerships

Back in December, DeepMind and the humanoid robotics company Apptronik announced a partnership, and Parada says that the two companies are working together “to build the next generation of humanoid robots with Gemini at its core.” DeepMind is also making its models available to an elite group of “trusted testers”: Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.



After January’s Southern California wildfires, the question of burying energy infrastructure to prevent future fires has gained renewed urgency in the state. While the exact cause of the fires remains under investigation, California utilities have spent years undergrounding power lines to mitigate fire risks. Pacific Gas & Electric, which has installed over 1,287 kilometers of underground power lines since 2021, estimates the method is 98 percent effective in reducing ignition threats. Southern California Edison has buried over 40 percent of its high-risk distribution lines, and 63 percent of San Diego Gas & Electric’s regional distribution system is now underground.

Still, the exorbitant cost of underground construction leaves much of the U.S. power grid’s 8.8 million kilometers of distribution lines and 180 million utility poles exposed to tree strikes, flying debris, and other opportunities for sparks to cascade into a multi-acre blaze. Recognizing the need for cost-effective undergrounding solutions, the U.S. Department of Energy launched GOPHURRS in January 2024. The three-year program pours $34 million into 12 projects to develop more efficient undergrounding technologies that minimize surface disruptions while supporting medium-voltage power lines.

One recipient, Case Western Reserve University in Cleveland, Ohio, is building a self-propelled robotic sleeve that mimics earthworms’ characteristic peristaltic movement to advance through soil. Awarded $2 million, Case’s “peristaltic conduit” concept aims to navigate underground more precisely and reduce the risk of unintended damage, such as breaking an existing pipe.

Why Is Undergrounding So Expensive?

Despite its benefits, undergrounding remains cost-prohibitive at US $1.1 to $3.7 million per kilometer ($1.8 to $6 million per mile) for distribution lines and $3.7 to $62 million per kilometer for transmission lines, according to estimates from California’s three largest utilities. That’s significantly more than overhead infrastructure, which costs $394,000 to $472,000 per kilometer for distribution lines and $621,000 to $6.83 million per kilometer for transmission lines.
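
To put those per-kilometer figures in perspective, a quick back-of-the-envelope comparison using the midpoints of the ranges quoted above shows how fast the gap compounds over even a modest, hypothetical project:

    # Rough cost comparison for a hypothetical 100-kilometer distribution project,
    # using the midpoints of the California utilities' ranges quoted above
    # (figures in millions of US dollars per kilometer).
    underground_per_km = (1.1 + 3.7) / 2       # ~2.4
    overhead_per_km = (0.394 + 0.472) / 2      # ~0.43

    length_km = 100  # hypothetical project length

    underground_total = underground_per_km * length_km
    overhead_total = overhead_per_km * length_km

    print(f"Underground: about ${underground_total:.0f} million")            # ~$240 million
    print(f"Overhead:    about ${overhead_total:.0f} million")               # ~$43 million
    print(f"Cost ratio:  about {underground_total / overhead_total:.1f}x")   # ~5.5x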

The most popular method of undergrounding power lines, called open trenching, requires extensive excavation, conduit installation, and backfilling, making it expensive and logistically complicated. And it’s often impractical in dense urban areas where underground infrastructure is already congested with plumbing, fiber optics, and other utilities.

Trenchless methods like horizontal directional drilling (HDD) provide a less invasive way to get power lines under roads and railways by creating a controlled, curved bore path that starts at a shallow entry angle, deepens to pass obstacles, and resurfaces at a precise exit point. But HDD is even more expensive than open trenching due to specialized equipment, complex workflows, and the risk of damaging existing infrastructure.

Given the steep costs, utilities often prioritize cheaper fire mitigation strategies like trimming back nearby trees and other plants, using insulated conductors, and stepping up routine inspections and repairs. While not as effective as undergrounding, these measures have been the go-to option, largely because faster, cheaper underground construction methods don’t yet exist.

Ted Kury, director of energy studies at the University of Florida’s Public Utility Research Center, who has extensively studied the costs and benefits of undergrounding, says technologies implementing directional drilling improvements “could make undergrounding more practical in urban or densely populated areas where open trenching, and its attendant disruptions to the surrounding infrastructure, could result in untenable costs.”

Earthworm-Inspired Robotics for Power Lines

In Case’s worm-inspired robot, alternating sections are designed to expand and retract to anchor and advance the machine. This compliant anchor-and-advance motion increases precision and reduces the risk of striking and breaking pipes. Conventional methods require large turning radii exceeding 300 meters, but Case’s 1.5-meter turning radius will enable the device to flexibly maneuver around existing infrastructure.
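
The difference those turning radii make can be put in concrete terms with a little circle geometry. The sketch below is a simplified constant-radius calculation, not the team’s planning software: it compares how much bore length each radius needs to complete a 90-degree turn, and how far each can deviate sideways within 5 meters of travel.

    # Simple constant-radius steering geometry for the two turning radii quoted above.
    import math

    def arc_length_for_turn(radius_m: float, turn_deg: float) -> float:
        """Bore length needed to change heading by turn_deg at a constant turning radius."""
        return radius_m * math.radians(turn_deg)

    def lateral_offset(radius_m: float, bore_length_m: float) -> float:
        """Sideways deviation after traveling bore_length_m along a constant-radius arc."""
        angle = min(bore_length_m / radius_m, math.pi / 2)   # cap at a quarter turn
        return radius_m * (1 - math.cos(angle))

    for radius in (300.0, 1.5):   # conventional HDD vs. Case's robot
        print(f"radius {radius:5.1f} m: a 90-degree turn needs "
              f"{arc_length_for_turn(radius, 90):5.1f} m of bore; "
              f"sideways offset within 5 m of travel is {lateral_offset(radius, 5):4.2f} m")

    # radius 300.0 m: a 90-degree turn needs 471.2 m of bore; sideways offset within 5 m of travel is 0.04 m
    # radius   1.5 m: a 90-degree turn needs   2.4 m of bore; sideways offset within 5 m of travel is 1.50 m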

“We use actuators to change the length and diameter of each segment,” says Kathryn Daltorio, an associate engineering professor and co-director of Case’s Biologically-Inspired Robotics Lab. “The short and fat segments press against the walls of the burrow, then they anchor so the thin segments can advance forward. If two segments aren’t touching the ground but they’re changing length at the same time, your anchors don’t slip and you advance forward.”

Daltorio and her colleagues have studied earthworm-inspired robotics for over a decade, originally envisioning the technology for surgical and confined-space applications before recognizing its potential for undergrounding power lines.

Case Western Reserve University’s worm-like digging robot can turn more sharply than other drilling techniques to avoid obstacles. Kathryn Daltorio/Case School of Engineering

Traditional HDD relies on pushing a drill head through soil, requiring more force as the bore length grows. Case’s drilling concept generates the force needed for the tip from the peristaltic segments within the borehole. As the path gets longer, only the front segments dig deeper. “If the robot hits something, operators can pull back and change directions, burrowing along the way to complete the circuit by changing the depth,” Daltorio says.

Another key difference from HDD is integrated conduit installation. In HDD, the drill goes through the entire length first, and then the power conduit is pulled through. Case’s peristaltic robot lays the conduit while traveling, reducing the overall installation time.

Advancements in Burrowing Precision

“The peristaltic conduit approach is fascinating [and] certainly seems to be addressing concerns regarding the sheer variety of underground obstacles,” says the University of Florida’s Kury. However, he highlights a larger concern with undergrounding innovations—not just Case’s—in meeting a constantly evolving environment. Today’s underground will look very different in 10 years, as soil profiles shift, trees grow, animals tunnel, and people dig and build. “Underground cables will live for decades, and the sustainability of these technologies depends on how they adapt to this changing structure,” Kury added.

Daltorio notes that current undergrounding practices involve pouring concrete around the lines before backfilling to protect them from future excavation, a challenge for existing trenchless methods. But Case’s project brings two major benefits. First, by better understanding borehole design, engineers have more flexibility in choosing conduit materials to match the standards for particular environments. Also, advancements in burrowing precision could minimize the likelihood of future disruptions from human activities.

The research team is exploring different ways to reinforce the digging robot’s exterior while it’s underground. Olivia Gatchall

Daltorio’s team is collaborating with several partners, with Auburn University in Alabama contributing geotechnical expertise, Stony Brook University in New York running the modeling, and the University of Texas at Austin studying sediment interactions.

The project aims to halve undergrounding costs, though Daltorio cautions that it’s too early to commit to a specific cost model. Still, the time-saving potential appears promising. “With conventional approaches, planning, permitting and scheduling can take months,” Daltorio says. “By simplifying the process, it might be a few inspections at the endpoints, a few days of autonomous burrowing with minimal disruption to traffic above, followed by a few days of cleaning, splicing, and inspection.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Last year, we unveiled the new Atlas—faster, stronger, more compact, and less messy. We’re designing the world’s most dynamic humanoid robot to do anything and everything, but we get there one step at a time. Our first task is part sequencing, a common logistics task in automotive manufacturing. Discover why we started with sequencing, how we are solving hard problems, and how we’re delivering a humanoid robot with real value.

My favorite part is 1:40, where Atlas squats down to pick a part up off the ground.

[ Boston Dynamics ]

I’m mostly impressed that making contact with that stick doesn’t cause the robot to fall over.

[ Unitree ]

Professor Patrícia Alves-Oliveira is studying authenticity of artworks co-created by an artist and a robot. Her research lab, Robot Studio, is developing methods to authenticate artwork by analyzing their entire creative process. This is accomplished by using the artist’s biometrics as well as the process of artwork creation, from the first brushstroke to the final painting. This work aims to bring ownership back to artists in the age of generative AI.

[ Robot Studio ] at [ University of Michigan ]

Hard to believe that RoMeLa has been developing humanoid robots for 20 (!) years. Here’s to 20 more!

[ RoMeLa ] at [ University of California Los Angeles ]

In this demo, Reachy 2 autonomously sorts healthy and unhealthy foods. No machine learning, no pre-trained AI—just real-time object detection!

[ Pollen ]

Biological snakes achieve high mobility with numerous joints, inspiring snake-like robots for rescue and inspection. However, conventional designs feature a limited number of joints. This paper presents an underactuated snake robot consisting of many passive links that can dynamically change its joint coupling configuration by repositioning motor-driven joint units along internal rack gears. Furthermore, a soft robot skin wirelessly powers the units, eliminating wire tangling and disconnection risks.

[ Paper ]

Thanks, Ayato!

Tech United Eindhoven is working on quadrupedal soccer robots, which should be fun.

[ Tech United ]

Autonomous manipulation in everyday tasks requires flexible action generation to handle complex, diverse real-world environments, such as objects with varying hardness and softness. Imitation Learning (IL) enables robots to learn complex tasks from expert demonstrations. However, a lot of existing methods rely on position/unilateral control, leaving challenges in tasks that require force information/control, like carefully grasping fragile or varying-hardness objects. To address these challenges, we introduce Bilateral Control-Based Imitation Learning via Action Chunking with Transformers (Bi-ACT) and “A” “L”ow-cost “P”hysical “Ha”rdware Considering Diverse Motor Control Modes for Research in Everyday Bimanual Robotic Manipulation (ALPHA-α).

[ Alpha-Biact ]

Thanks, Masato!

Powered by UBTECH’s revolutionary framework “BrainNet”, a team of Walker S1 humanoid robots work together to master complex tasks at Zeekr’s Smart Factory! Teamwork makes the dream of robots work.

[ UBTECH ]

Personal mobile robotic assistants are expected to find wide applications in industry and healthcare. However, manually steering a robot while in motion requires significant concentration from the operator, especially in tight or crowded spaces. This work presents a virtual leash with which a robot can naturally follow an operator. We successfully validate on the ANYmal platform the robustness and performance of our entire pipeline in real-world experiments.

[ ETH Zurich Robotic Systems Lab ]

I do not ever want to inspect a wind turbine blade from the inside.

[ Flyability ]

Sometimes you can learn more about a robot from an instructional unboxing video than from a fancy demo.

[ DEEP Robotics ]

Researchers at Penn Engineering have discovered that certain features of AI-governed robots carry previously unidentified security vulnerabilities and weaknesses. Funded by the National Science Foundation and the Army Research Laboratory, the research aims to address these emerging vulnerabilities to ensure the safe deployment of large language models (LLMs) in robotics.

[ RoboPAIR ]

ReachBot is a joint project between Stanford and NASA to explore a new approach to mobility in challenging environments such as Martian caves. It consists of a compact robot body with very long extending arms, based on booms used for extendable antennas. The booms unroll from a coil and can extend many meters in low gravity. In this talk I will introduce the ReachBot design and motion planning considerations, report on a field test with a single ReachBot arm in a lava tube in the Mojave Desert, and discuss future plans, which include the possibility of mounting one or more ReachBot arms equipped with wrists and grippers on a mobile platform – such as ANYmal.

[ ReachBot ]



Although they’re a staple of sci-fi movies and conspiracy theories, in real life, tiny flying microbots—weighed down by batteries and electronics—have struggled to get very far. But a new combination of circuits and lightweight solid-state batteries called a “flying batteries” topology could let these bots really take off, potentially powering microbots for hours from a system that weighs milligrams.

Microbots could be an important technology to find people buried in rubble or scout ahead in other dangerous situations. But they’re a difficult engineering challenge, says Patrick Mercier, an electrical and computer engineering professor at the University of California, San Diego. Mercier’s student Zixiao Lin described the new circuit last month at the IEEE International Solid State Circuits Conference (ISSCC). “You have these really tiny robots, and you want them to last as long as possible in the field,” Mercier says. “The best way to do that is to use lithium-ion batteries, because they have the best energy density. But there’s this fundamental problem, where the actuators need much higher voltage than what the battery is capable of providing.”

A lithium cell can provide about 4 volts, but piezoelectric actuators for microbots need tens to hundreds of volts, explains Mercier. Researchers, including Mercier’s own group, have developed circuits such as boost converters to pump up the voltage. But because they need relatively large inductors or a bunch of capacitors, these add too much mass and volume, typically taking up about as much room as the battery itself.

A new kind of solid-state battery, developed at the French national electronics laboratory CEA-Leti, offered a potential solution. The batteries are a thin-film stack of material, including lithium cobalt oxide and lithium phosphorus oxynitride, made using semiconductor processing technology, and they can be diced up into tiny cells. A 0.33-cubic-millimeter, 0.8-milligram cell can store 20 microampere-hours of charge, or about 60 ampere-hours per liter. (Lithium-ion earbud batteries provide more than 100 Ah/L, but are about 1,000 times as large.) A CEA-Leti spinoff based on the technology, Inject Power, in Grenoble, France, is gearing up to begin volume manufacturing in late 2026.
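
That volumetric figure follows directly from the cell’s quoted capacity and volume; a quick check of the arithmetic:

    # Sanity check of the quoted volumetric capacity for the CEA-Leti microbattery cell.
    capacity_ah = 20e-6      # 20 microampere-hours
    volume_l = 0.33e-6       # 0.33 cubic millimeters, expressed in liters

    print(f"{capacity_ah / volume_l:.0f} Ah/L")   # ~61 Ah/L, consistent with the ~60 Ah/L above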

Stacking Batteries on the Fly

Because a solid-state battery can be diced up into tiny cells, researchers thought that they could achieve high voltages using a circuit that needs no capacitors or inductors. Instead, the circuit actively rearranges the connections among many tiny batteries, moving them from parallel to series and back again.

Imagine a microdrone that moves by flapping wings attached to a piezoelectric actuator. On its circuit board are a dozen or so of the solid-state microbatteries. Each battery is part of a circuit consisting of four transistors. These act as switches that can dynamically change the connection to that battery’s neighbor so that it is either parallel, so they share the same voltage, or serial, so their voltages are added.

At the start, all the batteries are in parallel, delivering a voltage that is nowhere near enough to trigger the actuator. The 2-square-millimeter IC the UCSD team built then begins opening and closing the transistor switches. This rearranges the connections between the cells so that first two cells are connected serially, then three, then four, and so on. In a few hundredths of a second, the batteries are all connected in series, and the voltage has piled so much charge onto the actuator that it snaps the microbot’s wings down. The IC then unwinds the process, making the batteries parallel again, one at a time.
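
A minimal numerical sketch of that stacking sequence is below. It assumes a dozen ideal 4-volt cells and ignores internal resistance, switching losses, and the charge actually delivered to the actuator; it illustrates the staircase concept, not the UCSD chip’s switching logic.

    # Idealized voltage staircase from the series/parallel reconfiguration described above.
    CELL_VOLTAGE = 4.0   # assumed ideal cell voltage
    NUM_CELLS = 12       # "a dozen or so" cells, per the description above

    def staircase() -> list:
        """Pack voltage as cells are stacked into series one at a time, then unstacked."""
        up = [k * CELL_VOLTAGE for k in range(1, NUM_CELLS + 1)]   # 4, 8, ..., 48 V
        down = up[-2::-1]                                          # 44, 40, ..., 4 V
        return up + down

    voltages = staircase()
    print(f"peak voltage: {max(voltages)} V")   # 48.0 V with these assumed cells
    print(voltages)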

The integrated circuit in the “flying battery” has a total area of 2 square millimeters. Patrick Mercier

Adiabatic Charging

Why not just connect every battery in series at once instead of going through this ramping up and down scheme? In a word, efficiency.

As long as the battery serialization and parallelization is done at a low-enough frequency, the system is charging adiabatically. That is, its power losses are minimized.
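
The efficiency argument can be made concrete with the textbook capacitor-charging result: charging a capacitance C to voltage V in a single step through any resistive path dissipates CV²/2, while doing it in N equal voltage steps dissipates only about CV²/(2N). The capacitance and voltage below are illustrative assumptions, not measurements from the UCSD chip.

    # Stepwise ("adiabatic") charging loss for an actuator modeled as a capacitor.
    # Textbook idealization: each step of size dV dissipates C*dV^2/2, regardless of resistance.
    C = 10e-9   # assumed 10-nanofarad actuator capacitance (illustrative value)
    V = 48.0    # assumed target drive voltage

    def charging_loss_joules(num_steps: int) -> float:
        """Energy dissipated charging C to V in num_steps equal voltage steps."""
        dv = V / num_steps
        return num_steps * 0.5 * C * dv ** 2

    stored = 0.5 * C * V ** 2   # energy actually delivered to the actuator
    for n in (1, 4, 12):
        print(f"{n:2d} step(s): loss = {charging_loss_joules(n) * 1e6:6.3f} microjoules "
              f"(vs. {stored * 1e6:.2f} microjoules stored)")

    #  1 step(s): loss = 11.520 microjoules (vs. 11.52 microjoules stored)
    #  4 step(s): loss =  2.880 microjoules (vs. 11.52 microjoules stored)
    # 12 step(s): loss =  0.960 microjoules (vs. 11.52 microjoules stored)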

But it’s what happens after the actuator triggers “where the real magic comes in,” says Mercier. The piezoelectric actuator in the circuit acts like a capacitor, storing energy. “Just like you have regenerative braking in a car, we can recover some of the energy that we stored in this actuator.” As each battery is unstacked, the remaining energy storage system has a lower voltage than the actuator, so some charge flows back into the batteries.

The UCSD team actually tested two varieties of solid-state microbatteries—a 1.5-volt ceramic version from Tokyo-based TDK (CeraCharge 1704-SSB) and a 4-V custom design from CEA-Leti. With 1.6 grams of TDK cells, the circuit reached 56.1 volts and delivered a power density of 79 milliwatts per gram, but with 0.014 grams of the custom storage, it maxed out at 68 V and demonstrated a power density of 4,500 mW/g.
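
It is worth noting what those per-gram figures imply in absolute terms: the custom cells’ power density is far higher, but because so little battery mass is on board, the total output works out to the same order of magnitude.

    # Total output power implied by the reported per-gram figures.
    tdk_total_mw = 79 * 1.6         # 79 mW/g across 1.6 g of TDK cells      -> ~126 mW
    leti_total_mw = 4500 * 0.014    # 4,500 mW/g across 0.014 g of CEA-Leti cells -> 63 mW

    print(f"TDK configuration:      {tdk_total_mw:.0f} mW total")
    print(f"CEA-Leti configuration: {leti_total_mw:.0f} mW total")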

Mercier plans to test the system with robotics partners while his team and CEA-Leti work to improve the flying batteries system’s packaging, miniaturization, and other properties. One important characteristic that needs work is the internal resistance of the microbatteries. “The challenge there is that the more you stack, the higher the series resistance is, and therefore the lower the frequency we can operate the system,” he says.

Nevertheless, Mercier seems bullish on flying batteries’ chances of keeping microbots aloft. “Adiabatic charging with charge recovery and no passives: Those are two wins that help increase flight time.”



Salto has been one of our favorite robots since we were first introduced to it in 2016 as a project out of Ron Fearing’s lab at UC Berkeley. The palm-sized spring-loaded jumping robot has gone from barely being able to chain together a few open-loop jumps to mastering landings, bouncing around outside, powering through obstacle courses, and occasionally exploding.

What’s quite unusual about Salto is that it’s still an active research project—nine years is an astonishingly long lifetime for any robot, especially one without any immediately obvious practical applications. But one of Salto’s original creators, Justin Yim (who is now a professor at the University of Illinois), has found a niche where Salto might be able to do what no other robot can: mid-air sampling of the water geysering out of the frigid surface of Enceladus, a moon of Saturn.

What makes Enceladus so interesting is that it’s completely covered in a 40-kilometer-thick sheet of ice, and underneath that ice is a 10-kilometer-deep global ocean. And within that ocean can be found—we know not what. Diving in that buried ocean is a problem that robots may be able to solve at some point, but in the near(er) term, Enceladus’ south pole is home to over a hundred cryovolcanoes that spew plumes of water vapor and all kinds of other stuff right out into space, offering a sampling opportunity to any robot that can get close enough for a sip.

“We can cover large distances, we can get over obstacles, we don’t require an atmosphere, and we don’t pollute anything.” —Justin Yim, University of Illinois

Yim, along with another Salto veteran, Ethan Schaler (now at JPL), has been awarded funding through NASA’s Innovative Advanced Concepts (NIAC) program to turn Salto into a robot that can perform “Legged Exploration Across the Plume,” or in an only moderately strained backronym, LEAP. LEAP would be a space-ified version of Salto with a couple of major modifications allowing it to operate in a freezing, airless, low-gravity environment.

Exploring Enceladus’ Challenging Terrain

As best as we can make out from images taken during Cassini flybys, the surface of Enceladus is unfriendly to traditional rovers, covered in ridges and fissures, although we don’t have very much information on the exact properties of the terrain. There’s also essentially no atmosphere, meaning that you can’t fly using aerodynamics, and if you use rockets to fly instead, you run the risk of your exhaust contaminating any samples that you take.

“This doesn’t leave us with a whole lot of options for getting around, but one that seems like it might be particularly suitable is jumping,” Yim tells us. “We can cover large distances, we can get over obstacles, we don’t require an atmosphere, and we don’t pollute anything.” And with Enceladus’ gravity being just 1/80th that of Earth, Salto’s meter-high jump on Earth would enable it to travel a hundred meters or so on Enceladus, taking samples as it soars through cryovolcano plumes.
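
That estimate follows from simple ballistics: for the same takeoff speed, both the peak height and the horizontal range of a jump scale as 1/g, so Enceladus’s weak gravity multiplies Salto’s reach by roughly 80. A quick check, treating the jump as a simple projectile and ignoring launch angle and other details:

    # Ballistic scaling of Salto's jump from Earth to Enceladus.
    # For the same takeoff speed v, peak height is h = v^2 / (2 g), so h scales as 1/g.
    import math

    g_earth = 9.81              # m/s^2
    g_enceladus = g_earth / 80  # using the roughly 1/80th figure quoted above

    earth_jump_height_m = 1.0   # Salto's roughly meter-high jump on Earth
    takeoff_speed = math.sqrt(2 * g_earth * earth_jump_height_m)   # ~4.4 m/s

    enceladus_height_m = takeoff_speed ** 2 / (2 * g_enceladus)
    print(f"takeoff speed: {takeoff_speed:.1f} m/s")
    print(f"peak height on Enceladus: {enceladus_height_m:.0f} m")   # ~80 m; horizontal range scales the same way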

The current version of Salto does require an atmosphere, because it uses a pair of propellers as tiny thrusters to control yaw and roll. On LEAP, those thrusters would be replaced with an angled pair of reaction wheels instead. To deal with the terrain, the robot will also likely need a foot that can handle jumping from (and landing on) surfaces composed of granular ice particles.

LEAP is designed to jump through Enceladus’ many plumes to collect samples, and use the moon’s terrain to direct subsequent jumps. NASA/Justin Yim

While the vision is for LEAP to jump continuously, bouncing over the surface and through plumes in a controlled series of hops, sooner or later it’s going to have a bad landing, and the robot has to be prepared for that. “I think one of the biggest new technological developments is going to be multimodal locomotion,” explains Yim. “Specifically, we’d like to have a robust ability to handle falls.” The reaction wheels can help with this in two ways: they offer some protection by acting like a shell around the robot, and they can also operate as a regular pair of wheels, allowing the robot to roll around on the ground a little bit. “With some maneuvers that we’re experimenting with now, the reaction wheels might also be able to help the robot to pop itself back upright so that it can start jumping again after it falls over,” Yim says.

A NIAC project like this is about as early-stage as it gets for something like LEAP, and an Enceladus mission is very far away as measured by almost every metric—space, time, funding, policy, you name it. Long term, the idea with LEAP is that it could be an add-on to a mission concept called the Enceladus Orbilander. This US $2.5 billion spacecraft would launch sometime in the 2030s, and spend about a dozen years getting to Saturn and entering orbit around Enceladus. After 1.5 years in orbit, the spacecraft would land on the surface, and spend a further 2 years looking for biosignatures. The Orbilander itself would be stationary, Yim explains, “so having this robotic mobility solution would be a great way to do expanded exploration of Enceladus, getting really long distance coverage to collect water samples from plumes on different areas of the surface.”

LEAP has been funded through a nine-month Phase 1 study that begins this April. While the JPL team investigates ice-foot interactions and tries to figure out how to keep the robot from freezing to death, at the University of Illinois Yim will be upgrading Salto with self-righting capability. Honestly, it’s exciting to think that after so many years, Salto may have finally found an application where it offers the actual best solution for solving this particular problem of low-gravity mobility for science.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

A bioinspired robot developed at EPFL can change shape to alter its own physical properties in response to its environment, resulting in a robust and efficient autonomous vehicle as well as a fresh approach to robotic locomotion.

[ Science Robotics ] via [ EPFL ]

A robot CAN get up this way, but SHOULD a robot get up this way?

[ University of Illinois Urbana-Champaign ]

I’m impressed with the capabilities here, but not the use case. There are already automated systems that do this much faster, much more reliably, and almost certainly much more cheaply. So, probably best to think of this as more of a technology demo than anything with commercial potential.

[ Figure ]

NEO Gamma is the next generation of home humanoids designed and engineered by 1X Technologies. The Gamma series includes improvements across NEO’s hardware and AI, featuring a new design that is deeply considerate of life at home. The future of Home Humanoids is here.

You all know by now not to take this video too seriously, but I will say that an advantage of building a robot like this for the home is that realistically it can spend most of its time sitting down and (presumably) charging.

[ 1X Technologies ]

This video compilation showcases novel aerial and underwater drone platforms and an ultra-quiet electric vertical takeoff and landing (eVTOL) propeller. These technologies were developed by the Advanced Vertical Flight Laboratory (AVFL) at Texas A&M University and Harmony Aeronautics, an AVFL spin-off company.

[ AVFL ]

Yes! More research like this please; legged robots (of all sizes) are TOO STOMPY.

[ ETH Zurich ]

Robosquirrel!

[ BBC ] via [ Laughing Squid ]

By watching their own motions with a camera, robots can teach themselves about the structure of their own bodies and how they move, a new study from researchers at Columbia Engineering now reveals. Equipped with this knowledge, the robots could not only plan their own actions, but also overcome damage to their bodies.

[ Columbia University, School of Engineering and Applied Science ]

If I were asking my robot to do a front flip for the first(ish) time, my face would probably look like the poor guy’s at 0:25. But it worked!

[ EngineAI ]

*We kindly request that all users refrain from making any dangerous modifications or using the robots in a hazardous manner.

A hazardous manner? Like teaching it martial arts...?

[ Unitree ]

Explore SLAMSpoof—a cutting-edge project by Keio-CSG that demonstrates how LiDAR spoofing attacks can compromise SLAM systems. In this video, we explore how spoofing attacks can compromise the integrity of SLAM systems, review the underlying methodology, and discuss the potential security implications for robotics and autonomous navigation. Whether you’re a robotics enthusiast, a security researcher, or simply curious about emerging technologies, this video offers valuable insights into both the risks and the innovations in the field.

[ SLAMSpoof ]

Thanks, Kentaro!

Sanctuary AI, a company developing physical AI for general purpose robots, announced the integration of new tactile sensor technology into its Phoenix general purpose robots. The integration enables teleoperation pilots to more effectively leverage the dexterity capabilities of general purpose robots to achieve complex, touch-driven tasks with precision and accuracy.

[ Sanctuary AI ]

I don’t know whether it’s the shape or the noise or what, but this robot pleases me.

[ University of Pennsylvania, Sung Robotics Lab ]

Check out the top features of the new Husky A300 - the next evolution of our rugged and customizable mobile robotic platform. Husky A300 offers superior performance, durability, and flexibility, empowering robotics researchers and innovators to tackle the most complex challenges in demanding environments.

[ Clearpath Robotics ]

The ExoMars Rosalind Franklin rover will drill deeper than any other mission has ever attempted on the Red Planet. Rosalind Franklin will be the first rover to reach a depth of up to two meters below the surface, acquiring samples that have been protected from harsh surface radiation and extreme temperatures.

[ European Space Agency ]

AI has been improving by leaps and bounds in recent years, and a string of new models can generate answers that almost feel as if they came from a person reasoning through a problem. But is AI actually close to reasoning like humans can? IBM distinguished scientist Murray Campbell chats with IBM Fellow Francesca Rossi about her time as president of the Association for the Advancement of Artificial Intelligence (AAAI). They discuss the state of AI, what modern reasoning models are actually doing, and whether we’ll see models that reason like we do.

[ IBM Research ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
ICRA 2025: 19–23 May 2025, ATLANTA, GA.
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

We’re introducing Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.

This is moderately impressive; my favorite part is probably the handoffs and that extra little bit of HRI with what we’d call eye contact if these robots had faces. But keep in mind that you’re looking at close to best case for robotic manipulation, and that if the robots had been given the bag instead of well-spaced objects on a single color background, or if the fridge had a normal human amount of stuff in it, they might be having a much different time of it. Also, is it just me, or is the sound on this video very weird? Like, some things make noise, some things don’t, and the robots themselves occasionally sound more like someone just added in some “soft actuator sound” or something. Also also, I’m of a suspicious nature, and when there is an abrupt cut between “robot grasps door” and “robot opens door,” I assume the worst.

[ Figure ]

Researchers at EPFL have developed a highly agile flat swimming robot. This robot is smaller than a credit card, and propels itself along the water surface using a pair of undulating soft fins. The fins are driven at resonance by artificial muscles, allowing the robot to perform complex maneuvers. In the future, this robot could be used to monitor water quality or to measure fertilizer concentrations in rice fields.

[ Paper ] via [ Science Robotics ]

I don’t know about you, but I always dance better when getting beaten with a stick.

[ Unitree Robotics ]

This is big news, people: Sweet Bite Ham Ham, one of the greatest and most useless robots of all time, has a new treat.

All yours for about US $100, overseas shipping included.

[ Ham Ham ] via [ Robotstart ]

MagicLab has announced the launch of its first generation self-developed dexterous hand product, the MagicHand S01. The MagicHand S01 has 11 degrees of freedom in a single hand. The MagicHand S01 has a hand load capacity of up to 5 kilograms, and in work environments, can carry loads of over 20 kilograms.

[ MagicLab ]

Thanks, Ni Tao!

No, I’m not creeped out at all, why?

[ Clone Robotics ]

Happy 40th Birthday to the MIT Media Lab!

Since 1985, the MIT Media Lab has provided a home for interdisciplinary research, transformative technologies, and innovative approaches to solving some of humanity’s greatest challenges. As we celebrate our 40th anniversary year, we’re looking ahead to decades more of imagining, designing, and inventing a future in which everyone has the opportunity to flourish.

[ MIT Media Lab ]

While most soft pneumatic grippers that operate with a single control parameter (such as pressure or airflow) are limited to a single grasping modality, this article introduces a new method for incorporating multiple grasping modalities into vacuum-driven soft grippers. This is achieved by combining stiffness manipulation with a bistable mechanism. Adjusting the airflow tunes the energy barrier of the bistable mechanism, enabling changes in triggering sensitivity and allowing swift transitions between grasping modes. This results in an exceptionally versatile gripper, capable of handling a diverse range of objects with varying sizes, shapes, stiffness, and roughness, controlled by a single parameter, airflow, and its interaction with objects.

[ Paper ] via [ BruBotics ]

Thanks, Bram!

In this article, we present a design concept called Leafbot, in which a monolithic soft body is incorporated with a vibration-driven mechanism. This investigation aims to build a foundation for further terradynamics studies of vibration-driven soft robots in more complicated and confined environments, with potential applications in inspection tasks.

[ Paper ] via [ IEEE Transactions on Robotics ]

We present a hybrid aerial-ground robot that combines the versatility of a quadcopter with enhanced terrestrial mobility. The vehicle features a passive, reconfigurable single wheeled leg, enabling seamless transitions between flight and two ground modes: a stable stance and a dynamic cruising configuration.

[ Robotics and Intelligent Systems Laboratory ]

I’m not sure I’ve ever seen this trick performed by a robot with soft fingers before.

[ Paper ]

There are a lot of robots involved in car manufacturing. Like, a lot.

[ Kawasaki Robotics ]

Steve Willits shows us some recent autonomous drone work being done at the AirLab at CMU’s Robotics Institute.

[ Carnegie Mellon University Robotics Institute ]

Somebody’s got to test all those luxury handbags and purses. And by somebody, I mean somerobot.

[ Qb Robotics ]

Do not trust people named Evan.

[ Tufts University Human-Robot Interaction Lab ]

Meet the Mind: MIT Professor Andreea Bobu.

[ MIT ]



About a year ago, Boston Dynamics released a research version of its Spot quadruped robot, which comes with a low-level application programming interface (API) that allows direct control of Spot’s joints. Even back then, the rumor was that this API unlocked some significant performance improvements on Spot, including a much faster running speed. That rumor came from the Robotics and AI (RAI) Institute, formerly The AI Institute, formerly the Boston Dynamics AI Institute, and if you were at Marc Raibert’s talk at the ICRA@40 conference in Rotterdam last fall, you already know that it turned out not to be a rumor at all.

Today, we’re able to share some of the work that the RAI Institute has been doing to apply reality-grounded reinforcement learning techniques to enable much higher performance from Spot. The same techniques can also help highly dynamic robots operate robustly, and there’s a brand new hardware platform that shows this off: an autonomous bicycle that can jump.

See Spot Run

This video shows Spot running at a sustained speed of 5.2 meters per second (11.6 miles per hour). Out of the box, Spot’s top speed is 1.6 m/s, meaning that RAI’s Spot has more than tripled (!) the quadruped’s factory speed.

If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like how a real dog runs at all. “The gait is not biological, but the robot isn’t biological,” explains Farbod Farshidian, roboticist at the RAI Institute. “Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.”

The best way Farshidian can categorize how Spot is moving is that it’s somewhat similar to a trotting gait, except with an added flight phase (with all four feet off the ground at once) that technically turns it into a run. This flight phase is necessary, Farshidian says, because the robot needs that time to successively pull its feet forward fast enough to maintain its speed. This is a “discovered behavior,” in that the robot was not explicitly programmed to “run,” but rather was just required to find the best way of moving as fast as possible.

Reinforcement Learning Versus Model Predictive Control

The Spot controller that ships with the robot when you buy it from Boston Dynamics is based on model predictive control (MPC), which involves creating a software model that approximates the dynamics of the robot as best you can, and then solving, in real time, an optimization problem for whatever task you want the robot to do. It’s a very predictable and reliable method for controlling a robot, but it’s also somewhat rigid, because that original software model won’t be close enough to reality to let you really push the limits of the robot. And if you try to say, “Okay, I’m just going to make a superdetailed software model of my robot and push the limits that way,” you get stuck, because the optimization problem still has to be solved in real time, and the more complex the model is, the harder that becomes. Reinforcement learning (RL), on the other hand, learns offline. You can use as complex a model as you want, and then take all the time you need in simulation to train a control policy that can then be run very efficiently on the robot.
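To make that tradeoff concrete, here’s a toy sketch of the offline-train, online-execute split (our own illustration, not RAI’s code): a crude random search over a linear feedback policy stands in for the RL algorithm, a cheap simulate() function stands in for a detailed simulator, and the resulting policy is trivial to evaluate in real time on the robot.

import numpy as np

def simulate(gain, steps=200, dt=0.01):
    """Toy stand-in for a detailed simulator: cheap to call offline, many times."""
    x, v = 1.0, 0.0                          # state: position error and velocity
    cost = 0.0
    for _ in range(steps):
        u = float(-gain @ np.array([x, v]))  # linear feedback "policy"
        v += (u - 0.5 * v) * dt              # crude damped second-order dynamics
        x += v * dt
        cost += x * x + 0.01 * u * u
    return cost

# Offline "training": spend as much simulation time as needed to find a good
# policy (random search here stands in for a real RL algorithm such as PPO).
rng = np.random.default_rng(0)
best_gain, best_cost = None, float("inf")
for _ in range(500):
    candidate = rng.uniform(0.0, 20.0, size=2)
    c = simulate(candidate)
    if c < best_cost:
        best_gain, best_cost = candidate, c

# Online execution on the robot is then just a cheap function evaluation,
# with no optimization problem to solve in real time.
def policy(state):
    return float(-best_gain @ np.asarray(state))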

In simulation, a couple of Spots (or hundreds of Spots) can be trained in parallel for robust real-world performance. Robotics and AI Institute

In the example of Spot’s top speed, it’s simply not possible to model every last detail for all of the robot’s actuators within a model-based control system that would run in real time on the robot. So instead, simplified (and typically very conservative) assumptions are made about what the actuators are actually doing so that you can expect safe and reliable performance.

Farshidian explains that these assumptions make it difficult to develop a useful understanding of what performance limitations actually are. “Many people in robotics know that one of the limitations of running fast is that you’re going to hit the torque and velocity maximum of your actuation system. So, people try to model that using the data sheets of the actuators. For us, the question that we wanted to answer was whether there might exist some other phenomena that was actually limiting performance.”

Searching for these other phenomena involved bringing new data into the reinforcement learning pipeline, like detailed actuator models learned from the real-world performance of the robot. In Spot’s case, that provided the answer to high-speed running. It turned out that what was limiting Spot’s speed was not the actuators themselves, nor any of the robot’s kinematics: It was simply the batteries not being able to supply enough power. “This was a surprise for me,” Farshidian says, “because I thought we were going to hit the actuator limits first.”
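As a purely hypothetical illustration of how a battery limit could show up in that pipeline, a simulated rollout might cap the total electrical power the policy is allowed to draw, so that training discovers gaits that live within the pack’s budget. The numbers, efficiency figure, and function below are made up; RAI’s actual actuator and battery models are not public.

import numpy as np

PACK_POWER_MAX = 1000.0   # total battery power budget in watts (made-up number)

def apply_power_limit(torques, joint_speeds, motor_efficiency=0.85):
    """Uniformly derate commanded torques if the battery power budget is exceeded."""
    mech_power = np.abs(np.asarray(torques) * np.asarray(joint_speeds))  # per joint
    elec_power = mech_power.sum() / motor_efficiency   # crude electrical estimate
    if elec_power <= PACK_POWER_MAX:
        return np.asarray(torques)
    return np.asarray(torques) * (PACK_POWER_MAX / elec_power)  # scale all joints down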

Spot’s power system is complex enough that there’s likely some additional wiggle room, and Farshidian says the only thing that prevented them from pushing Spot’s top speed past 5.2 m/s is that they didn’t have access to the battery voltages, so they weren’t able to incorporate that real-world data into their RL model. “If we had beefier batteries on there, we could have run faster. And if you model that phenomena as well in our simulator, I’m sure that we can push this farther.”

Farshidian emphasizes that RAI’s technique is about much more than just getting Spot to run fast—it could also be applied to making Spot move more efficiently to maximize battery life, or more quietly to work better in an office or home environment. Essentially, this is a generalizable tool that can find new ways of expanding the capabilities of any robotic system. And when real-world data is used to make a simulated robot better, you can ask the simulation to do more, with confidence that those simulated skills will successfully transfer back onto the real robot.

Ultra Mobility Vehicle: Teaching Robot Bikes to Jump

Reinforcement learning isn’t just good for maximizing the performance of a robot—it can also make that performance more reliable. The RAI Institute has been experimenting with a completely new kind of robot that it invented in-house: a little jumping bicycle called the Ultra Mobility Vehicle, or UMV, which was trained to do parkour using essentially the same RL pipeline for balancing and driving as was used for Spot’s high-speed running.

There’s no independent physical stabilization system (like a gyroscope) keeping the UMV from falling over; it’s just a normal bike that can move forward and backward and turn its front wheel. As much mass as possible is then packed into the top bit, which actuators can rapidly accelerate up and down. “We’re demonstrating two things in this video,” says Marco Hutter, director of the RAI Institute’s Zurich office. “One is how reinforcement learning helps make the UMV very robust in its driving capabilities in diverse situations. And second, how understanding the robots’ dynamic capabilities allows us to do new things, like jumping on a table which is higher than the robot itself.”

“The key of RL in all of this is to discover new behavior and make this robust and reliable under conditions that are very hard to model. That’s where RL really, really shines.” —Marco Hutter, The RAI Institute

As impressive as the jumping is, for Hutter, it’s just as difficult (if not more difficult) to do maneuvers that may seem fairly simple, like riding backwards. “Going backwards is highly unstable,” Hutter explains. “At least for us, it was not really possible to do that with a classical [MPC] controller, particularly over rough terrain or with disturbances.”

Getting this robot out of the lab and onto terrain to do proper bike parkour is a work in progress that the RAI Institute says it will be able to demonstrate in the near future, but it’s really not about what this particular hardware platform can do—it’s about what any robot can do through RL and other learning-based methods, says Hutter. “The bigger picture here is that the hardware of such robotic systems can in theory do a lot more than we were able to achieve with our classic control algorithms. Understanding these hidden limits in hardware systems lets us improve performance and keep pushing the boundaries on control.”

Teaching the UMV to drive itself down stairs in sim results in a real robot that can handle stairs at any angle. Robotics and AI Institute

Reinforcement Learning for Robots Everywhere

Just a few weeks ago, the RAI Institute announced a new partnership with Boston Dynamics “to advance humanoid robots through reinforcement learning.” Humanoids are just another kind of robotic platform, albeit a significantly more complicated one with many more degrees of freedom and things to model and simulate. But when considering the limitations of model predictive control for this level of complexity, a reinforcement learning approach seems almost inevitable, especially when such an approach is already streamlined due to its ability to generalize.

“One of the ambitions that we have as an institute is to have solutions which span across all kinds of different platforms,” says Hutter. “It’s about building tools, about building infrastructure, building the basis for this to be done in a broader context. So not only humanoids, but driving vehicles, quadrupeds, you name it. But doing RL research and showcasing some nice first proof of concept is one thing—pushing it to work in the real world under all conditions, while pushing the boundaries in performance, is something else.”

Transferring skills into the real world has always been a challenge for robots trained in simulation, precisely because simulation is so friendly to robots. “If you spend enough time,” Farshidian explains, “you can come up with a reward function where eventually the robot will do what you want. What often fails is when you want to transfer that sim behavior to the hardware, because reinforcement learning is very good at finding glitches in your simulator and leveraging them to do the task.”
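A common mitigation for this, though not necessarily what the RAI Institute uses, is domain randomization: perturb the simulator’s parameters on every training episode so the policy can’t latch onto any one quirk of the physics engine. A minimal sketch, with illustrative parameter names and ranges:

import numpy as np

rng = np.random.default_rng()

def sample_sim_params():
    """Draw fresh simulator parameters each training episode (ranges are illustrative)."""
    return {
        "ground_friction":      rng.uniform(0.4, 1.2),
        "motor_strength_scale": rng.uniform(0.85, 1.10),
        "link_mass_scale":      rng.uniform(0.90, 1.10),
        "sensor_latency_s":     rng.uniform(0.0, 0.02),
    }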

Simulation has been getting much, much better, with new tools, more accurate dynamics, and lots of computing power to throw at the problem. “It’s a hugely powerful ability that we can simulate so many things, and generate so much data almost for free,” Hutter says. But the usefulness of that data is in its connection to reality, making sure that what you’re simulating is accurate enough that a reinforcement learning approach will in fact solve for reality. Bringing physical data collected on real hardware back into the simulation, Hutter believes, is a very promising approach, whether it’s applied to running quadrupeds or jumping bicycles or humanoids. “The combination of the two—of simulation and reality—that’s what I would hypothesize is the right direction.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA

Enjoy today’s videos!

There is an immense amount of potential for innovation and development in the field of human-robot collaboration — and we’re excited to release Meta PARTNR, a research framework that includes a large-scale benchmark, dataset and large planning model to jump start additional research in this exciting field.

[ Meta PARTNR ]

Humanoid is the first AI and robotics company in the UK, creating the world’s leading, commercially scalable, and safe humanoid robots.

[ Humanoid ]

To complement our review paper, “Grand Challenges for Burrowing Soft Robots,” we present a compilation of soft burrowers, both organic and robotic. Soft organisms use specialized mechanisms for burrowing in granular media, which have inspired the design of many soft robots. To improve the burrowing efficacy of soft robots, there are many grand challenges that must be addressed by roboticists.

[ Faboratory Research ] at [ Yale University ]

Three small lunar rovers were packed up at NASA’s Jet Propulsion Laboratory for the first leg of their multistage journey to the Moon. These suitcase-size rovers, along with a base station and camera system that will record their travels on the lunar surface, make up NASA’s CADRE (Cooperative Autonomous Distributed Robotic Exploration) technology demonstration.

[ NASA ]

MenteeBot V3.0 is a fully vertically integrated humanoid robot, with full-stack AI and proprietary hardware.

[ Mentee Robotics ]

What do assistance robots look like? From robotic arms attached to a wheelchair to autonomous robots that can pick up and carry objects on their own, assistive robots are making a real difference to the lives of people with limited motor control.

[ Cybathlon ]

Robots cannot perform reactive manipulation, and they mostly operate in open loop while interacting with their environment. Consequently, current manipulation algorithms are either very inefficient or only work in highly structured environments. In this paper, we present closed-loop control of a complex manipulation task where a robot uses a tool to interact with objects.

[ Paper ] via [ Mitsubishi Electric Research Laboratories ]

Thanks, Yuki!

When the future becomes the present, anything is possible. In our latest campaign, “The New Normal,” we highlight the journey our riders experience from first seeing Waymo to relishing in the magic of their first ride. How did your first-ride feeling change the way you think about the possibilities of AVs?

[ Waymo ]

One of a humanoid robot’s unique advantages lies in its bipedal mobility, allowing it to navigate diverse terrains with efficiency and agility. This capability enables Moby to move freely through various environments and assist with high-risk tasks in critical industries like construction, mining, and energy.

[ UCR ]

Although robots are just tools to us, it’s still important to make them somewhat expressive so they can better integrate into our world. So, we created a small animation of the robot waking up—one that it executes all by itself!

[ Pollen Robotics ]

In this live demo, an OTTO AMR expert will walk through the key differences between AGVs and AMRs, highlighting how OTTO AMRs address challenges that AGVs cannot.

[ OTTO ] by [ Rockwell Automation ]

This Carnegie Mellon University Robotics Institute Seminar is from CMU’s Aaron Johnson, on “Uncertainty and Contact with the World.”

As robots move out of the lab and factory and into more challenging environments, uncertainty in the robot’s state, dynamics, and contact conditions becomes a fact of life. In this talk, I’ll present some recent work in handling uncertainty in dynamics and contact conditions, in order to both reduce that uncertainty where we can but also generate strategies that do not require perfect knowledge of the world state.

[ CMU RI ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANYGerman Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANYEuropean Robotics Forum: 25–27 March 2025, STUTTGART, GERMANYRoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLANDICUAS 2025: 14–17 May 2025, CHARLOTTE, NCICRA 2025: 19–23 May 2025, ATLANTA, GALondon Humanoids Summit: 29–30 May 2025, LONDONIEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TXRSS 2025: 21–25 June 2025, LOS ANGELESETH Robotics Summer School: 21–27 June 2025, GENEVAIAS 2025: 30 June–4 July 2025, GENOA, ITALYICRES 2025: 3–4 July 2025, PORTO, PORTUGALIEEE World Haptics: 8–11 July 2025, SUWON, KOREA

Enjoy today’s videos!

There is an immense amount of potential for innovation and development in the field of human-robot collaboration — and we’re excited to release Meta PARTNR, a research framework that includes a large-scale benchmark, dataset and large planning model to jump start additional research in this exciting field.

[ Meta PARTNR ]

Humanoid is the first AI and robotics company in the UK, creating the world’s leading, commercially scalable, and safe humanoid robots.

[ Humanoid ]

To complement our review paper, “Grand Challenges for Burrowing Soft Robots,” we present a compilation of soft burrowers, both organic and robotic. Soft organisms use specialized mechanisms for burrowing in granular media, which have inspired the design of many soft robots. To improve the burrowing efficacy of soft robots, there are many grand challenges that must be addressed by roboticists.

[ Faboratory Research ] at [ Yale University ]

Three small lunar rovers were packed up at NASA’s Jet Propulsion Laboratory for the first leg of their multistage journey to the Moon. These suitcase-size rovers, along with a base station and camera system that will record their travels on the lunar surface, make up NASA’s CADRE (Cooperative Autonomous Distributed Robotic Exploration) technology demonstration.

[ NASA ]

MenteeBot V3.0 is a fully vertically integrated humanoid robot, with full-stack AI and proprietary hardware.

[ Mentee Robotics ]

What do assistive robots look like? From robotic arms attached to a wheelchair to autonomous robots that can pick up and carry objects on their own, assistive robots are making a real difference to the lives of people with limited motor control.

[ Cybathlon ]

Robots cannot perform reactive manipulation, and they mostly operate open loop while interacting with their environment. Consequently, current manipulation algorithms are either very inefficient or work only in highly structured environments. In this paper, we present closed-loop control of a complex manipulation task where a robot uses a tool to interact with objects.

[ Paper ] via [ Mitsubishi Electric Research Laboratories ]

Thanks, Yuki!

When the future becomes the present, anything is possible. In our latest campaign, “The New Normal,” we highlight the journey our riders experience from first seeing Waymo to relishing the magic of their first ride. How did your first-ride feeling change the way you think about the possibilities of AVs?

[ Waymo ]

One of a humanoid robot’s unique advantages lies in its bipedal mobility, allowing it to navigate diverse terrains with efficiency and agility. This capability enables Moby to move freely through various environments and assist with high-risk tasks in critical industries like construction, mining, and energy.

[ UCR ]

Although robots are just tools to us, it’s still important to make them somewhat expressive so they can better integrate into our world. So, we created a small animation of the robot waking up—one that it executes all by itself!

[ Pollen Robotics ]

In this live demo, an OTTO AMR expert will walk through the key differences between AGVs and AMRs, highlighting how OTTO AMRs address challenges that AGVs cannot.

[ OTTO ] by [ Rockwell Automation ]

This Carnegie Mellon University Robotics Institute Seminar is from CMU’s Aaron Johnson, on “Uncertainty and Contact with the World.”

As robots move out of the lab and factory and into more challenging environments, uncertainty in the robot’s state, dynamics, and contact conditions becomes a fact of life. In this talk, I’ll present some recent work on handling uncertainty in dynamics and contact conditions, in order both to reduce that uncertainty where we can and to generate strategies that do not require perfect knowledge of the world state.

[ CMU RI ]



In theory, one of the main applications for robots should be operating in environments that (for whatever reason) are too dangerous for humans. I say “in theory” because in practice it’s difficult to get robots to do useful stuff in semi-structured or unstructured environments without direct human supervision. This is why there’s been some emphasis recently on teleoperation: Human software teaming up with robot hardware can be a very effective combination.

For this combination to work, you need two things. First, an intuitive control system that lets the user embody themselves in the robot to pilot it effectively. And second, a robot that can deliver on the kind of embodiment that the human pilot needs. The second bit is the more challenging, because humans have very high standards for mobility, strength, and dexterity. But researchers at the Italian Institute of Technology (IIT) have a system that manages to check both boxes, thanks to its enormously powerful quadruped, which now sports a pair of massive arms on its head.

“The primary goal of this project, conducted in collaboration with INAIL, is to extend human capabilities to the robot, allowing operators to perform complex tasks remotely in hazardous and unstructured environments to mitigate risks to their safety by exploiting the robot’s capabilities,” explains Claudio Semini, who leads the Robot Teleoperativo project at IIT. The project is based around the HyQReal hydraulic quadruped, the most recent addition to IIT’s quadruped family.

Hydraulics have been very visibly falling out of favor in robotics, because they’re complicated and messy, and in general robots don’t need the absurd power density that comes with hydraulics. But there are still a few robots in active development that use hydraulics specifically because of all that power. If your robot needs to be highly dynamic or lift really heavy things, hydraulics are, at least for now, where it’s at.

IIT’s HyQReal quadruped is one of those robots. If you need something that can carry a big payload, like a pair of massive arms, this is your robot. Back in 2019, we saw HyQReal pulling a three-tonne airplane. HyQReal itself weighs 140 kilograms, and its knee joints can output up to 300 newton-meters of torque. The hydraulic system is powered by onboard batteries and can provide up to 4 kilowatts of power. It also uses some of Moog’s lovely integrated smart actuators, which sadly don’t seem to be in development anymore. Beyond just lifting heavy things, HyQReal’s mass and power make it a very stable platform, and its aluminum roll cage and Kevlar skin ensure robustness.
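To get a feel for what those numbers imply, here’s a rough back-of-the-envelope check (my own arithmetic, not an IIT figure): if a single knee joint could draw the entire 4-kilowatt hydraulic budget while holding its peak 300 newton-meters, the mechanical-power relation P = τω caps how fast that joint can swing at full torque, ignoring hydraulic and friction losses.

```python
import math

# Back-of-the-envelope sketch using only the figures quoted above; not an IIT spec.
PEAK_POWER_W = 4_000.0        # onboard hydraulic power budget
PEAK_KNEE_TORQUE_NM = 300.0   # peak knee torque

# Mechanical power P = torque * angular velocity, so the speed ceiling at peak torque is:
max_speed_rad_s = PEAK_POWER_W / PEAK_KNEE_TORQUE_NM
print(f"Knee speed ceiling at peak torque: {max_speed_rad_s:.1f} rad/s "
      f"(~{max_speed_rad_s * 60 / (2 * math.pi):.0f} rpm), before any losses")
```

That works out to roughly 13 radians per second at full torque, which hints at why hydraulics remain attractive for a 140-kilogram machine that has to move dynamically.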

The HyQReal hydraulic quadruped is tethered for power during experiments at IIT, but it can also run on battery power. IIT

The arms that HyQReal is carrying are IIT-INAIL arms, which weigh 10 kg each and have a payload of 5 kg per arm. To put that in perspective, the maximum payload of a Boston Dynamics Spot robot is only 14 kg. The head-mounted configuration of the arms means they can reach the ground, and they also have an overlapping workspace to enable bimanual manipulation, which is enhanced by HyQReal’s ability to move its body to assist the arms with their reach. “The development of core actuation technologies with high power, low weight, and advanced control has been a key enabler in our efforts,” says Nikos Tsagarakis, head of the HHCM Lab at IIT. “These technologies have allowed us to realize a low-weight bimanual manipulation system with high payload capacity and large workspace, suitable for integration with HyQReal.”

Maximizing reachable space is important, because the robot will be under the remote control of a human, who probably has no particular interest in or care for mechanical or power constraints—they just want to get the job done.

To get the job done, IIT has developed a teleoperation system, which is weird-looking because it features a very large workspace that allows the user to leverage more of the robot’s full range of motion. Having tried a bunch of different robotic telepresence systems, I can vouch for how important this is: It’s super annoying to be doing some task through telepresence, and then hit a joint limit on the robot and have to pause to reset your arm position. “That is why it is important to study operators’ quality of experience. It allows us to design the haptic and teleoperation systems appropriately because it provides insights into the levels of delight or frustration associated with immersive visualization, haptic feedback, robot control, and task performance,” confirms Ioannis Sarakoglou, who is responsible for the development of the haptic teleoperation technologies in the HHCM Lab. The whole thing takes place in a fully immersive VR environment, of course, with some clever bandwidth optimization inspired by the way humans see that transmits higher-resolution images only where the user is looking.
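The article doesn’t detail IIT’s actual video pipeline, but the gaze-contingent idea is simple to sketch: transmit a full-resolution crop around the operator’s gaze and a heavily downsampled version of everything else. Below is a minimal, illustrative Python/NumPy version; the function name, foveal window size, and peripheral stride are invented for the example and are not IIT’s parameters.

```python
import numpy as np

def foveated_packets(frame: np.ndarray, gaze_xy: tuple[int, int],
                     fovea_px: int = 256, periphery_stride: int = 4):
    """Split a frame into a full-resolution crop around the gaze point plus a
    coarse periphery, mimicking gaze-contingent bandwidth savings (illustrative only)."""
    h, w = frame.shape[:2]
    gx, gy = gaze_xy
    half = fovea_px // 2
    # Clamp the foveal window so it stays inside the frame.
    x0, x1 = max(0, gx - half), min(w, gx + half)
    y0, y1 = max(0, gy - half), min(h, gy + half)
    fovea = frame[y0:y1, x0:x1]                                # sharp where the user looks
    periphery = frame[::periphery_stride, ::periphery_stride]  # coarse everywhere else
    return {"fovea": fovea, "fovea_origin": (x0, y0), "periphery": periphery}

# Example: a 1080p frame with the operator looking near the left gripper.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
packets = foveated_packets(frame, gaze_xy=(600, 700))
sent = packets["fovea"].nbytes + packets["periphery"].nbytes
print(f"Pixels sent: {sent} bytes vs. {frame.nbytes} bytes for the full frame "
      f"({100 * sent / frame.nbytes:.0f}%)")
```

With these made-up numbers, the view stays sharp where the operator is looking while only about a tenth of the raw pixel data crosses the link; a real system would presumably also compress and encode the two streams before transmission.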

HyQReal’s telepresence control system offers haptic feedback and a large workspace. IIT

Telepresence Robots for the Real World

The system is designed to be used in hazardous environments where you wouldn’t want to send a human. That’s why IIT’s partner on this project is INAIL, Italy’s National Institute for Insurance Against Accidents at Work, which is understandably quite interested in finding ways in which robots can be used to keep humans out of harm’s way.

In tests with Italian firefighters in 2022 (using an earlier version of the robot with a single arm), operators were able to use the system to extinguish a simulated tunnel fire. It’s a good first step, but Semini has plans to push the system to handle “more complex, heavy, and demanding tasks, which will better show its potential for real-world applications.”

As always with robots and real-world applications, there’s still a lot of work to be done, Semini says. “The reliability and durability of the systems in extreme environments have to be improved,” he says. “For instance, the robot must endure intense heat and prolonged flame exposure in firefighting without compromising its operational performance or structural integrity.” There’s also managing the robot’s energy consumption (which is not small) to give it a useful operating time, and managing the amount of information presented to the operator to boost situational awareness while still keeping things streamlined and efficient. “Overloading operators with too much information increases cognitive burden, while too little can lead to errors and reduce situational awareness,” says Yonas Tefera, who led the development of the immersive interface. “Advances in immersive mixed-reality interfaces and multimodal controls, such as voice commands and eye tracking, are expected to improve efficiency and reduce fatigue in the future.”

There’s a lot of crossover here with the goals of the DARPA Robotics Challenge for humanoid robots, except IIT’s system is arguably much more realistically deployable than any of those humanoids are, at least in the near term. While we’re just starting to see the potential of humanoids in carefully controlled environments, quadrupeds are already out there in the world, proving how reliable their four-legged mobility is. Manipulation is the obvious next step, but it has to be more than just opening doors. We need robots that can use tools, lift debris, and do all that other DARPA Robotics Challenge stuff that will keep humans safe. That’s what Robot Teleoperativo is trying to make real.

You can find more detail about the Robot Teleoperativo project in this paper, presented in November at the 2024 IEEE Conference on Telepresence, in Pasadena, Calif.



