IEEE Spectrum Robotics



“Mooooo.”

This dairy barn is full of cows, as you might expect. Cows are being milked, cows are being fed, cows are being cleaned up after, and a few very happy cows are even getting vigorously scratched behind the ears. “I wonder where the farmer is,” remarks my guide, Jan Jacobs. Jacobs doesn’t seem especially worried, though—the several hundred cows in this barn are being well cared for by a small fleet of fully autonomous robots, and the farmer might not be back for hours. The robots will let him know if anything goes wrong.

At one of the milking robots, several cows are lined up, nose to tail, politely waiting their turn. The cows can get milked by robot whenever they like, which typically means more frequently than the twice a day at a traditional dairy farm. Not only is more frequent milking more comfortable for the cows, but cows also produce about 10 percent more milk when the milking schedule is completely up to them.

“There’s a direct correlation between stress and milk production,” Jacobs says. “Which is nice, because robots make cows happier and therefore, they give more milk, which helps us sell more robots.”

Jan Jacobs is the human-robot interaction design lead for Lely, a maker of agricultural machinery. Founded in 1948 in Maassluis, Netherlands, Lely deployed its first Astronaut milking robot in the early 1990s. The company has since developed other robotic systems that assist with cleaning, feeding, and cow comfort, and the Astronaut milking robot is on its fifth generation. Lely is now focused entirely on robots for dairy farms, with around 135,000 of them deployed around the world.

Essential Jobs on Dairy Farms

The weather outside the barn is miserable. It’s late fall in the Netherlands, and a cold rain is gusting in from the sea, which is probably why the cows have quite sensibly decided to stay indoors and why the farmer is still nowhere to be found. Lely requires that dairy farmers who adopt its robots commit to letting their cows move freely between milking, feeding, and resting, as well as inside and outside the barn, at their own pace. “We believe that free cow traffic is a core part of the future of farming,” Jacobs says as we watch one cow stroll away from the milking robot while another takes its place. This is possible only when the farm operates on the cows’ schedule rather than a human’s.

A conventional dairy farm relies heavily on human labor. Lely estimates that repetitive daily tasks represent about a third of the average workday of a dairy farmer. In the morning, the cows are milked for the first time. Most dairy cows must be milked at least twice a day or they’ll become uncomfortable, and so the herd will line up on their own. Traditional milking parlors are designed to maximize human milking efficiency. A milking carousel, for instance, slowly rotates cows as they’re milked so that the dairy worker doesn’t have to move between stalls.

“We were spending 6 hours a day milking,” explains dairy farmer Josie Rozum, whose 120-cow herd at Takes Dairy Farm uses a pair of Astronaut A5 milking robots. “Now that the robots are handling all of that, we can focus more on animal care and comfort.”
Lely

An experienced human using well-optimized equipment can attach a milking machine to a cow in just 20 to 30 seconds. The actual milking takes only a few minutes, but with the average small dairy farm in North America providing a home for several hundred cows, milking typically represents a time commitment of 4 to 6 hours per day.
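The arithmetic behind that time commitment is easy to check. A quick sketch (herd size and per-cow handling time are assumptions chosen to match the article's "several hundred cows" and "20 to 30 seconds"):

```python
# Rough labor arithmetic behind the "4 to 6 hours per day" figure.
# Herd size and handling time are illustrative assumptions, not Lely data.
herd_size = 300            # "several hundred cows"
milkings_per_day = 2       # most cows must be milked at least twice daily
seconds_per_cow = 30       # attach/handling time per cow, per milking

hours = herd_size * milkings_per_day * seconds_per_cow / 3600
print(f"{hours:.1f} hours of hands-on milking labor per day")  # → 5.0 hours
```

Even with a parlor milking many cows in parallel, the hands-on attach work alone lands squarely in the 4-to-6-hour range.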

There are other jobs that must be done every day at a dairy. Cows are happier with continuous access to food, which means feeding them several times a day. The feed is a mix of roughage (hay), silage (grass), and grain. The cows will eat all of this, but they prefer the grain, and so it’s common to see cows sorting their food by grabbing a mouthful and throwing it up into the air. The lighter roughage and silage flies farther than the grain does, leaving the cow with a pile of the tastier stuff as the rest gets tossed out of reach. This makes “feed pushing” necessary to shove the rest of the feed back within reach of the cow.

And of course there’s manure. A dairy cow produces an average of 68 kilograms of manure a day. All that manure has to be collected and the barn floors regularly cleaned.

Dairy Industry 4.0

The amount of labor needed to operate a dairy meant that until the early 1900s, most family farms could support only about eight cows. The introduction of the first milking machines, called bucket milkers, helped farmers milk 10 cows per hour instead of 4 by the mid-1920s. Rural electrification furthered dairy automation starting in the 1950s, and since then, both farm size and milk production have increased steadily. In the 1930s, a good dairy cow produced 3,600 kilograms of milk per year. Today, it’s almost 11,000 kilograms, and Lely believes that robots are what will enable small dairy farms to continue to scale sustainably.

Lely

But dairy robots are expensive. A milking robot can cost several hundred thousand dollars, plus US $5,000 to $10,000 per year in operating costs. The Astronaut A5, Lely’s latest milking robot, uses a laser-guided robot arm to clean the cow’s udder before attaching teat cups one at a time. While the cow munches on treats, the Astronaut monitors her milk output, collecting data on 32 parameters, including indicators of the quality of the milk and the health of the cow. When milking is complete, the robot cleans the udder again, and the cow is free to leave as the robot steam cleans itself in preparation for the next cow.
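The per-cow cycle described above can be sketched as a simple state sequence (the state names are my own shorthand; the Astronaut's internal control software is not public):

```python
# The Astronaut's milking cycle as described in the article, sketched as a
# looping state sequence. State names are hypothetical labels, not Lely's.
CYCLE = [
    "clean_udder",       # laser-guided arm cleans the udder
    "attach_teat_cups",  # teat cups attached one at a time
    "milk_and_monitor",  # 32 parameters logged while the cow eats treats
    "clean_udder_again",
    "release_cow",
    "steam_clean_self",  # robot sanitizes itself before the next cow
]

def next_state(state: str) -> str:
    """Advance through the cycle, wrapping back to the start."""
    i = CYCLE.index(state)
    return CYCLE[(i + 1) % len(CYCLE)]

print(next_state("milk_and_monitor"))  # → clean_udder_again
print(next_state("steam_clean_self"))  # → clean_udder (next cow)
```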

Lely argues that although the initial cost is higher than that of a traditional milking parlor, the robots pay for themselves over time through higher milk production (due primarily to increased milking frequency) and lower labor costs. Lely’s other robots can also save on labor. The Vector mobile robot handles continuous feeding and feed pushing, and the Discovery Collector is a robotic manure vacuum that keeps the floors clean.

At Takes Dairy Farm, Rozum and her family used to spend several hours per day managing food for the cows. “The feeding robot is another amazing piece of the puzzle for our farm that allows us to focus on other things.”
Takes Family Farm

For most dairy farmers, though, making more money is not the main reason to get a robot, explains Marcia Endres, a professor in the department of animal science at the University of Minnesota. Endres specializes in dairy-cattle management, behavior, and welfare, and studies dairy robot adoption. “When we first started doing research on this about 12 years ago, most of the farms that were installing robots were smaller farms that did not want to hire employees,” Endres says. “They wanted to do the work just with family labor, but they also wanted to have more flexibility with their time. They wanted a better lifestyle.”

Flexibility was key for the Takes family, who added Lely robots to their dairy farm in Ely, Iowa, four years ago. “When we had our old milking parlor, everything that we did as a family was always scheduled around milking,” says Josie Rozum, who manages the farm and a creamery along with her parents—Dan and Debbie Takes—and three brothers. “With the robots, we can prioritize our personal life a little bit more—we can spend time together on Christmas morning and know that the cows are still getting milked.”

Takes Family Dairy Farm’s 120-cow herd is milked by a pair of Astronaut A5 robots, with a Vector and three Discovery Collectors for feeding and cleaning. “They’ve become a crucial part of the team,” explains Rozum. “It would be challenging for us to find outside help, and the robots keep things running smoothly.” The robots also add sustainability to small dairy farms, and not just in the short term. “Growing up on the farm, we experienced the hard work, and we saw what that commitment did to our parents,” Rozum explains. “It’s a very tough lifestyle. Having the robots take over a little bit of that has made dairy farming more appealing to our generation.”

Takes Dairy Farm

Of the 25,000 dairy farms in the United States, Endres estimates about 10 percent have robots. This is about a third of the adoption rate in Europe, where farms tend to be smaller, so the cost of implementing the robots is lower. Endres says that over the last five years, she’s seen a shift toward robot adoption at larger farms with over 500 cows, due primarily to labor shortages. “These larger dairies are having difficulty finding employees who want to milk cows—it’s a very tedious job. And the robot is always consistent. The farmers tell me, ‘My robot never calls in sick, and never shows up drunk.’ ”

Endres is skeptical of Lely’s claim that its robots are responsible for increased milk production. “There is no research that proves that cows will be more productive just because of robots,” she says. It may be true that farms that add robots do see increased milk production, she adds, but it’s difficult to measure the direct effect that the robots have. “I have many dairies that I work with where they have both a robotic milking system and a conventional milking system, and if they are managing their cows well, there isn’t a lot of difference in milk production.”

The Lely Luna cow brush helps to keep cows’ skin healthy. It’s also relaxing and enjoyable, so cows will brush themselves several times a day.
Lely

The robots do seem to improve the cows’ lives, however. “Welfare is not just productivity and health—it’s also the affective state, the ability to have a more natural life,” Endres says. “Again, it’s hard to measure, but I think that on most of these robot farms, their affective state is improved.” The cows’ relationship with humans changes too, comments Endres. When the cows no longer associate humans with being told where to go and what to do all the time, they’re much more relaxed and friendly toward people they meet. Rozum agrees. “We’ve noticed a tremendous change in our cows’ demeanor. They’re more calm and relaxed, just doing their thing in the barn. They’re much more comfortable when they can choose what to do.”

Cows Versus Robots

Cows are curious and clever animals, and have the same instinct that humans have when confronted with a new robot: They want to play with it. Because of this, Lely has had to cow-proof its robots, modifying their design and programming so that the machines can function autonomously around cows. Like many mobile robots, Lely’s dairy robots include contact-sensing bumpers that will pause the robot’s motion if it runs into something. On the Vector feeding robot, Lely product engineer René Beltman tells me, they had to add a software option to disable the bumper. “The cows learned that, ‘oh, if I just push the bumper, then the robot will stop and put down more feed in my area for me to eat.’ It was a free buffet. So you don’t want the cows to end up controlling the robot.” Emergency stop buttons had to be relocated so that they couldn’t be pressed by questing cow tongues.

There’s also a social component to cow-robot interaction. Within their herd, cows have a well-established hierarchy, and the robots need to work within this hierarchy to do their jobs. For example, a cow won’t move out of the way if it thinks that another cow is lower in the hierarchy than it is, and it will treat a robot the same way. The engineers had to figure out how the Discovery Collector could drive back and forth to vacuum up manure without getting blocked by cows. “In our early tests, we’d use sensors to have the robot stop to avoid running into any of the cows,” explains Jacobs. “But that meant that the robot became the weakest one in the hierarchy, and it would just end up crying in the corner because the cows wouldn’t move for it. So now, it doesn’t stop.”

One of the dirtiest jobs on a dairy farm is handled by the Discovery Collector, an autonomous manure vacuum. The robot relies on wheel odometry and ultrasonic sensors for navigation because it’s usually covered in manure.
Evan Ackerman

“We make the robot drive slower for the first week, when it’s being introduced to a new herd,” adds Beltman. “That gives the cows time to figure out that the robot is at the top of the hierarchy.”

Besides maintaining their dominance at the top of the herd, the current generation of Lely robots doesn’t interact much with the cows, but that’s changing, Jacobs tells me. Right now, when a robot is driving through the barn, it makes a beeping sound to let the cows know it’s coming. Lely is looking into how to make these sounds more enjoyable for the cows. “This was a recent revelation for me,” Jacobs says. “We’re not just designing interactions for humans. The cows are our users, too.”

Human-Robot Interaction

Last year, Jacobs and researchers from Delft University of Technology, in the Netherlands, presented a paper at the IEEE Human-Robot Interaction (HRI) Conference exploring this concept of robot behavior development on working dairy farms. The researchers visited robotic dairies, interviewed dairy farmers, and held workshops within Lely to establish a robot code of conduct—a guide that Lely’s designers and engineers use when considering how their robots should look, sound, and act, for the benefit of both humans and cows. On the engineering side, this includes practical things like colors and patterns for lights and different types of sounds so that information is communicated consistently across platforms.

But there’s much more nuance to making a robot seem “reliable” or “friendly” to the end user, since such things are not only difficult to define but also difficult to implement in a way that’s appropriate for dairy farmers, who prioritize functionality.

Jacobs doesn’t want his robots to try to be anyone’s friend—not the cow’s, and not the farmer’s. “The robot is an employee, and it should have a professional relationship,” he says. “So the robot might say ‘Hi,’ but it wouldn’t say, ‘How are you feeling today?’ ” What’s more important is that the robots are trustworthy. For Jacobs, instilling trust is simple: “You cannot gain trust by doing tricks. If your robot is reliable and predictable, people will trust it.”

The electrically driven, pneumatically balanced robotic arm that the Lely Astronaut uses to milk cows is designed to withstand accidental (or intentional) kicks.
Lely

The real challenge, Jacobs explains, is that Lely is largely on its own when it comes to finding the best way of integrating its robots into the daily lives of people who may have never thought they’d have robot employees. “There’s not that much knowledge in the robot world about how to approach these problems,” Jacobs says. “We’re working with almost 20,000 farmers who have a bigger robot workforce than a human workforce. They’re robot managers. And I don’t know that there necessarily are other companies that have a customer base of normal people who have strategic dependence on robots for their livelihood. That is where we are now.”

From Dairy Farmers to Robot Managers

With the additional time and flexibility that the robots enable, some dairy farmers have been able to diversify. On our way back to Lely’s headquarters, we stop at Farm Het Lansingerland, owned by a Lely customer who has added a small restaurant and farm shop to his dairy. Large windows look into the barn so that restaurant patrons can watch the robots at work, caring for the cows that produce the cheese that’s on the menu. A self-guided tour takes you right up next to an Astronaut A5 milking robot, while signs on the floor warn of Vector feeding robots on the move. “This farmer couldn’t expand—this was as many cows as he’s allowed to have here,” Jacobs explains to me over cheese sandwiches. “So, he needs to have additional income streams. That’s why he started these other things. And the robots were essential for that.”

The farmer is an early adopter—someone who’s excited about the technology and actively interested in the robots themselves. But most of Lely’s tens of thousands of customers just want a reliable robotic employee, not a science project. “We help the farmer to prepare not just the environment for the robots, but also the mind,” explains Jacobs. “It’s a complete shift in their way of working.”

Besides managing the robots, the farmer must also learn to manage the massive amount of data that the robots generate about the cows. “The amount of data we get from the robots is a game changer,” says Rozum. “We can track milk production, health, and cow habits in real time. But it’s overwhelming. You could spend all day just sitting at the computer, looking at data and not get anything else done. It took us probably a year to really learn how to use it.”

The most significant advantages to farmers come from using the data for long-term optimization, says the University of Minnesota’s Endres. “In a conventional barn, the cows are treated as a group,” she says. “But the robots are collecting data about individual animals, which lets us manage them as individuals.” By combining data from a milking robot and a feeding robot, for example, farmers can close the loop, correlating when and how the cows are fed with their milk production. Lely is doing its best to simplify this type of decision making, says Jacobs. “You need to understand what the data means, and then you need to present it to the farmer in an actionable way.”

A Robotic Dairy
All dairy farms are different, and farms that decide to give robots a try will often start with just one or two. A highly roboticized dairy barn might look something like this illustration, with a team of many different robots working together to keep the cows comfortable and happy.

A: One Astronaut A5 robot can milk up to 60 cows. After the Astronaut cleans the teats, a laser sensor guides a robotic arm to attach the teat cups. Milking takes just a few minutes.

B: In the feed kitchen, the Vector robot recharges itself while different ingredients are loaded into its hopper and mixed together. Mixtures can be customized for different groups of cows.

C: The Vector robot dispenses freshly mixed food in small batches throughout the day. A laser measures the height of leftover food to make sure that the cows are getting the right amounts.

D: The Discovery Collector is a mop and vacuum for cow manure. It navigates the barn autonomously and returns to its docking station to remove waste, refill water, and wirelessly recharge.

E: As it milks, the Astronaut is collecting a huge amount of data—32 different parameters per teat. If it detects an issue, the farmer is notified, helping to catch health problems early.

F: Automated gates control meadow access and will keep a cow inside if she’s due to be milked soon. Cows are identified using RFID collars, which also track their behavior and health.

A Sensible Future for Dairy Robots

After lunch, we stop by Lely headquarters, where bright red life-size cow statues guard the entrance and all of the conference rooms are dairy themed. We get comfortable in Butter, and I ask Jacobs and Beltman what the future holds for their dairy robots.

In the near term, Lely is focused on making its existing robots more capable. Its latest feed-pushing robot is equipped with lidar and stereo cameras, which allow it to autonomously navigate around large farms without needing to follow a metal strip bolted to the ground. A new overhead camera system will leverage AI to recognize individual cows and track their behavior, while also providing an enormous new dataset that could allow Lely’s systems to support more nuanced decisions about cow welfare. The potential of AI is what Jacobs seems most excited about, although he’s cautious as well. “With AI, we’re suddenly going to take away an entirely different level of work. So, we’re thinking about doing research into the meaningfulness of work, to make sure that the things that we do with AI are the things that farmers want us to do with AI.”

“The idea of AI is very intriguing,” comments Rozum. “I think AI could help to simplify things for farmers. It would be a tool, a resource. But we know our cows best, and a farmer’s judgment has to be there too. There’s just some component of dairy farming that you cannot take the human out of. Robots are not going to be successful on a farm unless you have good farmers.”

Lely is aware of this and knows that its robots have to find the right balance between being helpful and taking over. “We want to make sure not to take away the kinds of interactions that give dairy farmers joy in their work,” says Beltman. “Like feeding calves—every farmer likes to feed the calves.” Lely does sell an automated calf feeder that many dairy farmers buy, which illustrates the point: What’s the best way of designing robots to give humans the flexibility to do the work that they enjoy?

“This is where robotics is going,” Jacobs tells me as he gives me a lift to the train station. “As a human, you could have two other humans and six robots, and that’s your company.” Many industries, he says, look to robots with the objective of minimizing human involvement as much as possible so that the robots can generate the maximum amount of value for whoever happens to be in charge.

Dairy farms are different. Perhaps that’s because the person buying the robot is the person who most directly benefits from it. But I wonder if the concern over automation of jobs would be mitigated if more companies chose to emphasize the sustainability and joy of work equally with profit. Automation doesn’t have to be zero-sum—if implemented thoughtfully, perhaps robots can make work easier, more efficient, and more fun, too.

Jacobs certainly thinks so. “That’s my utopia,” he says. “And we’re working in the right direction.”



This is a sponsored article brought to you by Freudenberg Sealing Technologies.

The increasing deployment of collaborative robots (cobots) in outdoor environments presents significant engineering challenges, requiring highly advanced sealing solutions to ensure reliability and durability. Unlike industrial robots that operate in controlled indoor environments, outdoor cobots are exposed to extreme weather conditions that can compromise their mechanical integrity. Maintenance robots used in servicing wind turbines, for example, must endure intense temperature fluctuations, high humidity, prolonged UV radiation exposure, and powerful wind loads. Similarly, agricultural robots operate in harsh conditions where they are continuously exposed to abrasive dust, chemically aggressive fertilizers and pesticides, and mechanical stresses from rough terrains.

To ensure these robotic systems maintain long-term functionality, sealing solutions must offer effective protection against environmental ingress, mechanical wear, corrosion, and chemical degradation. Outdoor robots must perform flawlessly in temperature ranges spanning from scorching heat to freezing cold while withstanding constant exposure to moisture, lubricants, solvents, and other contaminants. In addition, sealing systems must be resilient to continuous vibrations and mechanical shocks, which are inherent to robotic motion and can accelerate material fatigue over time.

Comprehensive Technical Requirements for Robotic Sealing Solutions

The development of sealing solutions for outdoor robotics demands an intricate balance of durability, flexibility, and resistance to wear. Robotic joints, particularly those in high-mobility systems, experience multidirectional movements within confined installation spaces, making the selection of appropriate sealing materials and geometries crucial. Traditional elastomeric O-rings, widely used in industrial applications, often fail under such extreme conditions. Exposure to high temperatures can cause thermal degradation, while continuous mechanical stress accelerates fatigue, leading to early seal failure. Chemical incompatibility with lubricants, fuels, and cleaning agents further contributes to material degradation, shortening operational lifespans.

Friction-related wear is another critical concern, especially in robotic joints that operate at high speeds. Excessive friction not only generates heat but can also affect movement precision. In collaborative robotics, where robots work alongside humans, such inefficiencies pose safety risks by delaying response times and reducing motion accuracy. Additionally, prolonged exposure to UV radiation can cause conventional sealing materials to become brittle and crack, further compromising their performance.

Advanced IPSR Technology: Tailored for Cobots

To address these demanding conditions, Freudenberg Sealing Technologies has developed a specialized sealing solution: Ingress Protection Seals for Robots (IPSR). Unlike conventional seals that rely on metallic springs for mechanical support, the IPSR design features an innovative Z-shaped geometry that dynamically adapts to the axial and radial movements typical in robotic joints.

Numerous seals are required in cobots, and these are exposed to high speeds and forces.
Freudenberg Sealing Technologies

This unique structural design distributes mechanical loads more efficiently, significantly reducing friction and wear over time. While traditional spring-supported seals tend to degrade due to mechanical fatigue, the IPSR configuration eliminates this limitation, ensuring long-lasting performance. Additionally, the optimized contact pressure reduces frictional forces in robotic joints, thereby minimizing heat generation and extending component lifespans. This results in lower maintenance requirements, a crucial factor in applications where downtime can lead to significant operational disruptions.

Optimized Through Advanced Simulation Techniques

The development of IPSR technology relied extensively on Finite Element Analysis (FEA) simulations to optimize seal geometries, material selection, and surface textures before physical prototyping. These advanced computational techniques allowed engineers to predict and enhance seal behavior under real-world operational conditions.

FEA simulations focused on key performance factors such as frictional forces, contact pressure distribution, deformation under load, and long-term fatigue resistance. By iteratively refining the design based on simulation data, Freudenberg engineers were able to develop a sealing solution that balances minimal friction with maximum durability.

Furthermore, these simulations provided insights into how IPSR seals would perform under extreme conditions, including exposure to humidity, rapid temperature changes, and prolonged mechanical stress. This predictive approach enabled early detection of potential failure points, allowing for targeted improvements before mass production. By reducing the need for extensive physical testing, Freudenberg was able to accelerate the development cycle while ensuring high-performance reliability.

Material Innovations: Superior Resistance and Longevity

The effectiveness of a sealing solution is largely determined by its material composition. Freudenberg utilizes advanced elastomeric compounds, including Fluoroprene XP and EPDM, both selected for their exceptional chemical resistance, mechanical strength, and thermal stability.

Fluoroprene XP, in particular, offers superior resistance to aggressive chemicals, including solvents, lubricants, fuels, and industrial cleaning agents. Additionally, its resilience against ozone and UV radiation makes it an ideal choice for outdoor applications where continuous exposure to sunlight could otherwise lead to material degradation. EPDM, on the other hand, provides outstanding flexibility at low temperatures and excellent aging resistance, making it suitable for applications that require long-term durability under fluctuating environmental conditions.

To further enhance performance, Freudenberg applies specialized solid-film lubricant coatings to IPSR seals. These coatings significantly reduce friction and eliminate stick-slip effects, ensuring smooth robotic motion and precise movement control. This friction management not only improves energy efficiency but also enhances the overall responsiveness of robotic systems, an essential factor in high-precision automation.

Extensive Validation Through Real-World Testing

While advanced simulations provide critical insights into seal behavior, empirical testing remains essential for validating real-world performance. Freudenberg subjected IPSR seals to rigorous durability tests, including prolonged exposure to moisture, dust, temperature cycling, chemical immersion, and mechanical vibration.

Throughout these tests, IPSR seals consistently achieved IP65 certification, demonstrating their ability to effectively prevent environmental contaminants from compromising robotic components. Real-world deployment in maintenance robotics for wind turbines and agricultural automation further confirmed their reliability, with extensive wear analysis showing significantly extended operational lifetimes compared to traditional sealing technologies.

Safety Through Advanced Friction Management

In collaborative robotics, sealing performance plays a direct role in operational safety. Excessive friction in robotic joints can delay emergency-stop responses and reduce motion precision, posing potential hazards in human-robot interaction. By incorporating low-friction coatings and optimized sealing geometries, Freudenberg ensures that robotic systems respond rapidly and accurately, enhancing workplace safety and efficiency.

Tailored Sealing Solutions for Various Robotic Systems

Freudenberg Sealing Technologies provides customized sealing solutions across a wide range of robotic applications, ensuring optimal performance in diverse environments.

Automated Guided Vehicles (AGVs) operate in industrial settings where they are exposed to abrasive contaminants, mechanical vibrations, and chemical exposure. Freudenberg employs reinforced PTFE composites to enhance durability and protect internal components.

Delta robots can perform complex movements at high speed. This requires seals that meet high dynamic and acceleration requirements.
Freudenberg Sealing Technologies

Delta robots, commonly used in food processing, pharmaceuticals, and precision electronics, require FDA-compliant materials that withstand rigorous cleaning procedures such as Cleaning-In-Place (CIP) and Sterilization-In-Place (SIP). Freudenberg utilizes advanced fluoropolymers that maintain structural integrity under aggressive sanitation processes.

Seals for SCARA robots must have high chemical resistance, compressive strength, and thermal resistance to function reliably in a variety of industrial environments.
Freudenberg Sealing Technologies

SCARA robots benefit from Freudenberg’s Modular Plastic Sealing Concept (MPSC), which integrates sealing, bearing support, and vibration damping within a compact, lightweight design. This innovation optimizes robot weight distribution and extends component service life.

Six-axis robots used in automotive, aerospace, and electronics manufacturing require sealing solutions capable of withstanding high-speed operations, mechanical stress, and chemical exposure. Freudenberg’s Premium Sine Seal (PSS), featuring reinforced PTFE liners and specialized elastomer compounds, ensures maximum durability and minimal friction losses.

Continuous Innovation for Future Robotic Applications

Freudenberg Sealing Technologies remains at the forefront of innovation, continuously developing new materials, sealing designs, and validation methods to address evolving challenges in robotics. Through strategic customer collaborations, cutting-edge material science, and state-of-the-art simulation technologies, Freudenberg ensures that its sealing solutions provide unparalleled reliability, efficiency, and safety across all robotic platforms.



A new prototype is laying claim to the title of smallest, lightest untethered flying robot.

At less than a centimeter in wingspan, the wirelessly powered robot is currently very limited in how far it can travel away from the magnetic fields that drive its flight. However, the scientists who developed it suggest there are ways to boost its range, which could lead to potential applications such as search and rescue operations, inspecting damaged machinery in industrial settings, and even plant pollination.

One strategy for shrinking flying robots is to remove their batteries and supply electricity through tethers. However, tethered flying robots have trouble operating freely in complex environments. This has led some researchers to explore wireless ways of powering robot flight.

“The dream was to make flying robots to fly anywhere and anytime without using an electrical wire for the power source,” says Liwei Lin, a professor of mechanical engineering at the University of California, Berkeley. Lin and his fellow researchers detailed their findings in Science Advances.

3D-Printed Flying Robot Design

Each flying robot has a 3D-printed body that consists of a propeller with four blades. This rotor is encircled by a ring that helps the robot stay balanced during flight. On top of each body are two tiny permanent magnets.

All in all, the insect-scale prototypes have wingspans as small as 9.4 millimeters and weigh as little as 21 milligrams. Previously, the smallest reported flying robot, either tethered or untethered, was 28 millimeters wide.

When exposed to an external alternating magnetic field, the robots spin and fly without tethers. The lowest magnetic field strength needed to maintain flight is 3.1 millitesla. (In comparison, a refrigerator magnet has a strength of about 10 mT.)

When the applied magnetic field alternates with a frequency of 310 hertz, the robots can hover. At 340 Hz, they accelerate upward. The researchers could steer the robots laterally by adjusting the applied magnetic fields. The robots could also right themselves after collisions to stay airborne without complex sensing or controlling electronics, as long as the impacts were not too large.
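The reported frequency thresholds can be sketched as a toy lookup. Everything below is a purely illustrative assumption built from the numbers in this article (hover near 310 Hz, climb near 340 Hz, minimum field 3.1 mT); it is not the researchers’ actual drive electronics or control code.

```python
# Illustrative sketch only: maps a desired behavior to the alternating-field
# frequency reported in the article. The function and its structure are
# invented for illustration, not taken from the paper.

HOVER_HZ = 310      # applied field frequency at which the robots hover
CLIMB_HZ = 340      # frequency at which they accelerate upward
MIN_FIELD_MT = 3.1  # lowest field strength (millitesla) that sustains flight

def drive_command(target: str, field_mt: float) -> float:
    """Pick an alternating-field frequency (Hz) for a desired behavior."""
    if field_mt < MIN_FIELD_MT:
        raise ValueError(f"field of {field_mt} mT is too weak to sustain flight")
    if target == "hover":
        return HOVER_HZ
    if target == "climb":
        return CLIMB_HZ
    raise ValueError(f"unknown target: {target}")

print(drive_command("hover", 5.0))  # 310
```

In practice the applied field is a continuous analog signal; this sketch only maps behaviors onto the reported frequency set points.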

Experiments show the lift force the robots generate can exceed their weight by 14 percent, which lets them carry payloads. For instance, a prototype that is 20.5 millimeters wide and weighs 162.4 milligrams could carry an infrared sensor weighing 110 mg to scan its environment. The robots proved more efficient at converting the energy delivered to them into lift force than nearly all other reported flying robots, tethered or untethered, and even than fruit flies and hummingbirds.

Currently the maximum operating range of these prototypes is about 10 centimeters away from the magnetic coils. One way to extend the operating range of these robots is to increase the magnetic field strength they experience tenfold by adding more coils, optimizing the configuration of these coils, and using beamforming coils, Lin notes. Such developments could allow the robots to fly up to a meter away from the magnetic coils.

The scientists could also miniaturize the robots even further. This would make them lighter, and so reduce the magnetic field strength they need for propulsion. “It could be possible to drive micro flying robots using electromagnetic waves such as those in radio or cell phone transmission signals,” Lin says. Future research could also place devices that can convert magnetic energy to electricity onboard the robots to power electronic components, the researchers add.



Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

This robot can walk right off the 3D printer, without electronics, needing only a cartridge of compressed gas. It can also be printed in one go, from one material. Researchers from the University of California San Diego and BASF describe how they developed the robot in an advance online publication in the journal Advanced Intelligent Systems. They used the simplest technology available: a desktop 3D printer and an off-the-shelf printing material. This design approach is not only robust, it is also cheap: each robot costs about $20 to manufacture.

And details!

[ Paper ] via [ University of California San Diego ]

Why do you want a humanoid robot to walk like a human? So that it doesn’t look weird, I guess, but it’s hard to imagine that a system that doesn’t have the same arrangement of joints and muscles that we do will move optimally by just trying to mimic us.

[ Figure ]

I don’t know how it manages it, but this little soft robotic worm somehow moves with an incredible amount of personality.

Soft actuators are critical for enabling soft robots, medical devices, and haptic systems. Many soft actuators, however, require power to hold a configuration and rely on hard circuitry for control, limiting their potential applications. In this work, the first soft electromagnetic system is demonstrated for externally-controlled bistable actuation or self-regulated astable oscillation.

[ Paper ] via [ Georgia Tech ]

Thanks, Ellen!

A 180-degree pelvis rotation would put the “break” in “breakdancing” if this were a human doing it.

[ Boston Dynamics ]

My colleagues were impressed by this cooking robot, but that may be because journalists are always impressed by free food.

[ Posha ]

This is our latest work on a hybrid aerial-terrestrial quadruped robot called SPIDAR, which shows unique and complex locomotion styles in both aerial and terrestrial domains, including a thrust-assisted crawling motion. This work was presented at the International Symposium on Robotics Research (ISRR) 2024.

[ Paper ] via [ Dragon Lab ]

Thanks, Moju!

This fresh, newly captured video from Unitree’s testing grounds showcases the breakneck speed of humanoid intelligence advancement. Every day brings something thrilling!

[ Unitree ]

There should be more robots that you can ride around on.

[ AgileX Robotics ]

There should be more robots that wear hats at work.

[ Ugo ]

iRobot, which pioneered giant docks for robot vacuums, is now moving away from giant docks for robot vacuums.

[ iRobot ]

There’s a famous experiment where if you put a dead fish in current, it starts swimming, just because of its biomechanical design. Somehow, you can do the same thing with an unactuated quadruped robot on a treadmill.

[ Delft University of Technology ]

Mush! Narrowly!

[ Hybrid Robotics ]

It’s freaking me out a little bit that this couple is apparently wandering around a huge mall that is populated only by robots and zero other humans.

[ MagicLab ]

I’m trying, I really am, but the yellow is just not working for me.

[ Kepler ]

By having Stretch take on the physically demanding task of unloading trailers stacked floor to ceiling with boxes, Gap Inc has reduced injuries, lowered turnover, and watched employees get excited about automation intended to keep them safe.

[ Boston Dynamics ]

Since arriving at Mars in 2012, NASA’s Curiosity rover has been ingesting samples of Martian rock, soil, and air to better understand the past and present habitability of the Red Planet. Of particular interest to its search are organic molecules: the building blocks of life. Now, Curiosity’s onboard chemistry lab has detected long-chain hydrocarbons in a mudstone called “Cumberland,” the largest organics yet discovered on Mars.

[ NASA ]

This University of Toronto Robotics Institute Seminar is from Sergey Levine at UC Berkeley, on Robotics Foundation Models.

General-purpose pretrained models have transformed natural language processing, computer vision, and other fields. In principle, such approaches should be ideal in robotics: since gathering large amounts of data for any given robotic platform and application is likely to be difficult, general pretrained models that provide broad capabilities present an ideal recipe to enable robotic learning at scale for real-world applications.
From the perspective of general AI research, such approaches also offer a promising and intriguing approach to some of the grandest AI challenges: if large-scale training on embodied experience can provide diverse physical capabilities, this would shed light not only on the practical questions around designing broadly capable robots, but the foundations of situated problem-solving, physical understanding, and decision making. However, realizing this potential requires handling a number of challenging obstacles. What data shall we use to train robotic foundation models? What will be the training objective? How should alignment or post-training be done? In this talk, I will discuss how we can approach some of these challenges.

[ University of Toronto ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Every time you see a humanoid demo in a warehouse or factory, ask yourself: Would a “superhumanoid” like this actually be a better answer?

[ Dexterity ]

The only reason that this is the second video in Video Friday this week, and not the first, is because you’ve almost certainly already seen it.

This is a collaboration between the Robotics and AI Institute and Boston Dynamics, and RAI has its own video, which is slightly different.

[ Boston Dynamics ] via [ RAI ]

Well this just looks a little bit like magic.

[ University of Pennsylvania Sung Robotics Lab ]

After hours of dance battles with professional choreographers (yes, real human dancers!), PM01 now nails every iconic move from Kung Fu Hustle.

[ EngineAI ]

Sanctuary AI has demonstrated industry-leading sim-to-real transfer of learned dexterous manipulation policies for our unique, high degree-of-freedom, high strength, and high speed hydraulic hands.

[ Sanctuary AI ]

This video is “introducing BotQ, Figure’s new high-volume manufacturing facility for humanoid robots,” but I just see some injection molding and finishing of a few plastic parts.

[ Figure ]

DEEP Robotics recently showcased its “One-Touch Navigation” feature, enhancing the intelligent control experience of its robotic dog. The feature offers two modes: map-based point selection and navigation for open terrains, and video-based point navigation for confined spaces. By simply tapping on a tablet screen or selecting a point in the video feed, the robotic dog can autonomously navigate to the target point, automatically planning its path and intelligently avoiding obstacles, significantly improving traversal efficiency.

What’s in the bags, though?

[ Deep Robotics ]

This hurts my knees to watch, in a few different ways.

[ Unitree ]

Why the recent obsession with two legs when instead robots could have six? So much cuter!

[ Jizai ] via [ RobotStart ]

The world must know: who killed Mini-Duck?

[ Pollen ]

Seven hours of Digit robots at work at ProMat.

And there are two more days of these livestreams if you need more!

[ Agility ]



When you see a squirrel jump to a branch, you might think (and I myself thought, up until just now) that they’re doing what birds and primates would do to stick the landing: just grabbing the branch and hanging on. But it turns out that squirrels, being squirrels, don’t actually have prehensile hands or feet, meaning that they can’t grasp things with any significant amount of strength. Instead, they manage to land on branches using a “palmar” grasp, which isn’t really a grasp at all, in the sense that there’s not much grabbing going on. It’s more accurate to say that the squirrel is mostly landing on its palms and then balancing, which is very impressive.

This kind of dynamic stability is a trait that squirrels share with one of our favorite robots: Salto. Salto is a jumper too, and it’s about as non-prehensile as it’s possible to get, having just one limb with basically no grip strength at all. The robot is great at bouncing around on the ground, but if it could move vertically, that’s an entirely new mobility dimension that could lead to some potentially interesting applications, including environmental scouting, search and rescue, and disaster relief.

In a paper published today in Science Robotics, roboticists have now taught Salto to leap from one branch to another like squirrels do, using a low torque gripper and relying on its balancing skills instead.

Squirrel Landing Techniques in Robotics

While we’re going to be mostly talking about robots here (because that’s what we do), there’s an entire paper by many of the same robotics researchers, published in late February in the Journal of Experimental Biology, about how squirrels land on branches this way. While you’d think that the researchers might have found some domesticated squirrels for this, they actually spent about a month bribing wild squirrels on the UC Berkeley campus to bounce around some instrumented perches while high-speed cameras were rolling.

Squirrels aim for perfectly balanced landings, which allow them to immediately jump again. They don’t always get it quite right, of course, and they’re excellent at recovering from branch landings where they go a little bit over or under where they want to be. The research showed how squirrels use their musculoskeletal system to adjust their body position, dynamically absorbing the impact of landing with their forelimbs and altering their mass distribution to turn near misses into successful perches.

It’s these kinds of skills that Salto really needs to be able to usefully make jumps in the real world. When everything goes exactly the way it’s supposed to, jumping and perching is easy, but that almost never happens and the squirrel research shows how important it is to be able to adapt when things go wonky. It’s not like the little robot has a lot of degrees of freedom to work with—it’s got just one leg, just one foot, a couple of thrusters, and that spinning component which, believe it or not, functions as a tail. And yet, Salto manages to (sometimes!) make it work.

Those balanced upright landings are super impressive, although we should mention that Salto only achieved that level of success with two out of 30 trials. It only actually fell off the perch five times, and the rest of the time, it did manage a landing but then didn’t quite balance and either overshot or undershot the branch. There are some mechanical reasons why this is particularly difficult for Salto—for example, having just one leg to use for both jumping and landing means that the robot’s leg has to be rotated mid-jump. This takes time, and causes Salto to jump more vertically than squirrels do, since squirrels jump with their back legs and land with their front legs.

Based on these tests, the researchers identified four key features for balanced landings that apply to robots (and squirrels):

  1. Power and accuracy are important!
  2. It’s easier to land a shallower jump with a more horizontal trajectory.
  3. Being able to squish down close to the branch helps with balancing.
  4. Responsive actuation is also important!

Of these, Salto is great at the first one, very much not great at the second one, and also not great at the third and fourth ones. So in some sense, it’s amazing that the roboticists have been able to get it to do this branch-to-branch jumping as well as they have. There’s plenty more to do, though. Squirrels aren’t the only arboreal jumpers out there, and there’s likely more to learn from other animals—Salto was originally inspired by the galago (also known as bush babies), although those are more difficult to find on the UC Berkeley campus. And while the researchers don’t mention it, the obvious extension to this work is to chain together multiple jumps, and eventually to combine branch jumping with the ground jumping and wall jumping that Salto can do already to really give those squirrels a jump for their nuts.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

In 2026, a JAXA spacecraft is heading to the Martian moon Phobos to chuck a little rover at it.

[ DLR ]

Happy International Women’s Day! UBTECH humanoid robots Walker S1 deliver flowers to incredible women and wish all women a day filled with love, joy and empowerment.

[ UBTECH ]

TRON 1 demonstrates Multi-Terrain Mobility as a versatile biped mobility platform, empowering innovators to push the boundaries of robotic locomotion, unlocking limitless possibilities in algorithm validation and advanced application development.

[ LimX Dynamics ]

This is indeed a very fluid running gait, and the flip is also impressive, but I’m wondering what sort of actual value these skills add, you know? Or even what kind of potential value they’re leading up to.

[ EngineAI ]

Designing trajectories for manipulation through contact is challenging as it requires reasoning of object & robot trajectories as well as complex contact sequences simultaneously. In this paper, we present a novel framework for simultaneously designing trajectories of robots, objects, and contacts efficiently for contact-rich manipulation.

[ Paper ] via [ Mitsubishi Electric Research Laboratories ]

Thanks, Yuki!

Running robot, you say? I’m thinking it might actually be a power walking robot.

[ MagicLab ]

Wake up, Reachy!

[ Pollen ]

Robot vacuum docks have gotten large enough that we’re now all supposed to pretend that we’re happy they’ve become pieces of furniture.

[ Roborock ]

The SeaPerch underwater robot, a “do-it-yourself” maker project, is a popular educational tool for middle and high school students. Developed by MIT Sea Grant, the remotely operated vehicle (ROV) teaches hand fabrication processes, electronics techniques, and STEM concepts, while encouraging exploration of structures, electronics, and underwater dynamics.

[ MIT Sea Grant ]

I was at this RoboGames match! In 2010! And now I feel old!

[ Hardcore Robotics ]

Daniel Simu with a detailed breakdown of his circus acrobat partner robot. If you don’t want to watch the whole thing, make sure and check out 3:30.

[ Daniel Simu ]



Ukraine’s young tech entrepreneurs think that a combination of robots and lessons from war-gaming could turn the tide in the war against Russia. They are developing an intelligent operating system to enable a single controller to remotely operate swarms of interconnected drones and cannon-equipped land robots. The tech, they say, could help Ukraine cope with Russia’s numerical advantage.

Kyiv-based start-up Ark Robotics is conducting trials of an embryonic version of such a system in cooperation with one of the brigades of Ukraine’s ground forces. The company emerged about a year ago, when a group of young roboticists heard a speech by one of the Ukrainian commanders detailing challenges on the frontline.

“At that time, we were building unmanned ground vehicles [UGVs],” Andryi Udovychenko, Ark Robotics’s operations lead, told IEEE Spectrum on the sidelines of the Brave 1 Defense Tech Innovations Forum held in Kyiv last month. “But we heard that what we had [to offer] wasn’t enough. They said they needed something more.”

Since the war began, a vibrant defense tech innovation ecosystem has emerged in Ukraine, starting from the modest beginnings of modifying China-made DJI MAVIC drones to make up for a lack of artillery. Today, Ukraine is a drone-making powerhouse. Dozens of startup companies are churning out newer and better tech and rapidly refining it to improve the effectiveness of the beleaguered nation’s troops. First-person-view drones have become a symbol of this war, but since last year they have begun to be complemented by UGVs, which help on the ground with logistics and evacuation of the wounded, and also serve as a new means of attack.

The new approach allows the Ukrainians to keep their soldiers away from the battleground for longer periods but doesn’t erase the fact that Ukraine has far fewer soldiers than Russia does.

“Every single drone needs one operator, complicated drones need two or three operators, and we don’t have that many people,” Serhii Kupriienko, the CEO and founder of Swarmer, said during a panel at the Kyiv event. Swarmer is a Kyiv-based start-up developing technologies to allow groups of drones to operate as one self-coordinated swarm.

Ark Robotics is trying to take that idea yet another step. The company’s Frontier OS aspires to become a unifying interface that would allow drones and UGVs from various makers to work together under the control of operators seated in control rooms miles away from the action.

One Controller for Many Drones and Robots

“We have many types of drones that are using different controls, different interfaces and it’s really hard to build cohesion,” Udovychenko says. “To move forward, we need a system where we can control multiple different types of vehicles in a cohesive manner in complex operations.”

Udovychenko, a gaming enthusiast, is excited about the progress Ark Robotics has made. It could be a game-changer, he says, a new foundational technology for defense. It would make Ukraine “like Protoss,” the fictional technologically advanced nation in the military science fiction strategy game StarCraft.

But what powers him is much more than youthful geekiness. Building up Ukraine’s technological dominance is a mission fueled by grief and outrage.

“I don’t want to lose any more friends,” he remarks at one point, becoming visibly emotional. “We don’t want to be dying in the trenches, but we need to be able to defend our country and given that the societal math doesn’t favor us, we need to make our own math to win.”

Soldiers at an undisclosed location used laptops to test software from Ark Robotics. Ark Robotics

The scope of the challenge isn’t lost on him. The company has so far built a vehicle computing unit that serves as a central hub and control board for various unmanned vehicles including flying drones, UGVs and even marine vehicles.

“We are building this as a solution that enables the integration of various team developers and software, allowing us to extract the best components and rapidly scale them,” Udovychenko says. “This system pairs a high-performance computing module with an interface board that provides multiple connections for vehicle systems.”

The platform allows a single operator to remotely guide a flock of robots and will in the future also incorporate autonomous navigation and task execution, according to Udovychenko. So far, the team has tested the technology in simple logistics exercises. For the grand vision to work, though, the biggest challenge will be maintaining reliable communication links, not only between the controller and the robotic fleet but also between the robots and drones themselves.

Tests on Ukraine Battlefields to Begin Soon

“We’re not talking about communications in a relatively safe environment when you have an LTE network that has enough bandwidth to accommodate thousands of phones,” Udovychenko notes. “At the frontline, everything is affected by electronic warfare, so you need to be able to switch between different solutions including satellite, digital radio and radio mesh so that even if you lose connection to the server, you still have connection between the drones and robots so that they can move together and maintain some level of control between them.”

Udovychenko expects Ark Robotics’s partner brigade in the Ukrainian armed forces to test the early version of the tech in a real-life situation within the next couple of months. His young drone operator friends are excited, he says. And how could they not be? The technology promises to turn warfighting into a kind of real-life video game. The new class of multi-drone operators will likely be recruited from the ranks of gaming aficionados.

“If we can take the best pilots and give them tools to combine the operations, we might see a tremendous advantage,” Udovychenko says. “It’s like in StarCraft. Some people are simply able to play the game right and obliterate their opponents within minutes even if they’re starting from the same basic conditions.”

Speaking at the Brave 1 Defense Tech Innovations Forum, Colonel Andrii Lebedenko, Deputy Commander-in-Chief of the Armed Forces of Ukraine, acknowledged that land battles have so far been Ukraine’s weakest area. He said that replacing “humans with robots as much as possible” is Ukraine’s near-term goal and he expressed confidence that upcoming technologies will give greater autonomy to the robot swarms.

Some roboticists, however, are more skeptical that swarms of autonomous robots will crawl en masse across the battlefields of Eastern Ukraine any time soon. “Swarming is certainly a goal we should reach but it’s much easier with FPV drones than with ground-based robots,” Ivan Movchan, CEO of the Ukrainian Scale Company, a Kharkiv-based robot maker, told Spectrum.

“Navigation on the ground is more challenging simply because of the obstacles,” he adds. “But I do expect UGVs to become very common in Ukraine over the next year.”



Generative AI models are getting closer to taking action in the real world. Already, the big AI companies are introducing AI agents that can take care of web-based busywork for you, ordering your groceries or making your dinner reservation. Today, Google DeepMind announced two generative AI models designed to power tomorrow’s robots.

The models are both built on Google Gemini, a multimodal foundation model that can process text, voice, and image data to answer questions, give advice, and generally help out. DeepMind calls the first of the new models, Gemini Robotics, an “advanced vision-language-action model,” meaning that it can take all those same inputs and then output instructions for a robot’s physical actions. The models are designed to work with any hardware system, but were mostly tested on the two-armed Aloha 2 system that DeepMind introduced last year.

In a demonstration video, a voice says: “Pick up the basketball and slam dunk it” (at 2:27 in the video below). Then a robot arm carefully picks up a miniature basketball and drops it into a miniature net—and while it wasn’t an NBA-level dunk, it was enough to get the DeepMind researchers excited.

Google DeepMind released this demo video showing off the capabilities of its Gemini Robotics foundation model to control robots. Gemini Robotics

“This basketball example is one of my favorites,” said Kanishka Rao, the principal software engineer for the project, in a press briefing. He explains that the robot had “never, ever seen anything related to basketball,” but that its underlying foundation model had a general understanding of the game, knew what a basketball net looks like, and understood what the term “slam dunk” meant. The robot was therefore “able to connect those [concepts] to actually accomplish the task in the physical world,” says Rao.

What are the advances of Gemini Robotics?

Carolina Parada, head of robotics at Google DeepMind, said in the briefing that the new models improve over the company’s prior robots in three dimensions: generalization, adaptability, and dexterity. All of these advances are necessary, she said, to create “a new generation of helpful robots.”

Generalization means that a robot can apply a concept that it has learned in one context to another situation, and the researchers looked at visual generalization (for example, does it get confused if the color of an object or background changes), instruction generalization (can it interpret commands that are worded in different ways), and action generalization (can it perform an action it has never done before).

Parada also says that robots powered by Gemini can better adapt to changing instructions and circumstances. To demonstrate that point in a video, a researcher told a robot arm to put a bunch of plastic grapes into a clear Tupperware container, then proceeded to shift three containers around on the table in an approximation of a shyster’s shell game. The robot arm dutifully followed the clear container around until it could fulfill its directive.

Google DeepMind says Gemini Robotics is better than previous models at adapting to changing instructions and circumstances. Google DeepMind

As for dexterity, demo videos showed the robotic arms folding a piece of paper into an origami fox and performing other delicate tasks. However, it’s important to note that the impressive performance here is in the context of a narrow set of high-quality data that the robot was trained on for these specific tasks, so the level of dexterity that these tasks represent is not being generalized.

What is embodied reasoning?

The second model introduced today is Gemini Robotics-ER, with the ER standing for “embodied reasoning,” which is the sort of intuitive physical world understanding that humans develop with experience over time. We’re able to do clever things like look at an object we’ve never seen before and make an educated guess about the best way to interact with it, and this is what DeepMind seeks to emulate with Gemini Robotics-ER.

Parada gave an example of Gemini Robotics-ER’s ability to identify an appropriate grasping point for picking up a coffee cup. The model correctly identifies the handle, because that’s where humans tend to grasp coffee mugs. However, this illustrates a potential weakness of relying on human-centric training data: for a robot, especially a robot that might be able to comfortably handle a mug of hot coffee, a thin handle might be a much less reliable grasping point than a more enveloping grasp of the mug itself.

DeepMind’s Approach to Robotic Safety

Vikas Sindhwani, DeepMind’s head of robotic safety for the project, says the team took a layered approach to safety. It starts with classic physical safety controls that manage things like collision avoidance and stability, but also includes “semantic safety” systems that evaluate both its instructions and the consequences of following them. These systems are most sophisticated in the Gemini Robotics-ER model, says Sindhwani, which is “trained to evaluate whether or not a potential action is safe to perform in a given scenario.”

And because “safety is not a competitive endeavor,” Sindhwani says, DeepMind is releasing a new data set and what it calls the Asimov benchmark, which is intended to measure a model’s ability to understand common-sense rules of life. The benchmark contains both questions about visual scenes and text scenarios, asking models to judge things like the desirability of mixing bleach and vinegar (a combination that makes chlorine gas) or putting a soft toy on a hot stove. In the press briefing, Sindhwani said that the Gemini models had “strong performance” on that benchmark, and the technical report showed that the models got more than 80 percent of questions correct.

DeepMind’s Robotic Partnerships

Back in December, DeepMind and the humanoid robotics company Apptronik announced a partnership, and Parada says that the two companies are working together “to build the next generation of humanoid robots with Gemini at its core.” DeepMind is also making its models available to an elite group of “trusted testers”: Agile Robots, Agility Robotics, Boston Dynamics, and Enchanted Tools.



After January’s Southern California wildfires, the question of burying energy infrastructure to prevent future fires has gained renewed urgency in the state. While the exact cause of the fires remains under investigation, California utilities have spent years undergrounding power lines to mitigate fire risks. Pacific Gas & Electric, which has installed over 1,287 kilometers of underground power lines since 2021, estimates the method is 98 percent effective in reducing ignition threats. Southern California Edison has buried over 40 percent of its high-risk distribution lines, and 63 percent of San Diego Gas & Electric’s regional distribution system is now underground.

Still, the exorbitant cost of underground construction leaves much of the U.S. power grid’s 8.8 million kilometers of distribution lines and 180 million utility poles exposed to tree strikes, flying debris, and other opportunities for sparks to cascade into a multi-acre blaze. Recognizing the need for cost-effective undergrounding solutions, the U.S. Department of Energy launched GOPHURRS in January 2024. The three-year program pours $34 million into 12 projects to develop more efficient undergrounding technologies that minimize surface disruptions while supporting medium-voltage power lines.

One recipient, Case Western Reserve University in Cleveland, Ohio, is building a self-propelled robotic sleeve that mimics earthworms’ characteristic peristaltic movement to advance through soil. Awarded $2 million, Case’s “peristaltic conduit” concept aims to navigate underground more precisely and reduce the risk of unintended damage, such as breaking an existing pipe.

Why Is Undergrounding So Expensive?

Despite its benefits, undergrounding remains cost-prohibitive at US $1.1 to $3.7 million per kilometer ($1.8 to $6 million per mile) for distribution lines and $3.7 to $62 million per kilometer for transmission lines, according to estimates from California’s three largest utilities. That’s significantly more than overhead infrastructure, which costs $394,000 to $472,000 per kilometer for distribution lines and $621,000 to $6.83 million per kilometer for transmission lines.

The most popular method of undergrounding power lines, called open trenching, requires extensive excavation, conduit installation, and backfilling, making it expensive and logistically complicated. And it’s often impractical in dense urban areas where underground infrastructure is already congested with plumbing, fiber optics, and other utilities.

Trenchless methods like horizontal directional drilling (HDD) provide a less invasive way to get power lines under roads and railways by creating a controlled, curved bore path that starts at a shallow entry angle, deepens to pass obstacles, and resurfaces at a precise exit point. But HDD is even more expensive than open trenching due to specialized equipment, complex workflows, and the risk of damaging existing infrastructure.

Given the steep costs, utilities often prioritize cheaper fire mitigation strategies like trimming back nearby trees and other plants, using insulated conductors, and stepping up routine inspections and repairs. While not as effective as undergrounding, these measures have been the go-to option, largely because faster, cheaper underground construction methods don’t yet exist.

Ted Kury, director of energy studies at the University of Florida’s Public Utility Research Center, who has extensively studied the costs and benefits of undergrounding, says technologies implementing directional drilling improvements “could make undergrounding more practical in urban or densely populated areas where open trenching, and its attendant disruptions to the surrounding infrastructure, could result in untenable costs.”

Earthworm-Inspired Robotics for Power Lines

In Case’s worm-inspired robot, alternating sections are designed to expand and retract to anchor and advance the machine. This flexible force increases precision and reduces the risk of impacting and breaking pipes. Conventional methods require large turning radii exceeding 300 meters, but Case’s 1.5-meter turning radius will enable the device to flexibly maneuver around existing infrastructure.

“We use actuators to change the length and diameter of each segment,” says Kathryn Daltorio, an associate engineering professor and co-director of Case’s Biologically-Inspired Robotics Lab. “The short and fat segments press against the walls of the burrow, then they anchor so the thin segments can advance forward. If two segments aren’t touching the ground but they’re changing length at the same time, your anchors don’t slip and you advance forward.”
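Daltorio’s anchor-and-advance gait can be rendered as a toy one-dimensional model. This is an illustrative sketch only, not the Case robot’s controller, and the stride length is a made-up number: anchored segments grip the borehole wall while unanchored segments change length, so the head and then the tail advance each cycle without the body slipping.

```python
# Toy 1-D peristaltic gait (illustrative sketch, not the Case robot's controller).
# Anchored segments grip the borehole; unanchored segments change length.
STRIDE = 0.05  # meters advanced per cycle (hypothetical value)

def crawl(cycles: int, stride: float = STRIDE) -> tuple[float, float]:
    """Return (head, tail) positions after `cycles` anchor/extend cycles."""
    head, tail = 1.0, 0.0  # robot starts one body-length long
    for _ in range(cycles):
        # Phase 1: rear segments anchor, front segment extends -> head advances.
        head += stride
        # Phase 2: front segment anchors, rear contracts -> tail catches up.
        tail += stride
    return head, tail

print(crawl(10))  # head and tail each advance ~0.5 m; body length is unchanged
```

The key property the model captures is that at least one group of segments is always anchored, which is why the robot can generate digging force at the tip regardless of how long the borehole behind it gets.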

Daltorio and her colleagues have studied earthworm-inspired robotics for over a decade, originally envisioning the technology for surgical and confined-space applications before recognizing its potential for undergrounding power lines.

Case Western Reserve University’s worm-like digging robot can turn more sharply than conventional drilling equipment, helping it avoid obstacles. Kathryn Daltorio/Case School of Engineering

Traditional HDD relies on pushing a drill head through soil, requiring more force as the bore length grows. Case’s drilling concept generates the force needed for the tip from the peristaltic segments within the borehole. As the path gets longer, only the front segments dig deeper. “If the robot hits something, operators can pull back and change directions, burrowing along the way to complete the circuit by changing the depth,” Daltorio says.

Another key difference from HDD is integrated conduit installation. In HDD, the drill goes through the entire length first, and then the power conduit is pulled through. Case’s peristaltic robot lays the conduit while traveling, reducing the overall installation time.

Advancements in Burrowing Precision

“The peristaltic conduit approach is fascinating [and] certainly seems to be addressing concerns regarding the sheer variety of underground obstacles,” says the University of Florida’s Kury. However, he highlights a larger concern with undergrounding innovations—not just Case’s—in meeting a constantly evolving environment. Today’s underground will look very different in 10 years, as soil profiles shift, trees grow, animals tunnel, and people dig and build. “Underground cables will live for decades, and the sustainability of these technologies depends on how they adapt to this changing structure,” Kury adds.

Daltorio notes that current undergrounding practices involve pouring concrete around the lines before backfilling to protect them from future excavation, a challenge for existing trenchless methods. But Case’s project brings two major benefits. First, by better understanding borehole design, engineers have more flexibility in choosing conduit materials to match the standards for particular environments. Also, advancements in burrowing precision could minimize the likelihood of future disruptions from human activities.

The research team is exploring different ways to reinforce the digging robot’s exterior while it’s underground. Olivia Gatchall

Daltorio’s team is collaborating with several partners, with Auburn University in Alabama contributing geotechnical expertise, Stony Brook University in New York running the modeling, and the University of Texas at Austin studying sediment interactions.

The project aims to halve undergrounding costs, though Daltorio cautions that it’s too early to commit to a specific cost model. Still, the time-saving potential appears promising. “With conventional approaches, planning, permitting and scheduling can take months,” Daltorio says. “By simplifying the process, it might be a few inspections at the endpoints, a few days of autonomous burrowing with minimal disruption to traffic above, followed by a few days of cleaning, splicing, and inspection.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Last year, we unveiled the new Atlas—faster, stronger, more compact, and less messy. We’re designing the world’s most dynamic humanoid robot to do anything and everything, but we get there one step at a time. Our first task is part sequencing, a common logistics task in automotive manufacturing. Discover why we started with sequencing, how we are solving hard problems, and how we’re delivering a humanoid robot with real value.

My favorite part is 1:40, where Atlas squats down to pick a part up off the ground.

[ Boston Dynamics ]

I’m mostly impressed that making contact with that stick doesn’t cause the robot to fall over.

[ Unitree ]

Professor Patrícia Alves-Oliveira is studying authenticity of artworks co-created by an artist and a robot. Her research lab, Robot Studio, is developing methods to authenticate artworks by analyzing their entire creative process. This is accomplished by using the artist’s biometrics as well as the process of artwork creation, from the first brushstroke to the final painting. This work aims to bring ownership back to artists in the age of generative AI.

[ Robot Studio ] at [ University of Michigan ]

Hard to believe that RoMeLa has been developing humanoid robots for 20 (!) years. Here’s to 20 more!

[ RoMeLa ] at [ University of California Los Angeles ]

In this demo, Reachy 2 autonomously sorts healthy and unhealthy foods. No machine learning, no pre-trained AI—just real-time object detection!

[ Pollen ]

Biological snakes achieve high mobility with numerous joints, inspiring snake-like robots for rescue and inspection. However, conventional designs feature a limited number of joints. This paper presents an underactuated snake robot consisting of many passive links that can dynamically change its joint coupling configuration by repositioning motor-driven joint units along internal rack gears. Furthermore, a soft robot skin wirelessly powers the units, eliminating wire tangling and disconnection risks.

[ Paper ]

Thanks, Ayato!

Tech United Eindhoven is working on quadrupedal soccer robots, which should be fun.

[ Tech United ]

Autonomous manipulation in everyday tasks requires flexible action generation to handle complex, diverse real-world environments, such as objects with varying hardness and softness. Imitation Learning (IL) enables robots to learn complex tasks from expert demonstrations. However, a lot of existing methods rely on position/unilateral control, leaving challenges in tasks that require force information/control, like carefully grasping fragile or varying-hardness objects. To address these challenges, we introduce Bilateral Control-Based Imitation Learning via Action Chunking with Transformers (Bi-ACT) and ALPHA-α, a low-cost physical hardware platform considering diverse motor control modes for research in everyday bimanual robotic manipulation.

[ Alpha-Biact ]

Thanks, Masato!

Powered by UBTECH’s revolutionary framework “BrainNet”, a team of Walker S1 humanoid robots work together to master complex tasks at Zeekr’s Smart Factory! Teamwork makes the dream of robots work.

[ UBTECH ]

Personal mobile robotic assistants are expected to find wide applications in industry and healthcare. However, manually steering a robot while in motion requires significant concentration from the operator, especially in tight or crowded spaces. This work presents a virtual leash with which a robot can naturally follow an operator. We successfully validate on the ANYmal platform the robustness and performance of our entire pipeline in real-world experiments.

[ ETH Zurich Robotic Systems Lab ]

I do not ever want to inspect a wind turbine blade from the inside.

[ Flyability ]

Sometimes you can learn more about a robot from an instructional unboxing video than from a fancy demo.

[ DEEP Robotics ]

Researchers at Penn Engineering have discovered that certain features of AI-governed robots carry previously unidentified security vulnerabilities and weaknesses. Funded by the National Science Foundation and the Army Research Laboratory, the research aims to address the emerging vulnerability for ensuring the safe deployment of large language models (LLMs) in robotics.

[ RoboPAIR ]

ReachBot is a joint project between Stanford and NASA to explore a new approach to mobility in challenging environments such as Martian caves. It consists of a compact robot body with very long extending arms, based on booms used for extendable antennas. The booms unroll from a coil and can extend many meters in low gravity. In this talk I will introduce the ReachBot design and motion planning considerations, report on a field test with a single ReachBot arm in a lava tube in the Mojave Desert, and discuss future plans, which include the possibility of mounting one or more ReachBot arms equipped with wrists and grippers on a mobile platform, such as ANYmal.

[ ReachBot ]



Although they’re a staple of sci-fi movies and conspiracy theories, in real life, tiny flying microbots—weighed down by batteries and electronics—have struggled to get very far. But a new combination of circuits and lightweight solid-state batteries, dubbed a “flying batteries” topology, could let these bots really take off, potentially powering microbots for hours from a system that weighs milligrams.

Microbots could be an important technology to find people buried in rubble or scout ahead in other dangerous situations. But they’re a difficult engineering challenge, says Patrick Mercier, an electrical and computer engineering professor at the University of California, San Diego. Mercier’s student Zixiao Lin described the new circuit last month at the IEEE International Solid State Circuits Conference (ISSCC). “You have these really tiny robots, and you want them to last as long as possible in the field,” Mercier says. “The best way to do that is to use lithium-ion batteries, because they have the best energy density. But there’s this fundamental problem, where the actuators need much higher voltage than what the battery is capable of providing.”

A lithium cell can provide about 4 volts, but piezoelectric actuators for microbots need tens to hundreds of volts, explains Mercier. Researchers, including Mercier’s own group, have developed circuits such as boost converters to pump up the voltage. But because they need relatively large inductors or a bunch of capacitors, these add too much mass and volume, typically taking up about as much room as the battery itself.

A new kind of solid-state battery, developed at the French national electronics laboratory CEA-Leti, offered a potential solution. The batteries are a thin-film stack of material, including lithium cobalt oxide and lithium phosphorus oxynitride, made using semiconductor processing technology, and they can be diced up into tiny cells. A 0.33-cubic-millimeter, 0.8-milligram cell can store 20 microampere-hours of charge, or about 60 ampere-hours per liter. (Lithium-ion earbud batteries provide more than 100 Ah/L, but are about 1,000 times as large.) A CEA-Leti spinoff based on the technology, Inject Power, in Grenoble, France, is gearing up to begin volume manufacturing in late 2026.
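The quoted charge density follows directly from the cell dimensions; here is the back-of-the-envelope arithmetic as a quick sketch, using only the figures given in the article:

```python
# Back-of-the-envelope check of the CEA-Leti cell's charge density,
# using the figures quoted in the article.
volume_mm3 = 0.33                  # cell volume, cubic millimeters
charge_uah = 20.0                  # stored charge, microampere-hours

volume_liters = volume_mm3 * 1e-6  # 1 L = 1,000,000 mm^3
charge_ah = charge_uah * 1e-6      # 1 Ah = 1,000,000 uAh

density_ah_per_liter = charge_ah / volume_liters
print(round(density_ah_per_liter, 1))  # ~60.6 Ah/L, matching the article's "about 60"
```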

Stacking Batteries on the Fly

Because a solid-state battery can be diced up into tiny cells, researchers thought that they could achieve high voltages using a circuit that needs no capacitors or inductors. Instead, the circuit actively rearranges the connections among many tiny batteries, moving them from parallel to serial and back again.

Imagine a microdrone that moves by flapping wings attached to a piezoelectric actuator. On its circuit board are a dozen or so of the solid-state microbatteries. Each battery is part of a circuit consisting of four transistors. These act as switches that can dynamically change the connection to that battery’s neighbor so that it is either parallel, so they share the same voltage, or serial, so their voltages are added.

At the start, all the batteries are in parallel, delivering a voltage that is nowhere near enough to trigger the actuator. The 2-square-millimeter IC the UCSD team built then begins opening and closing the transistor switches. This rearranges the connections between the cells so that first two cells are connected serially, then three, then four, and so on. In a few hundredths of a second, the batteries are all connected in series, and the voltage has piled so much charge onto the actuator that it snaps the microbot’s wings down. The IC then unwinds the process, making the batteries parallel again, one at a time.
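The voltage ramp can be sketched as a toy model. This is not the actual UCSD IC, and the cell count here is a hypothetical number for illustration: each step moves one more cell from the parallel group into the series stack, so the output climbs one cell voltage at a time.

```python
# Toy model of staged battery serialization (not the actual UCSD IC).
# Each step moves one more cell from the parallel group into the series
# stack, so the actuator voltage ramps up one cell voltage at a time.
CELL_V = 4.0   # volts per cell (the CEA-Leti cells are roughly 4 V)
N_CELLS = 16   # hypothetical cell count, for illustration only

def output_voltage(cells_in_series: int) -> float:
    """Voltage across the stack when `cells_in_series` cells are serial."""
    # With one "serial" cell, everything is still in parallel at one cell's voltage.
    return CELL_V * max(1, cells_in_series)

ramp = [output_voltage(k) for k in range(1, N_CELLS + 1)]
print(ramp[0], ramp[-1])  # starts at 4.0 V, ends fully stacked at 64.0 V
```

Unwinding the stack is just the same loop run in reverse, which is when the charge-recovery effect described below comes into play.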

The integrated circuit in the “flying battery” has a total area of 2 square millimeters. Patrick Mercier

Adiabatic Charging

Why not just connect every battery in series at once instead of going through this ramping up and down scheme? In a word, efficiency.

As long as the battery serialization and parallelization is done at a low-enough frequency, the system is charging adiabatically. That is, its power losses are minimized.

But it’s what happens after the actuator triggers “where the real magic comes in,” says Mercier. The piezoelectric actuator in the circuit acts like a capacitor, storing energy. “Just like you have regenerative braking in a car, we can recover some of the energy that we stored in this actuator.” As each battery is unstacked, the remaining energy storage system has a lower voltage than the actuator, so some charge flows back into the batteries.

The UCSD team actually tested two varieties of solid-state microbatteries: a 1.5-volt ceramic version from Tokyo-based TDK (CeraCharge 1704-SSB) and a 4-V custom design from CEA-Leti. With 1.6 grams of TDK cells, the circuit reached 56.1 volts and delivered a power density of 79 milliwatts per gram, but with 0.014 grams of the custom storage, it maxed out at 68 V and demonstrated a power density of 4,500 mW/g.

Mercier plans to test the system with robotics partners while his team and CEA-Leti work to improve the flying batteries system’s packaging, miniaturization, and other properties. One important characteristic that needs work is the internal resistance of the microbatteries. “The challenge there is that the more you stack, the higher the series resistance is, and therefore the lower the frequency we can operate the system,” he says.

Nevertheless, Mercier seems bullish on flying batteries’ chances of keeping microbots aloft. “Adiabatic charging with charge recovery and no passives: Those are two wins that help increase flight time.”



Salto has been one of our favorite robots since we were first introduced to it in 2016 as a project out of Ron Fearing’s lab at UC Berkeley. The palm-sized spring-loaded jumping robot has gone from barely being able to chain together a few open-loop jumps to mastering landings, bouncing around outside, powering through obstacle courses, and occasionally exploding.

What’s quite unusual about Salto is that it’s still an active research project—nine years is an astonishingly long lifetime for any robot, especially one without any immediately obvious practical applications. But one of Salto’s original creators, Justin Yim (who is now a professor at the University of Illinois), has found a niche where Salto might be able to do what no other robot can: mid-air sampling of the water geysering out of the frigid surface of Enceladus, a moon of Saturn.

What makes Enceladus so interesting is that it’s completely covered in a 40-kilometer-thick sheet of ice, and underneath that ice is a 10-kilometer-deep global ocean. And within that ocean can be found—we know not what. Diving in that buried ocean is a problem that robots may be able to solve at some point, but in the near(er) term, Enceladus’ south pole is home to over a hundred cryovolcanoes that spew plumes of water vapor and all kinds of other stuff right out into space, offering a sampling opportunity to any robot that can get close enough for a sip.

“We can cover large distances, we can get over obstacles, we don’t require an atmosphere, and we don’t pollute anything.” —Justin Yim, University of Illinois

Yim, along with another Salto veteran, Ethan Schaler (now at JPL), has been awarded funding through NASA’s Innovative Advanced Concepts (NIAC) program to turn Salto into a robot that can perform “Legged Exploration Across the Plume,” or in an only moderately strained backronym, LEAP. LEAP would be a space-ified version of Salto with a couple of major modifications allowing it to operate in a freezing, airless, low-gravity environment.

Exploring Enceladus’ Challenging Terrain

As best as we can make out from images taken during Cassini flybys, the surface of Enceladus is unfriendly to traditional rovers, covered in ridges and fissures, although we don’t have very much information on the exact properties of the terrain. There’s also essentially no atmosphere, meaning that you can’t fly using aerodynamics, and if you use rockets to fly instead, you run the risk of your exhaust contaminating any samples that you take.

“This doesn’t leave us with a whole lot of options for getting around, but one that seems like it might be particularly suitable is jumping,” Yim tells us. “We can cover large distances, we can get over obstacles, we don’t require an atmosphere, and we don’t pollute anything.” And with Enceladus’ gravity being just 1/80th that of Earth, Salto’s meter-high jump on Earth would enable it to travel a hundred meters or so on Enceladus, taking samples as it soars through cryovolcano plumes.
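The scaling behind that claim falls out of idealized ballistics: for a fixed launch speed, both the peak height and the range of a hop scale as 1/g, so at roughly 1/80th of Earth’s gravity the same jump carries about 80 times as far. The sketch below is an idealized check under those assumptions (a 45-degree launch, no real launch dynamics), not a model of Salto itself:

```python
import math

G_EARTH = 9.81
G_ENCELADUS = G_EARTH / 80.0  # roughly 1/80th Earth gravity, per the article

def hop_range(v: float, g: float, angle_deg: float = 45.0) -> float:
    """Range of an idealized ballistic hop at launch speed v (no air drag)."""
    a = math.radians(angle_deg)
    return v**2 * math.sin(2 * a) / g

# Launch speed giving a ~1 m peak height on Earth at 45 degrees:
# peak = (v*sin(a))^2 / (2g) = 1 m  ->  v = sqrt(2g) / sin(45 deg)
v = math.sqrt(2 * G_EARTH) / math.sin(math.radians(45))

print(round(hop_range(v, G_EARTH), 1))  # 4.0 m on Earth
ratio = hop_range(v, G_ENCELADUS) / hop_range(v, G_EARTH)
print(round(ratio))  # 80: the same hop carries ~80x as far in Enceladus gravity
```

The “hundred meters or so” in the article is the loose version of this 80x factor; the exact distance depends on launch angle and on how much of Salto’s spring energy survives the trip to flight hardware.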

The current version of Salto does require an atmosphere, because it uses a pair of propellers as tiny thrusters to control yaw and roll. On LEAP, those thrusters would be replaced with an angled pair of reaction wheels instead. To deal with the terrain, the robot will also likely need a foot that can handle jumping from (and landing on) surfaces composed of granular ice particles.

LEAP is designed to jump through Enceladus’ many plumes to collect samples, and use the moon’s terrain to direct subsequent jumps. NASA/Justin Yim

While the vision is for LEAP to jump continuously, bouncing over the surface and through plumes in a controlled series of hops, sooner or later it’s going to have a bad landing, and the robot has to be prepared for that. “I think one of the biggest new technological developments is going to be multimodal locomotion,” explains Yim. “Specifically, we’d like to have a robust ability to handle falls.” The reaction wheels can help with this in two ways: they offer some protection by acting like a shell around the robot, and they can also operate as a regular pair of wheels, allowing the robot to roll around on the ground a little bit. “With some maneuvers that we’re experimenting with now, the reaction wheels might also be able to help the robot to pop itself back upright so that it can start jumping again after it falls over,” Yim says.

A NIAC project like this is about as early-stage as it gets for something like LEAP, and an Enceladus mission is very far away as measured by almost every metric—space, time, funding, policy, you name it. Long term, the idea with LEAP is that it could be an add-on to a mission concept called the Enceladus Orbilander. This US $2.5 billion spacecraft would launch sometime in the 2030s, and spend about a dozen years getting to Saturn and entering orbit around Enceladus. After 1.5 years in orbit, the spacecraft would land on the surface, and spend a further 2 years looking for biosignatures. The Orbilander itself would be stationary, Yim explains, “so having this robotic mobility solution would be a great way to do expanded exploration of Enceladus, getting really long distance coverage to collect water samples from plumes on different areas of the surface.”

LEAP has been funded through a nine-month Phase 1 study that begins this April. While the JPL team investigates ice-foot interactions and tries to figure out how to keep the robot from freezing to death, at the University of Illinois Yim will be upgrading Salto with self-righting capability. Honestly, it’s exciting to think that after so many years, Salto may have finally found an application where it offers the actual best solution for solving this particular problem of low-gravity mobility for science.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

A bioinspired robot developed at EPFL can change shape to alter its own physical properties in response to its environment, resulting in a robust and efficient autonomous vehicle as well as a fresh approach to robotic locomotion.

[ Science Robotics ] via [ EPFL ]

A robot CAN get up this way, but SHOULD a robot get up this way?

[ University of Illinois Urbana-Champaign ]

I’m impressed with the capabilities here, but not the use case. There are already automated systems that do this much faster, much more reliably, and almost certainly much more cheaply. So, probably best to think of this as more of a technology demo than anything with commercial potential.

[ Figure ]

NEO Gamma is the next generation of home humanoids designed and engineered by 1X Technologies. The Gamma series includes improvements across NEO’s hardware and AI, featuring a new design that is deeply considerate of life at home. The future of Home Humanoids is here.

You all know by now not to take this video too seriously, but I will say that an advantage of building a robot like this for the home is that realistically it can spend most of its time sitting down and (presumably) charging.

[ 1X Technologies ]

This video compilation showcases novel aerial and underwater drone platforms and an ultra-quiet electric vertical takeoff and landing (eVTOL) propeller. These technologies were developed by the Advanced Vertical Flight Laboratory (AVFL) at Texas A&M University and Harmony Aeronautics, an AVFL spin-off company.

[ AVFL ]

Yes! More research like this please; legged robots (of all sizes) are TOO STOMPY.

[ ETH Zurich ]

Robosquirrel!

[ BBC ] via [ Laughing Squid ]

By watching their own motions with a camera, robots can teach themselves about the structure of their own bodies and how they move, a new study from researchers at Columbia Engineering now reveals. Equipped with this knowledge, the robots could not only plan their own actions, but also overcome damage to their bodies.

[ Columbia University, School of Engineering and Applied Science ]

If I was asking my robot to do a front flip for the first(ish) time, my face would probably look like the poor guy at 0:25. But it worked!

[ EngineAI ]

*We kindly request that all users refrain from making any dangerous modifications or using the robots in a hazardous manner.

A hazardous manner? Like teaching it martial arts...?

[ Unitree ]

Explore SLAMSpoof—a cutting-edge project by Keio-CSG that demonstrates how LiDAR spoofing attacks can compromise SLAM systems. In this video, we explore how spoofing attacks can compromise the integrity of SLAM systems, review the underlying methodology, and discuss the potential security implications for robotics and autonomous navigation. Whether you’re a robotics enthusiast, a security researcher, or simply curious about emerging technologies, this video offers valuable insights into both the risks and the innovations in the field.

[ SLAMSpoof ]

Thanks, Kentaro!

Sanctuary AI, a company developing physical AI for general purpose robots, announced the integration of new tactile sensor technology into its Phoenix general purpose robots. The integration enables teleoperation pilots to more effectively leverage the dexterity capabilities of general purpose robots to achieve complex, touch-driven tasks with precision and accuracy.

[ Sanctuary AI ]

I don’t know whether it’s the shape or the noise or what, but this robot pleases me.

[ University of Pennsylvania, Sung Robotics Lab ]

Check out the top features of the new Husky A300, the next evolution of our rugged and customizable mobile robotic platform. Husky A300 offers superior performance, durability, and flexibility, empowering robotics researchers and innovators to tackle the most complex challenges in demanding environments.

[ Clearpath Robotics ]

The ExoMars Rosalind Franklin rover will drill deeper than any other mission has ever attempted on the Red Planet. Rosalind Franklin will be the first rover to reach a depth of up to two meters below the surface, acquiring samples that have been protected from harsh surface radiation and extreme temperatures.

[ European Space Agency ]

AI has been improving by leaps and bounds in recent years, and a string of new models can generate answers that almost feel as if they came from a person reasoning through a problem. But is AI actually close to reasoning like humans can? IBM distinguished scientist Murray Campbell chats with IBM Fellow Francesca Rossi about her time as president of the Association for the Advancement of Artificial Intelligence (AAAI). They discuss the state of AI, what modern reasoning models are actually doing, and whether we’ll see models that reason like we do.

[ IBM Research ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
ICRA 2025: 19–23 May 2025, ATLANTA, GA.
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

We’re introducing Helix, a generalist Vision-Language-Action (VLA) model that unifies perception, language understanding, and learned control to overcome multiple longstanding challenges in robotics.

This is moderately impressive; my favorite part is probably the handoffs and that extra little bit of HRI with what we’d call eye contact if these robots had faces. But keep in mind that you’re looking at close to best case for robotic manipulation, and that if the robots had been given the bag instead of well-spaced objects on a single color background, or if the fridge had a normal human amount of stuff in it, they might be having a much different time of it. Also, is it just me, or is the sound on this video very weird? Like, some things make noise, some things don’t, and the robots themselves occasionally sound more like someone just added in some “soft actuator sound” or something. Also also, I’m of a suspicious nature, and when there is an abrupt cut between “robot grasps door” and “robot opens door,” I assume the worst.

[ Figure ]

Researchers at EPFL have developed a highly agile flat swimming robot. Smaller than a credit card, the robot propels itself across the water surface using a pair of undulating soft fins. The fins are driven at resonance by artificial muscles, allowing the robot to perform complex maneuvers. In the future, this robot could be used to monitor water quality or to help measure fertilizer concentrations in rice fields.

[ Paper ] via [ Science Robotics ]

I don’t know about you, but I always dance better when getting beaten with a stick.

[ Unitree Robotics ]

This is big news, people: Sweet Bite Ham Ham, one of the greatest and most useless robots of all time, has a new treat.

All yours for about US $100, overseas shipping included.

[ Ham Ham ] via [ Robotstart ]

MagicLab has announced the launch of its first-generation, self-developed dexterous hand, the MagicHand S01. The MagicHand S01 has 11 degrees of freedom and a load capacity of up to 5 kilograms per hand; in work environments, it can carry loads of over 20 kilograms.

[ MagicLab ]

Thanks, Ni Tao!

No, I’m not creeped out at all, why?

[ Clone Robotics ]

Happy 40th Birthday to the MIT Media Lab!

Since 1985, the MIT Media Lab has provided a home for interdisciplinary research, transformative technologies, and innovative approaches to solving some of humanity’s greatest challenges. As we celebrate our 40th anniversary year, we’re looking ahead to decades more of imagining, designing, and inventing a future in which everyone has the opportunity to flourish.

[ MIT Media Lab ]

While most soft pneumatic grippers that operate with a single control parameter (such as pressure or airflow) are limited to a single grasping modality, this article introduces a new method for incorporating multiple grasping modalities into vacuum-driven soft grippers. This is achieved by combining stiffness manipulation with a bistable mechanism. Adjusting the airflow tunes the energy barrier of the bistable mechanism, enabling changes in triggering sensitivity and allowing swift transitions between grasping modes. This results in an exceptionally versatile gripper, capable of handling a diverse range of objects with varying sizes, shapes, stiffness, and roughness, all controlled by a single parameter, airflow, and its interaction with objects.

[ Paper ] via [ BruBotics ]

Thanks, Bram!

In this article, we present a design concept in which a monolithic soft body is combined with a vibration-driven mechanism, called Leafbot. This investigation aims to build a foundation for further terradynamics studies of vibration-driven soft robots in more complicated and confined environments, with potential applications in inspection tasks.

[ Paper ] via [ IEEE Transactions on Robotics ]

We present a hybrid aerial-ground robot that combines the versatility of a quadcopter with enhanced terrestrial mobility. The vehicle features a passive, reconfigurable single wheeled leg, enabling seamless transitions between flight and two ground modes: a stable stance and a dynamic cruising configuration.

[ Robotics and Intelligent Systems Laboratory ]

I’m not sure I’ve ever seen this trick performed by a robot with soft fingers before.

[ Paper ]

There are a lot of robots involved in car manufacturing. Like, a lot.

[ Kawasaki Robotics ]

Steve Willits shows us some recent autonomous drone work being done at the AirLab at CMU’s Robotics Institute.

[ Carnegie Mellon University Robotics Institute ]

Somebody’s got to test all those luxury handbags and purses. And by somebody, I mean somerobot.

[ Qb Robotics ]

Do not trust people named Evan.

[ Tufts University Human-Robot Interaction Lab ]

Meet the Mind: MIT Professor Andreea Bobu.

[ MIT ]



About a year ago, Boston Dynamics released a research version of its Spot quadruped robot, which comes with a low-level application programming interface (API) that allows direct control of Spot’s joints. Even back then, the rumor was that this API unlocked some significant performance improvements on Spot, including a much faster running speed. That rumor came from the Robotics and AI (RAI) Institute, formerly The AI Institute, formerly the Boston Dynamics AI Institute, and if you were at Marc Raibert’s talk at the ICRA@40 conference in Rotterdam last fall, you already know that it turned out not to be a rumor at all.

Today, we’re able to share some of the work that the RAI Institute has been doing to apply reality-grounded reinforcement learning techniques to enable much higher performance from Spot. The same techniques can also help highly dynamic robots operate robustly, and there’s a brand new hardware platform that shows this off: an autonomous bicycle that can jump.

See Spot Run

This video shows Spot running at a sustained speed of 5.2 meters per second (11.6 miles per hour). Out of the box, Spot’s top speed is 1.6 m/s, meaning that RAI’s Spot has more than tripled (!) the quadruped’s factory speed.

If Spot running this quickly looks a little strange, that’s probably because it is strange, in the sense that the way this robot dog’s legs and body move as it runs is not very much like how a real dog runs at all. “The gait is not biological, but the robot isn’t biological,” explains Farbod Farshidian, roboticist at the RAI Institute. “Spot’s actuators are different from muscles, and its kinematics are different, so a gait that’s suitable for a dog to run fast isn’t necessarily best for this robot.”

The best way Farshidian can categorize Spot’s gait is that it’s somewhat similar to a trotting gait, except with an added flight phase (with all four feet off the ground at once) that technically turns it into a run. This flight phase is necessary, Farshidian says, because the robot needs that time to successively pull its feet forward fast enough to maintain its speed. This is a “discovered behavior,” in that the robot was not explicitly programmed to “run,” but rather was just required to find the best way of moving as fast as possible.

Reinforcement Learning Versus Model Predictive Control

The Spot controller that ships with the robot when you buy it from Boston Dynamics is based on model predictive control (MPC), which involves creating a software model that approximates the dynamics of the robot as best you can, and then solving an optimization problem for the tasks that you want the robot to do in real time. It’s a very predictable and reliable method for controlling a robot, but it’s also somewhat rigid, because that original software model won’t be close enough to reality to let you really push the limits of the robot. And if you try to say, “Okay, I’m just going to make a superdetailed software model of my robot and push the limits that way,” you get stuck because the optimization problem has to be solved for whatever you want the robot to do, in real time, and the more complex the model is, the harder it is to do that quickly enough to be useful. Reinforcement learning (RL), on the other hand, learns offline. You can use as complex a model as you want, and then take all the time you need in simulation to train a control policy that can then be run very efficiently on the robot.
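That trade-off can be made concrete with a toy sketch. The one-dimensional “robot,” the cost terms, and the random-search trainer below are all invented stand-ins (nothing here is Spot’s real model or RAI’s actual pipeline): the MPC controller re-solves a model-based optimization inside every control step, while the RL-style controller does all of its expensive searching offline and deploys a policy that is nearly free to evaluate.

```python
import numpy as np

# Toy 1-D "robot" with velocity drag. All dynamics and numbers here are
# illustrative stand-ins, not Spot's actual model.
def step(x, v, u, dt=0.05):
    v = v + (u - 0.5 * v) * dt
    return x + v * dt, v

def mpc_action(x, v, target, horizon=10):
    """Online MPC: at every control step, roll the model forward for each
    candidate action (held constant over the horizon) and pick the one with
    the lowest accumulated cost. This search must finish within a single
    control period, which is why model complexity is capped."""
    best_u, best_cost = 0.0, float("inf")
    for u in np.linspace(-1.0, 1.0, 21):
        xi, vi, cost = x, v, 0.0
        for _ in range(horizon):
            xi, vi = step(xi, vi, u)
            cost += (xi - target) ** 2 + 0.1 * vi ** 2
        if cost < best_cost:
            best_u, best_cost = float(u), cost
    return best_u

def rl_action(theta, x, v, target):
    # Deployment-time policy: a clipped linear feedback law, trivially
    # cheap compared with re-solving an optimization each step.
    return float(np.clip(theta[0] * (target - x) - theta[1] * v, -1.0, 1.0))

def rollout_return(theta, target, steps=100):
    # Simulated episode return: negative accumulated position error.
    x, v, ret = 0.0, 0.0, 0.0
    for _ in range(steps):
        x, v = step(x, v, rl_action(theta, x, v, target))
        ret -= (x - target) ** 2
    return ret

def train_rl_policy(target, iters=300, seed=0):
    """Offline policy search (a crude stand-in for RL training): slow,
    free to use an arbitrarily detailed simulator, and run only once.
    The product is just two feedback gains."""
    rng = np.random.default_rng(seed)
    best_theta, best_ret = np.zeros(2), -np.inf
    for _ in range(iters):
        theta = best_theta + rng.normal(0.0, 0.3, 2)
        ret = rollout_return(theta, target)
        if ret > best_ret:
            best_theta, best_ret = theta, ret
    return best_theta
```

The structural point is in where the compute happens: `mpc_action` pays for its model fidelity on every tick, while `train_rl_policy` pays once, offline, and `rl_action` costs a single dot product at run time.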

In simulation, a couple of Spots (or hundreds of Spots) can be trained in parallel for robust real-world performance. Robotics and AI Institute

In the example of Spot’s top speed, it’s simply not possible to model every last detail for all of the robot’s actuators within a model-based control system that would run in real time on the robot. So instead, simplified (and typically very conservative) assumptions are made about what the actuators are actually doing so that you can expect safe and reliable performance.

Farshidian explains that these assumptions make it difficult to develop a useful understanding of what performance limitations actually are. “Many people in robotics know that one of the limitations of running fast is that you’re going to hit the torque and velocity maximum of your actuation system. So, people try to model that using the data sheets of the actuators. For us, the question that we wanted to answer was whether there might exist some other phenomena that was actually limiting performance.”

Searching for these other phenomena involved bringing new data into the reinforcement learning pipeline, like detailed actuator models learned from the real-world performance of the robot. In Spot’s case, that provided the answer to high-speed running. It turned out that what was limiting Spot’s speed was not the actuators themselves, nor any of the robot’s kinematics: It was simply the batteries not being able to supply enough power. “This was a surprise for me,” Farshidian says, “because I thought we were going to hit the actuator limits first.”
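As a back-of-the-envelope illustration of that finding (with invented numbers, not Spot’s real specifications), a datasheet-only actuator model and a power-budget-aware one diverge precisely at high joint speeds:

```python
def available_torque(omega, tau_max=60.0, p_batt=2000.0):
    """Deliverable torque (N*m) at joint speed omega (rad/s) under a shared
    battery power budget p_batt (W). A datasheet-only model would return
    tau_max everywhere; the power budget instead caps torque at
    p_batt / omega once the joint spins fast enough. All numbers here are
    illustrative, not Spot's real specifications."""
    if omega <= 0.0:
        return tau_max
    return min(tau_max, p_batt / omega)

# Below the crossover speed p_batt / tau_max the actuator limit binds;
# above it the battery limit binds, and torque falls off as 1/omega.
crossover = 2000.0 / 60.0  # about 33.3 rad/s with these made-up numbers
```

A simulator built only from the datasheet would happily command full torque at any speed; one that includes the supply-power cap discovers, as RAI did, that the battery becomes the binding constraint when the robot moves fast.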

Spot’s power system is complex enough that there’s likely some additional wiggle room, and Farshidian says the only thing that prevented them from pushing Spot’s top speed past 5.2 m/s is that they didn’t have access to the battery voltages so they weren’t able to incorporate that real-world data into their RL model. “If we had beefier batteries on there, we could have run faster. And if you model that phenomena as well in our simulator, I’m sure that we can push this farther.”

Farshidian emphasizes that RAI’s technique is about much more than just getting Spot to run fast—it could also be applied to making Spot move more efficiently to maximize battery life, or more quietly to work better in an office or home environment. Essentially, this is a generalizable tool that can find new ways of expanding the capabilities of any robotic system. And when real-world data is used to make a simulated robot better, you can ask the simulation to do more, with confidence that those simulated skills will successfully transfer back onto the real robot.

Ultra Mobility Vehicle: Teaching Robot Bikes to Jump

Reinforcement learning isn’t just good for maximizing the performance of a robot—it can also make that performance more reliable. The RAI Institute has been experimenting with a completely new kind of robot that it invented in-house: a little jumping bicycle called the Ultra Mobility Vehicle, or UMV, which was trained to do parkour using essentially the same RL pipeline for balancing and driving as was used for Spot’s high-speed running.

There’s no independent physical stabilization system (like a gyroscope) keeping the UMV from falling over; it’s just a normal bike that can move forward and backward and turn its front wheel. As much mass as possible is then packed into the top bit, which actuators can rapidly accelerate up and down. “We’re demonstrating two things in this video,” says Marco Hutter, director of the RAI Institute’s Zurich office. “One is how reinforcement learning helps make the UMV very robust in its driving capabilities in diverse situations. And second, how understanding the robots’ dynamic capabilities allows us to do new things, like jumping on a table which is higher than the robot itself.”

“The key of RL in all of this is to discover new behavior and make this robust and reliable under conditions that are very hard to model. That’s where RL really, really shines.” —Marco Hutter, The RAI Institute

As impressive as the jumping is, for Hutter, it’s just as difficult (if not more difficult) to do maneuvers that may seem fairly simple, like riding backwards. “Going backwards is highly unstable,” Hutter explains. “At least for us, it was not really possible to do that with a classical [MPC] controller, particularly over rough terrain or with disturbances.”

Getting this robot out of the lab and onto terrain to do proper bike parkour is a work in progress that the RAI Institute says it will be able to demonstrate in the near future, but it’s really not about what this particular hardware platform can do—it’s about what any robot can do through RL and other learning-based methods, says Hutter. “The bigger picture here is that the hardware of such robotic systems can in theory do a lot more than we were able to achieve with our classic control algorithms. Understanding these hidden limits in hardware systems lets us improve performance and keep pushing the boundaries on control.”

Teaching the UMV to drive itself down stairs in sim results in a real robot that can handle stairs at any angle. Robotics and AI Institute

Reinforcement Learning for Robots Everywhere

Just a few weeks ago, the RAI Institute announced a new partnership with Boston Dynamics “to advance humanoid robots through reinforcement learning.” Humanoids are just another kind of robotic platform, albeit a significantly more complicated one with many more degrees of freedom and things to model and simulate. But when considering the limitations of model predictive control for this level of complexity, a reinforcement learning approach seems almost inevitable, especially when such an approach is already streamlined due to its ability to generalize.

“One of the ambitions that we have as an institute is to have solutions which span across all kinds of different platforms,” says Hutter. “It’s about building tools, about building infrastructure, building the basis for this to be done in a broader context. So not only humanoids, but driving vehicles, quadrupeds, you name it. But doing RL research and showcasing some nice first proof of concept is one thing—pushing it to work in the real world under all conditions, while pushing the boundaries in performance, is something else.”

Transferring skills into the real world has always been a challenge for robots trained in simulation, precisely because simulation is so friendly to robots. “If you spend enough time,” Farshidian explains, “you can come up with a reward function where eventually the robot will do what you want. What often fails is when you want to transfer that sim behavior to the hardware, because reinforcement learning is very good at finding glitches in your simulator and leveraging them to do the task.”

Simulation has been getting much, much better, with new tools, more accurate dynamics, and lots of computing power to throw at the problem. “It’s a hugely powerful ability that we can simulate so many things, and generate so much data almost for free,” Hutter says. But the usefulness of that data is in its connection to reality, making sure that what you’re simulating is accurate enough that a reinforcement learning approach will in fact solve for reality. Bringing physical data collected on real hardware back into the simulation, Hutter believes, is a very promising approach, whether it’s applied to running quadrupeds or jumping bicycles or humanoids. “The combination of the two—of simulation and reality—that’s what I would hypothesize is the right direction.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA

Enjoy today’s videos!

There is an immense amount of potential for innovation and development in the field of human-robot collaboration — and we’re excited to release Meta PARTNR, a research framework that includes a large-scale benchmark, dataset and large planning model to jump start additional research in this exciting field.

[ Meta PARTNR ]

Humanoid is the first AI and robotics company in the UK, creating the world’s leading, commercially scalable, and safe humanoid robots.

[ Humanoid ]

To complement our review paper, “Grand Challenges for Burrowing Soft Robots,” we present a compilation of soft burrowers, both organic and robotic. Soft organisms use specialized mechanisms for burrowing in granular media, which have inspired the design of many soft robots. To improve the burrowing efficacy of soft robots, there are many grand challenges that must be addressed by roboticists.

[ Faboratory Research ] at [ Yale University ]

Three small lunar rovers were packed up at NASA’s Jet Propulsion Laboratory for the first leg of their multistage journey to the Moon. These suitcase-size rovers, along with a base station and camera system that will record their travels on the lunar surface, make up NASA’s CADRE (Cooperative Autonomous Distributed Robotic Exploration) technology demonstration.

[ NASA ]

MenteeBot V3.0 is a fully vertically integrated humanoid robot, with full-stack AI and proprietary hardware.

[ Mentee Robotics ]

What do assistance robots look like? From robotic arms attached to a wheelchair to autonomous robots that can pick up and carry objects on their own, assistive robots are making a real difference to the lives of people with limited motor control.

[ Cybathlon ]

Robots cannot perform reactive manipulation, and they mostly operate in open loop while interacting with their environment. Consequently, current manipulation algorithms either are very inefficient or work only in highly structured environments. In this paper, we present closed-loop control of a complex manipulation task in which a robot uses a tool to interact with objects.

[ Paper ] via [ Mitsubishi Electric Research Laboratories ]

Thanks, Yuki!

When the future becomes the present, anything is possible. In our latest campaign, “The New Normal,” we highlight the journey our riders experience from first seeing Waymo to relishing in the magic of their first ride. How did your first-ride feeling change the way you think about the possibilities of AVs?

[ Waymo ]

One of a humanoid robot’s unique advantages lies in its bipedal mobility, allowing it to navigate diverse terrains with efficiency and agility. This capability enables Moby to move freely through various environments and assist with high-risk tasks in critical industries like construction, mining, and energy.

[ UCR ]

Although robots are just tools to us, it’s still important to make them somewhat expressive so they can better integrate into our world. So, we created a small animation of the robot waking up—one that it executes all by itself!

[ Pollen Robotics ]

In this live demo, an OTTO AMR expert will walk through the key differences between AGVs and AMRs, highlighting how OTTO AMRs address challenges that AGVs cannot.

[ OTTO ] by [ Rockwell Automation ]

This Carnegie Mellon University Robotics Institute Seminar is from CMU’s Aaron Johnson, on “Uncertainty and Contact with the World.”

As robots move out of the lab and factory and into more challenging environments, uncertainty in the robot’s state, dynamics, and contact conditions becomes a fact of life. In this talk, I’ll present some recent work in handling uncertainty in dynamics and contact conditions, in order to both reduce that uncertainty where we can but also generate strategies that do not require perfect knowledge of the world state.

[ CMU RI ]



In theory, one of the main applications for robots should be operating in environments that (for whatever reason) are too dangerous for humans. I say “in theory” because in practice it’s difficult to get robots to do useful stuff in semi-structured or unstructured environments without direct human supervision. This is why there’s been some emphasis recently on teleoperation: Human software teaming up with robot hardware can be a very effective combination.

For this combination to work, you need two things. First, an intuitive control system that lets the user embody themselves in the robot to pilot it effectively. And second, a robot that can deliver on the kind of embodiment that the human pilot needs. The second bit is the more challenging, because humans have very high standards for mobility, strength, and dexterity. But researchers at the Italian Institute of Technology (IIT) have a system that manages to check both boxes, thanks to its enormously powerful quadruped, which now sports a pair of massive arms on its head.

“The primary goal of this project, conducted in collaboration with INAIL, is to extend human capabilities to the robot, allowing operators to perform complex tasks remotely in hazardous and unstructured environments to mitigate risks to their safety by exploiting the robot’s capabilities,” explains Claudio Semini, who leads the Robot Teleoperativo project at IIT. The project is based around the HyQReal hydraulic quadruped, the most recent addition to IIT’s quadruped family.

Hydraulics have been very visibly falling out of favor in robotics, because they’re complicated and messy, and in general robots don’t need the absurd power density that comes with hydraulics. But there are still a few robots in active development that use hydraulics specifically because of all that power. If your robot needs to be highly dynamic or lift really heavy things, hydraulics are, at least for now, where it’s at.

IIT’s HyQReal quadruped is one of those robots. If you need something that can carry a big payload, like a pair of massive arms, this is your robot. Back in 2019, we saw HyQReal pulling a three-tonne airplane. HyQReal itself weighs 140 kilograms, and its knee joints can output up to 300 newton-meters of torque. The hydraulic system is powered by onboard batteries and can provide up to 4 kilowatts of power. It also uses some of Moog’s lovely integrated smart actuators, which sadly don’t seem to be in development anymore. Beyond just lifting heavy things, HyQReal’s mass and power make it a very stable platform, and its aluminum roll cage and Kevlar skin ensure robustness.

The HyQReal hydraulic quadruped is tethered for power during experiments at IIT, but it can also run on battery power. IIT

The arms that HyQReal is carrying are IIT-INAIL arms, which weigh 10 kg each and have a payload of 5 kg per arm. To put that in perspective, the maximum payload of a Boston Dynamics Spot robot is only 14 kg. The head-mounted configuration of the arms means they can reach the ground, and they also have an overlapping workspace to enable bimanual manipulation, which is enhanced by HyQReal’s ability to move its body to assist the arms with their reach. “The development of core actuation technologies with high power, low weight, and advanced control has been a key enabler in our efforts,” says Nikos Tsagarakis, head of the HHCM Lab at IIT. “These technologies have allowed us to realize a low-weight bimanual manipulation system with high payload capacity and large workspace, suitable for integration with HyQReal.”

Maximizing reachable space is important, because the robot will be under the remote control of a human, who probably has no particular interest in or care for mechanical or power constraints—they just want to get the job done.

To get the job done, IIT has developed a teleoperation system, which is weird-looking because it features a very large workspace that allows the user to leverage more of the robot’s full range of motion. Having tried a bunch of different robotic telepresence systems, I can vouch for how important this is: It’s super annoying to be doing some task through telepresence, and then hit a joint limit on the robot and have to pause to reset your arm position. “That is why it is important to study operators’ quality of experience. It allows us to design the haptic and teleoperation systems appropriately because it provides insights into the levels of delight or frustration associated with immersive visualization, haptic feedback, robot control, and task performance,” confirms Ioannis Sarakoglou, who is responsible for the development of the haptic teleoperation technologies in the HHCM Lab. The whole thing takes place in a fully immersive VR environment, of course, with some clever bandwidth optimization inspired by the way humans see that transmits higher-resolution images only where the user is looking.
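That gaze-contingent idea can be sketched in a few lines. This is a toy NumPy version under loud assumptions (a single grayscale frame, a square fovea, per-tile averaging as the “low-resolution stream”), not IIT’s actual pipeline:

```python
import numpy as np

def foveate(img, gaze, fovea=64, block=8):
    """Crude foveated-compression sketch: keep full resolution inside a
    square fovea around the gaze point (row, col); elsewhere, replace each
    block x block tile with its mean, so only one value per tile would
    need to be transmitted."""
    h, w = img.shape
    out = img.astype(float).copy()
    # Replace each tile by its mean (what the low-res stream would carry).
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = img[y:y + block, x:x + block].mean()
    # Paste back the full-resolution fovea around the gaze point.
    gy, gx = gaze
    y0, y1 = max(0, gy - fovea // 2), min(h, gy + fovea // 2)
    x0, x1 = max(0, gx - fovea // 2), min(w, gx + fovea // 2)
    out[y0:y1, x0:x1] = img[y0:y1, x0:x1]
    return out

def transmitted_values(h, w, fovea=64, block=8):
    """Rough count of values sent per frame: one per background tile,
    plus the full-resolution fovea (fovea overlap not deducted)."""
    return (h // block) * (w // block) + fovea * fovea
```

For a 256 x 256 frame with these parameters, the scheme sends a few thousand values instead of 65,536, while the region the operator is actually looking at stays pixel-perfect.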

HyQReal’s telepresence control system offers haptic feedback and a large workspace.IIT

Telepresence Robots for the Real World

The system is designed to be used in hazardous environments where you wouldn’t want to send a human. That’s why IIT’s partner on this project is INAIL, Italy’s National Institute for Insurance Against Accidents at Work, which is understandably quite interested in finding ways in which robots can be used to keep humans out of harm’s way.

In tests with Italian firefighters in 2022 (using an earlier version of the robot with a single arm), operators were able to use the system to extinguish a simulated tunnel fire. It’s a good first step, but Semini has plans to push the system to handle “more complex, heavy, and demanding tasks, which will better show its potential for real-world applications.”

As always with robots and real-world applications, there’s still a lot of work to be done, Semini says. “The reliability and durability of the systems in extreme environments have to be improved,” he says. “For instance, the robot must endure intense heat and prolonged flame exposure in firefighting without compromising its operational performance or structural integrity.” There’s also managing the robot’s energy consumption (which is not small) to give it a useful operating time, and managing the amount of information presented to the operator to boost situational awareness while still keeping things streamlined and efficient. “Overloading operators with too much information increases cognitive burden, while too little can lead to errors and reduce situational awareness,” says Yonas Tefera, who led the development of the immersive interface. “Advances in immersive mixed-reality interfaces and multimodal controls, such as voice commands and eye tracking, are expected to improve efficiency and reduce fatigue in the future.”

There’s a lot of crossover here with the goals of the DARPA Robotics Challenge for humanoid robots, except IIT’s system is arguably much more realistically deployable than any of those humanoids are, at least in the near term. While we’re just starting to see the potential of humanoids in carefully controlled environments, quadrupeds are already out there in the world, proving how reliable their four-legged mobility is. Manipulation is the obvious next step, but it has to be more than just opening doors. We need it to use tools, lift debris, and all that other DARPA Robotics Challenge stuff that will keep humans safe. That’s what Robot Teleoperativo is trying to make real.

You can find more detail about the Robot Teleoperativo project in this paper, presented in November at the 2024 IEEE Conference on Telepresence, in Pasadena, Calif.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
European Robotics Forum: 25–27 March 2025, STUTTGART, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL

Enjoy today’s videos!

Humanoid robots hold the potential for unparalleled versatility for performing human-like, whole-body skills. ASAP enables highly agile motions that were previously difficult to achieve, demonstrating the potential of delta action learning in bridging simulation and real-world dynamics. These results suggest a promising sim-to-real direction for developing more expressive and agile humanoids.

[ ASAP ] from [ Carnegie Mellon University ] and [ Nvidia ]

Big News: Swiss-Mile is now RIVR! We’re thrilled to unveil our new identity as RIVR, reflecting our evolution from a university spin-off to a global leader in Physical AI and robotics. In 2025, we’ll be deploying our groundbreaking wheeled-legged robots with major logistics carriers for last-mile delivery to set new standards for efficiency and sustainability.

[ RIVR ]

Robotics is one of the best ways to reduce worker exposure to safety risks. However, one of the biggest barriers to adopting robots in these industries is the challenge of navigating the rugged terrain found in these environments. UCR’s robots navigate difficult terrain, debris-strewn floors, and confined spaces without requiring facility modifications, disrupting existing workflows, or compromising schedules, significantly improving efficiency while keeping workers safe.

[ UCR ]

This paper introduces a safety filter to ensure collision avoidance for multirotor aerial robots. The proposed method allows computational scalability against thousands of constraints and, thus, complex scenes with numerous obstacles. We experimentally demonstrate its ability to guarantee the safety of a quadrotor with an onboard LiDAR, operating in both indoor and outdoor cluttered environments against both naive and adversarial nominal policies.

[ Autonomous Robots Lab ]

Thanks, Kostas!

Brightpick Giraffe is an autonomous mobile robot (AMR) capable of reaching heights of 20 feet (6 m), resulting in three times the warehouse storage density compared to manual operations.

[ Giraffe ] via [ TechCrunch ]

IROS 2025, coming this fall in Hangzhou, China.

[ IROS 2025 ]

This cute lil guy is from a “Weak Robots Exhibition” in Japan.

[ RobotStart ]

I see no problem with cheating via infrastructure to make autonomous vehicles more reliable.

[ Oak Ridge National Laboratory ]

I am not okay with how this coffee cup is handled. Neither is my editor.

[ Qb Robotics ]

Non-prehensile pushing to move and re-orient objects to a goal is a versatile loco-manipulation skill. In this paper, we develop a learning-based controller for a mobile manipulator to move an unknown object to a desired position and yaw orientation through a sequence of pushing actions. Through our extensive hardware experiments, we show that the approach demonstrates high robustness against unknown objects of different masses, materials, sizes, and shapes.

[ Paper ] from [ ETH Zurich and Istituto Italiano di Tecnologia ]

Verity, On, and Maersk have collaborated to bridge the gap between the physical and digital supply chain—piloting RFID-powered autonomous inventory tracking at a Maersk facility in California. Through RFID integration, Verity pushes inventory visibility to unprecedented levels.

[ Verity ]

For some reason, KUKA is reaffirming its commitment to environmental responsibility and diversity.

[ KUKA ]

Here’s a panel from the recent Humanoids Summit on generative AI for robotics, which includes panelists from OpenAI and Agility Robotics. Just don’t mind the moderator, he’s a bit of a dork.

[ Humanoids Summit ]



The 2004 DARPA Grand Challenge was a spectacular failure. The Defense Advanced Research Projects Agency had offered a US $1 million prize for the team that could design an autonomous ground vehicle capable of completing an off-road course through sometimes flat, sometimes winding and mountainous desert terrain. As IEEE Spectrum reported at the time, it was “the motleyest assortment of vehicles assembled in one place since the filming of Mad Max 2: The Road Warrior.” Not a single entrant made it across the finish line. Some didn’t make it out of the parking lot.

Videos of the attempts are comical, although any laughter comes at the expense of the many engineers who spent countless hours and millions of dollars to get to that point.

So it’s all the more remarkable that in the second DARPA Grand Challenge, just a year and a half later, five vehicles crossed the finish line. Stanley, developed by the Stanford Racing Team, eked out a first-place win to claim the $2 million purse. This modified Volkswagen Touareg [shown at top] completed the 212-kilometer course in 6 hours, 54 minutes. Carnegie Mellon’s Sandstorm and H1ghlander took second and third place, respectively, with times of 7:05 and 7:14.

Kat-5, sponsored by the Gray Insurance Co. of Metairie, La., came in fourth with a respectable 7:30. The vehicle was named after Hurricane Katrina, which had just pummeled the Gulf Coast a month and a half earlier. Oshkosh Truck’s TerraMax also finished the circuit, although its time of 12:51 exceeded the 10-hour time limit set by DARPA.

So how did the Grand Challenge go from a total bust to having five robust finishers in such a short period of time? It’s definitely a testament to what can be accomplished when engineers rise to a challenge. But the outcome of this one race was preceded by a much longer path of research, and that plus a little bit of luck are what ultimately led to victory.

Before Stanley, there was Minerva

Let’s back up to 1998, when computer scientist Sebastian Thrun was working at Carnegie Mellon and experimenting with a very different robot: a museum tour guide. For two weeks in the summer, Minerva, which looked a bit like a Dalek from “Doctor Who,” navigated an exhibit at the Smithsonian National Museum of American History. Its main task was to roll around and dispense nuggets of information about the displays.

Minerva was a museum tour-guide robot developed by Sebastian Thrun.

In an interview at the time, Thrun acknowledged that Minerva was there to entertain. But Minerva wasn’t just a people pleaser; it was also a machine learning experiment. It had to learn where it could safely maneuver without taking out a visitor or a priceless artifact. Visitor, nonvisitor; display case, not-display case; open floor, not-open floor. It had to react to humans crossing in front of it in unpredictable ways. It had to learn to “see.”

Fast-forward five years: Thrun transferred to Stanford in July 2003. Inspired by the first Grand Challenge, he organized the Stanford Racing Team with the aim of fielding a robotic car in the second competition.

In a vast oversimplification of Stanley’s main task, the autonomous robot had to differentiate between road and not-road in order to navigate the route successfully. The Stanford team decided to focus its efforts on developing software and used as much off-the-shelf hardware as it could, including a laser to scan the immediate terrain and a simple video camera to scan the horizon. Software overlapped the two inputs, adapted to the changing road conditions on the fly, and determined a safe driving speed. (For more technical details on Stanley, check out the team’s paper.) A remote-control kill switch, which DARPA required on all vehicles, would deactivate the car before it could become a danger. About 100,000 lines of code did that and much more.
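The core idea in that paragraph — trust the laser up close, let the camera extend the horizon, then scale speed to how far ahead the terrain looks drivable — can be sketched roughly as follows. This is an illustrative toy, not Stanley’s actual code; the function names, weights, and threshold are all assumptions made up for the example.

```python
def fuse_drivability(laser_cells, camera_cells):
    """Blend per-cell drivability probabilities from two sensors.

    laser_cells: probabilities for cells near the vehicle (None where
                 the laser has no return); camera_cells: same-length
                 estimates reaching farther out. Weights are illustrative.
    """
    fused = []
    for laser_p, camera_p in zip(laser_cells, camera_cells):
        if laser_p is not None:
            # Laser dominates where it has a return.
            fused.append(0.7 * laser_p + 0.3 * camera_p)
        else:
            # Beyond laser range, fall back to the camera alone.
            fused.append(camera_p)
    return fused

def safe_speed(fused, v_max=11.0, threshold=0.8):
    """Scale speed by how many consecutive cells ahead look drivable."""
    lookahead = 0
    for p in fused:
        if p < threshold:
            break
        lookahead += 1
    return v_max * lookahead / len(fused) if fused else 0.0
```

With three of four look-ahead cells confidently drivable, the sketch commands three-quarters of top speed; as the drivable horizon shrinks, so does the speed, which mirrors the adaptive behavior the article describes.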

The Stanford team hadn’t entered the 2004 Grand Challenge and wasn’t expected to win the 2005 race. Carnegie Mellon, meanwhile, had two entries—a modified 1986 Humvee and a modified 1999 Hummer—and was the clear favorite. In the 2004 race, CMU’s Sandstorm had gone furthest, completing 12 km. For the second race, CMU brought an improved Sandstorm as well as a new vehicle, H1ghlander.

Many of the other 2004 competitors regrouped to try again, and new ones entered the fray. In all, 195 teams applied to compete in the 2005 event. Teams included students, academics, industry experts, and hobbyists.

After site visits in the spring, 43 teams made it to the qualifying event, held 27 September through 5 October at the California Speedway, in Fontana. Each vehicle took four runs through the course, navigating through checkpoints and avoiding obstacles. A total of 23 teams were selected to attempt the main course across the Mojave Desert. Competing was a costly endeavor—CMU’s Red Team spent more than $3 million in its first year—and the names of sponsors were splashed across the vehicles like the logos on race cars.

In the early hours of 8 October, the finalists gathered for the big race. Each team had a staggered start time to help avoid congestion along the route. About two hours before a team’s start, DARPA gave them a CD containing approximately 3,000 GPS coordinates representing the course. Once the team hit go, it was hands off: The car had to drive itself without any human intervention. PBS’s NOVA produced an excellent episode on the 2004 and 2005 Grand Challenges that I highly recommend if you want to get a feel for the excitement, anticipation, disappointment, and triumph.

In the 2005 Grand Challenge, Carnegie Mellon University’s H1ghlander was one of five autonomous cars to finish the race.Damian Dovarganes/AP

H1ghlander held the pole position, having placed first in the qualifying rounds, followed by Stanley and Sandstorm. H1ghlander pulled ahead early and soon had a substantial lead. That’s where luck, or rather the lack of it, came in.

About two hours into the race, H1ghlander slowed down and started rolling backward down a hill. Although it eventually resumed moving forward, it never regained its top speed, even on long, straight, level sections of the course. The slower but steadier Stanley caught up to H1ghlander at the 163-km (101.5-mile) marker, passed it, and never let go of the lead.

What went wrong with H1ghlander remained a mystery, even after extensive postrace analysis. It wasn’t until 12 years after the race—and once again with a bit of luck—that CMU discovered the problem: Pressing on a small electronic filter between the engine control module and the fuel injector caused the engine to lose power and even turn off. Team members speculated that an accident a few weeks before the competition had damaged the filter. (To learn more about how CMU finally figured this out, see Spectrum Senior Editor Evan Ackerman’s 2017 story.)

The Legacy of the DARPA Grand Challenge

Regardless of who won the Grand Challenge, many success stories came out of the contest. A year and a half after the race, Thrun had already made great progress on adaptive cruise control and lane-keeping assistance, which is now readily available on many commercial vehicles. He then worked on Google’s Street View and its initial self-driving cars. CMU’s Red Team worked with NASA to develop rovers for potentially exploring the moon or distant planets. Closer to home, they helped develop self-propelled harvesters for the agricultural sector.

Stanford team leader Sebastian Thrun holds a $2 million check, the prize for winning the 2005 Grand Challenge.Damian Dovarganes/AP

Of course, there was also a lot of hype, which tended to overshadow the race’s militaristic origins—remember, the “D” in DARPA stands for “defense.” Back in 2000, a defense authorization bill had stipulated that one-third of the U.S. ground combat vehicles be “unmanned” by 2015, and DARPA conceived of the Grand Challenge to spur development of these autonomous vehicles. The U.S. military was still fighting in the Middle East, and DARPA promoters believed self-driving vehicles would help minimize casualties, particularly those caused by improvised explosive devices.

DARPA sponsored more contests, such as the 2007 Urban Challenge, in which vehicles navigated a simulated city and suburban environment; the Robotics Challenge for disaster-response robots, which ran from 2012 to 2015; and the Subterranean Challenge, which concluded in 2021, for—you guessed it—robots that could get around underground. Despite the competitions, continued military conflicts, and hefty government contracts, actual advances in autonomous military vehicles and robots did not take off to the extent desired. As of 2023, robotic ground vehicles made up only 3 percent of the global armored-vehicle market.

Today, there are very few fully autonomous ground vehicles in the U.S. military; instead, the services have forged ahead with semiautonomous, operator-assisted systems, such as remote-controlled drones and ship autopilots. The one Grand Challenge finisher that continued to work for the U.S. military was Oshkosh Truck, the Wisconsin-based sponsor of the TerraMax. The company demonstrated a palletized loading system to transport cargo in unmanned vehicles for the U.S. Army.

Much of the contemporary reporting on the Grand Challenge predicted that self-driving cars would take us closer to a “Jetsons” future, with a self-driving vehicle to ferry you around. But two decades after Stanley, the rollout of civilian autonomous cars has been confined to specific applications, such as Waymo robotaxis transporting people around San Francisco or the GrubHub Starships struggling to deliver food across my campus at the University of South Carolina.

I’ll be watching to see how the technology evolves outside of big cities. Self-driving vehicles would be great for long distances on empty country roads, but parts of rural America still struggle to get adequate cellphone coverage. Will small towns and the spaces that surround them have the bandwidth to accommodate autonomous vehicles? As much as I’d like to think self-driving autos are nearly here, I don’t expect to find one under my carport anytime soon.

A Tale of Two Stanleys

Not long after the 2005 race, Stanley was ready to retire. Recalling his experience testing Minerva at the National Museum of American History, Thrun thought the museum would make a nice home. He loaned it to the museum in 2006, and since 2008 it has resided permanently in the museum’s collections, alongside other remarkable specimens in robotics and automobiles. In fact, it isn’t even the first Stanley in the collection.

Stanley now resides in the collections of the Smithsonian Institution’s National Museum of American History, which also houses another Stanley—this 1910 Stanley Runabout. Behring Center/National Museum of American History/Smithsonian Institution

That distinction belongs to a 1910 Stanley Runabout, an early steam-powered car introduced at a time when it wasn’t yet clear that the internal-combustion engine was the way to go. Despite clear drawbacks—steam engines had a nasty tendency to explode—“Stanley steamers” were known for their fine craftsmanship. Fred Marriott set the land speed record while driving a Stanley in 1906. It clocked in at 205.5 kilometers per hour, which was significantly faster than the 21st-century Stanley’s average speed of 30.7 km/hr. To be fair, Marriott’s Stanley was racing over a flat, straight course rather than the off-road terrain navigated by Thrun’s Stanley.

Across the century that separates the two Stanleys, it’s easy to trace a narrative of progress. Both are clearly recognizable as four-wheeled land vehicles, but I suspect the science-fiction dreamers of the early 20th century would have been hard-pressed to imagine the suite of technologies that would propel a 21st-century self-driving car. What will the vehicles of the early 22nd century be like? Will they even have four tires, or will they run on something entirely new?

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the February 2025 print issue as “Slow and Steady Wins the Race.”

References

Sebastian Thrun and his colleagues at the Stanford Artificial Intelligence Laboratory, along with members of the other groups that sponsored Stanley, published “Stanley: The Robot That Won the DARPA Grand Challenge.” This paper, from the Journal of Field Robotics, explains the vehicle’s development.

The NOVA PBS episode “The Great Robot Race” provides interviews and video footage from both the failed first Grand Challenge and the successful second one. I personally liked the side story of GhostRider, an autonomous motorcycle that competed in both competitions but didn’t quite cut it. (GhostRider also now resides in the Smithsonian’s collection.)

Smithsonian curator Carlene Stephens kindly talked with me about how she collected Stanley for the National Museum of American History and where she sees artifacts like this fitting into the stream of history.
