Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

Engineers at the University of Colorado Boulder have designed a tiny new robot that can passively change its shape to squeeze through narrow gaps. The machine, named the Compliant Legged Articulated Robotic Insect, or CLARI, draws its inspiration from the squishiness and varied shapes of the world of bugs.

[ CU Boulder ]

We’ve got a huge feature on the University of Zurich’s autonomous vision-based racing drones, which you should absolutely read in full after watching this summary video.

[ Nature ] via [ UZH RPG ]

This is just CGI, but this research group has physical robots that are making this happen.

[ Freeform Robotics ]

This video gives a preview of the recent collaboration between Cyberbotics Lab and Injury Biomechanics Research Lab (IBRL) at Ohio State University on legged robot locomotion testing with the linear impactor.

[ Cyberbotics ]

This is our smallest swimming crawling version (SAW design). It is 6 centimeters long! The wave motion is actuated by a single motor. We attached floats, as the motor and battery were too heavy.

[ Zarrouk Lab ]

A pair of Digits approaches a railway line, and you won’t believe what happens next!

[ Agility Robotics ]

Breakfast time with Reachy O_o

[ Pollen Robotics ]

Suction cups are essential for all kinds of logistics robots, but they’re also pretty fragile. Wouldn’t it be nice to have suction cups that heal themselves?

[ BruBotics ]

Thanks, Bram!

We present a simple approach to in-hand cube reconfiguration. By simplifying planning, control, and perception as much as possible while maintaining robust and general performance, we gain insights into the inherent complexity of in-hand cube reconfiguration. The proposed system outperforms a substantially more complex system for cube reconfiguration based on deep learning and accurate physical simulation, contributing arguments to the discussion about what the most promising approach to general manipulation might be.

[ TU Berlin ]

Our latest augmented-reality developments for command, control, and supervision of autonomous agents in a three-operator/two-robot human-robot team. The views shown are the first-person views of three HoloLens 2 users and one top-down view of a satellite map with all team members visible throughout the entire demonstration.

[ UT ]

ABB robots go to White Castle.

[ ABB ]

In addition to helping legged robots complete tasks quickly and efficiently, agility allows them to move through complex environments that are otherwise difficult to traverse. In “Barkour: Benchmarking Animal-Level Agility With Quadruped Robots,” we introduce the Barkour agility benchmark for quadruped robots, along with a Transformer-based generalist locomotion policy.

[ Google Research ]

This week, Geordie Rose (CEO) and Suzanne Gildert (CTO) of Sanctuary AI muse about the idea of a “humanoid olympics” while discussing how humanoid robots and their respective companies can be ranked. They go over potential metrics for evaluating different humanoid robots—as well as what counts as a humanoid, what doesn’t, and why.

[ Sanctuary ]



The drone screams. It’s flying so fast that following it with my camera is hopeless, so I give up and watch in disbelief. The shrieking whine from the four motors of the racing quadrotor Dopplers up and down as the drone twists, turns, and backflips its way through the square plastic gates of the course at a speed that is literally superhuman. I’m cowering behind a safety net, inside a hangar at an airfield just outside of Zurich, along with the drone’s creators from the Robotics and Perception Group at the University of Zurich.

“I don’t even know what I just watched,” says Alex Vanover, as the drone comes to a hovering halt after completing the 75-meter course in 5.3 seconds. “That was beautiful,” Thomas Bitmatta adds. “One day, my dream is to be able to achieve that.” Vanover and Bitmatta are arguably the world’s best drone-racing pilots, multiyear champions of highly competitive international drone-racing circuits. And they’re here to prove that human pilots have not been bested by robots. Yet.

AI Racing FPV Drone Full Send! - University of Zurich youtu.be

Comparing these high-performance quadrotors to the kind of drones that hobbyists use for photography is like comparing a jet fighter to a light aircraft: Racing quadrotors are heavily optimized for speed and agility. A typical racing quadrotor can output 35 newton meters (26 pound-feet) of force, with four motors spinning tribladed propellers at 30,000 rpm. The drone weighs just 870 grams, including a 1,800-milliampere-hour battery that lasts a mere 2 minutes. This extreme power-to-weight ratio allows the drone to accelerate at 4.5 gs, reaching 100 kilometers per hour in less than a second.
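
A quick back-of-the-envelope check of those numbers, sketched in Python (the 4.5 g and 100 km/h figures come from the paragraph above; assuming constant acceleration and ignoring drag is my simplification):

```python
# Back-of-the-envelope check: time to reach 100 km/h at a constant 4.5 g.
# Assumes constant acceleration and ignores drag, which would only lengthen the time.
G = 9.81                  # m/s^2
accel = 4.5 * G           # ~44 m/s^2
target_speed = 100 / 3.6  # 100 km/h in m/s (~27.8 m/s)

time_to_target = target_speed / accel
distance_covered = 0.5 * accel * time_to_target**2

print(f"time to 100 km/h: {time_to_target:.2f} s")    # ~0.63 s
print(f"distance covered: {distance_covered:.1f} m")  # ~8.7 m
```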

The autonomous racing quadrotors have similar specs, but the one we just saw fly doesn’t have a camera because it doesn’t need one. Instead, the hangar has been equipped with a 36-camera infrared tracking system that can localize the drone within millimeters, 400 times every second. By combining the location data with a map of the course, an off-board computer can steer the drone along an optimal trajectory, which would be difficult, if not impossible, for even the best human pilot to match.
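
Those tracking figures also imply that, even at race speed, the off-board controller receives a fresh position fix every few centimeters of travel. A quick check, reusing the roughly 100 km/h speed mentioned above:

```python
# How far the drone travels between external tracking updates, using the
# article's figures (400 Hz tracking, speeds around 100 km/h).
update_rate_hz = 400.0
speed_m_s = 100.0 / 3.6  # ~27.8 m/s

distance_per_update = speed_m_s / update_rate_hz
print(f"~{distance_per_update * 100:.0f} cm of travel between position fixes")  # ~7 cm
```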

These autonomous drones are, in a sense, cheating. The human pilots have access only to the single view from a camera mounted on the drone, along with their knowledge of the course and their flying experience. So, it’s really no surprise that US $400,000 worth of sensors and computers can outperform a human pilot. But the reason these professional drone pilots came to Zurich is to see how they would do in a competition that’s actually fair.

A human-piloted racing drone [red] chases an autonomous vision-based drone [blue] through a gate at over 13 meters per second. Leonard Bauersfeld

Solving Drone Racing

By the Numbers: Autonomous Racing Drones

Frame size: 215 millimeters
Weight: 870 grams
Maximum thrust: 35 newton meters (26 pound-feet)
Flight duration: 2 minutes
Acceleration: 4.5 gs
Top speed: 130+ kilometers per hour
Onboard sensing: Intel RealSense T265 tracking camera
Onboard computing: Nvidia Jetson TX2

“We’re trying to make history,” says Davide Scaramuzza, who leads the Robotics and Perception Group at the University of Zurich (UZH). “We want to demonstrate that an AI-powered, vision-based drone can achieve human-level, and maybe even superhuman-level, performance in a drone race.” Using vision is the key here: Scaramuzza has been working on drones that sense the way most people do, relying on cameras to perceive the world around them and making decisions based primarily on that visual data. This is what will make the race fair—human eyes and a human brain versus robotic eyes and a robotic brain, each competitor flying the same racing quadrotors as fast as possible around the same course.

“Drone racing [against humans] is an ideal framework for evaluating the progress of autonomous vision-based robotics,” Scaramuzza explains. “And when you solve drone racing, the applications go much further because this problem can be generalized to other robotics applications, like inspection, delivery, or search and rescue.”

While there are already drones doing these tasks, they tend to fly slowly and carefully. According to Scaramuzza, being able to fly faster can make drones more efficient, improving their flight duration and range and thus their utility. “If you want drones to replace humans at dull, difficult, or dangerous tasks, the drones will have to do things faster or more efficiently than humans. That is what we are working toward—that’s our ambition,” Scaramuzza explains. “There are many hard challenges in robotics. Fast, agile, autonomous flight is one of them.”

Autonomous Navigation

Scaramuzza’s autonomous-drone system, called Swift, starts with a three-dimensional map of the course. The human pilots have access to this map as well, so that they can practice in simulation. The goal of both human and robot-drone pilots is to fly through each gate as quickly as possible, and the best way of doing this is via what’s called a time-optimal trajectory.

Robots have an advantage here because it’s possible (in simulation) to calculate this trajectory for a given course in a way that is provably optimal. But knowing the optimal trajectory gets you only so far. Scaramuzza explains that simulations are never completely accurate, and things that are especially hard to model—including the turbulent aerodynamics of a drone flying through a gate and the flexibility of the drone itself—make it difficult to stick to that optimal trajectory.

While the human-piloted drones [red] are each equipped with an FPV camera, each of the autonomous drones [blue] has an Intel RealSense vision system powered by an Nvidia Jetson TX2 onboard computer. Both sets of drones are also equipped with reflective markers that are tracked by an external camera system. Evan Ackerman

The solution, says Scaramuzza, is to use deep-reinforcement learning. You’re still training your system in simulation, but you’re also tasking your reinforcement-learning algorithm with making continuous adjustments, tuning the system to a specific track in a real-world environment. Some real-world data is collected on the track and added to the simulation, allowing the algorithm to incorporate realistically “noisy” data to better prepare it for flying the actual course. The drone will never fly the most mathematically optimal trajectory this way, but it will fly much faster than it would using a trajectory designed in an entirely simulated environment.
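
The article doesn’t describe this process at the code level, but the general idea, fine-tuning a simulation-trained policy against perception noise measured on the real track, can be sketched roughly as follows. The environment interface, noise model, and policy here are hypothetical placeholders, not Swift’s actual implementation:

```python
# Rough sketch of the idea described above: train in simulation, but perturb the
# simulated observations with noise statistics measured on the real track, so the
# policy learns to cope with realistic perception errors. Everything here is a
# hypothetical stand-in, not the Swift codebase.
import numpy as np

rng = np.random.default_rng(0)

# 1) Residuals between the simulator's predicted drone state and what the real
#    perception system reported during a few real laps (hypothetical data).
real_residuals = rng.normal(0.0, 0.05, size=(500, 3))  # position error, metres

# 2) Fit a simple Gaussian noise model to those residuals.
noise_mean = real_residuals.mean(axis=0)
noise_std = real_residuals.std(axis=0)

def noisy_observation(true_state: np.ndarray) -> np.ndarray:
    """Corrupt a simulated state the way the real perception pipeline might."""
    return true_state + rng.normal(noise_mean, noise_std)

# 3) Inside the RL loop, the policy only ever sees the corrupted observation.
#    `policy` and `env` are hypothetical: env.step() returns (obs, reward, done).
def rollout(policy, env, steps=100):
    obs = env.reset()
    total_reward = 0.0
    for _ in range(steps):
        action = policy(noisy_observation(obs))
        obs, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward
```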

From there, the only thing that remains is to determine how far to push Swift. One of the lead researchers, Elia Kaufmann, quotes Mario Andretti: “If everything seems under control, you’re just not going fast enough.” Finding that edge of control is the only way the autonomous vision-based quadrotors will be able to fly faster than those controlled by humans. “If we had a successful run, we just cranked up the speed again,” Kaufmann says. “And we’d keep doing that until we crashed. Very often, our conditions for going home at the end of the day are either everything has worked, which never happens, or that all the drones are broken.”


Although the autonomous vision-based drones were fast, they were also less robust. Even small errors could lead to crashes from which the autonomous drones could not recover. Regina Sablotny

How the Robots Fly

Once Swift has determined its desired trajectory, it needs to navigate the drone along that trajectory. Whether you’re flying a drone or driving a car, navigation involves two fundamental things: knowing where you are and knowing how to get where you want to go. The autonomous drones have calculated the time-optimal route in advance, but to fly that route, they need a reliable way to determine their own location as well as their velocity and orientation.

To that end, the quadrotor uses an Intel RealSense vision system to identify the corners of the racing gates and other visual features to localize itself on the course. An Nvidia Jetson TX2 module, which includes a GPU, a CPU, and associated hardware, manages all of the image processing and control on board.
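
The article doesn’t detail Swift’s perception pipeline beyond this, but one standard way to turn detected gate corners into a pose estimate is a Perspective-n-Point (PnP) solve. The sketch below uses OpenCV’s solvePnP; the gate size, camera intrinsics, and detected pixel coordinates are invented for illustration:

```python
# Minimal sketch of localizing against one racing gate with a PnP solve.
# Gate dimensions, camera intrinsics, and detected pixel corners are hypothetical.
import cv2
import numpy as np

# Known 3D corner positions of a square gate in the course map (metres, gate frame).
gate_corners_world = np.array([
    [-0.75,  0.75, 0.0],   # top left
    [ 0.75,  0.75, 0.0],   # top right
    [ 0.75, -0.75, 0.0],   # bottom right
    [-0.75, -0.75, 0.0],   # bottom left
], dtype=np.float64)

# Where those corners were detected in the camera image (pixels, hypothetical).
gate_corners_image = np.array([
    [210.0, 140.0],
    [430.0, 150.0],
    [425.0, 360.0],
    [205.0, 355.0],
], dtype=np.float64)

# Pinhole camera intrinsics (hypothetical).
K = np.array([[450.0,   0.0, 320.0],
              [  0.0, 450.0, 240.0],
              [  0.0,   0.0,   1.0]])

# SOLVEPNP_IPPE handles the 4-coplanar-point case.
ok, rvec, tvec = cv2.solvePnP(gate_corners_world, gate_corners_image, K, None,
                              flags=cv2.SOLVEPNP_IPPE)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position_in_gate_frame = (-R.T @ tvec).ravel()
    print("estimated camera position relative to the gate:",
          camera_position_in_gate_frame)
```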

Using only vision imposes significant constraints on how the drone flies. For example, while quadrotors are equally capable of flying in any direction, Swift’s camera needs to point forward most of the time. There’s also the issue of motion blur, which occurs when the exposure length of a single frame in the drone’s camera feed is long enough that the drone’s own motion over that time becomes significant. Motion blur is especially problematic when the drone is turning: The high angular velocity results in blurring that essentially renders the drone blind. The roboticists have to plan their flight paths to minimize motion blur, finding a compromise between a time-optimal flight path and one that the drone can fly without crashing.
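
To get a feel for why turning is the worst case, here is a rough blur estimate from angular rate and exposure time. The specific yaw rate, shutter time, and focal length are assumptions, not figures from the article:

```python
import math

# Rough motion-blur estimate during a fast yaw turn. All numbers are assumptions
# for illustration; the article only states that high angular velocity causes blur.
angular_rate_deg_s = 500.0   # fast yaw rate for a racing quad
exposure_s = 1.0 / 250.0     # 4 ms shutter
focal_length_px = 450.0      # pinhole focal length in pixels

# Angle swept during a single exposure.
swept_rad = math.radians(angular_rate_deg_s) * exposure_s

# Small-angle approximation: features near the optical axis smear by roughly
# focal_length * swept angle.
blur_px = focal_length_px * swept_rad
print(f"~{blur_px:.1f} pixels of smear per frame")  # ~15.7 px
```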

Davide Scaramuzza [far left], Elia Kaufmann [far right], and other roboticists from the University of Zurich watch a close race. Regina Sablotny

How the Humans Fly

For the human pilots, the challenges are similar. The quadrotors are capable of far better performance than pilots normally take advantage of. Bitmatta estimates that he flies his drone at about 60 percent of its maximum performance. But the biggest limiting factor for the human pilots is the video feed.

People race drones in what’s called first-person view (FPV), using video goggles that display a real-time feed from a camera mounted on the front of the drone. The FPV video systems that the pilots used in Zurich can transmit at 60 interlaced frames per second in relatively poor analog VGA quality. In simulation, drone pilots practice in HD at over 200 frames per second, which makes a substantial difference. “Some of the decisions that we make are based on just four frames of data,” explains Bitmatta. “Higher-quality video, with better frame rates and lower latency, would give us a lot more data to use.” Still, one of the things that impresses the roboticists the most is just how well people perform with the video quality available. It suggests that these pilots develop the ability to perform the equivalent of the robot’s localization and state-estimation algorithms.
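
Those “four frames of data” correspond to a surprisingly long stretch of flight at race speeds. A quick calculation, combining the 60 frames per second quoted above with a roughly 100 km/h race pace:

```python
# How far the drone travels during the "four frames" Bitmatta mentions.
frames = 4
fps = 60.0         # FPV link frame rate from the article
speed_kmh = 100.0  # roughly the speeds reached on the course

interval_s = frames / fps  # ~0.067 s of video
distance_m = (speed_kmh / 3.6) * interval_s
print(f"{interval_s * 1000:.0f} ms of video ≈ {distance_m:.1f} m of flight")  # ~1.9 m
```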

It seems as though the human pilots are also attempting to calculate a time-optimal trajectory, Scaramuzza says. “Some pilots have told us that they try to imagine an imaginary line through a course, after several hours of rehearsal. So we speculate that they are actually building a mental map of the environment, and learning to compute an optimal trajectory to follow. It’s very interesting—it seems that both the humans and the machines are reasoning in the same way.”

But in his effort to fly faster, Bitmatta tries to avoid following a predefined trajectory. “With predictive flying, I’m trying to fly to the plan that I have in my head. With reactive flying, I’m looking at what’s in front of me and constantly reacting to my environment.” Predictive flying can be fast in a controlled environment, but if anything unpredictable happens, or if Bitmatta has even a brief lapse in concentration, the drone will have traveled tens of meters before he can react. “Flying reactively from the start can help you to recover from the unexpected,” he says.

Will Humans Have an Edge?

“Human pilots are much more able to generalize, to make decisions on the fly, and to learn from experiences than are the autonomous systems that we currently have,” explains Christian Pfeiffer, a neuroscientist turned roboticist at UZH who studies how human drone pilots do what they do. “Humans have adapted to plan into the future—robots don’t have that long-term vision. I see that as one of the main differences between humans and autonomous systems right now.”

Scaramuzza agrees. “Humans have much more experience, accumulated through years of interacting with the world,” he says. “Their knowledge is so much broader because they’ve been trained across many different situations. At the moment, the problem that we face in the robotics community is that we always need to train an algorithm for each specific task. Humans are still better than any machine because humans can make better decisions in very complex situations and in the presence of imperfect data.”

“I think there’s a lot that we as humans can learn from how these robots fly.” —Thomas Bitmatta

This understanding that humans are still far better generalists has placed some significant constraints on the race. The “fairness” is heavily biased in favor of the robots in that the race, while designed to be as equal as possible, is taking place in the only environment in which Swift is likely to have a chance. The roboticists have done their best to minimize unpredictability—there’s no wind inside of the hangar, for example, and the illumination is tightly controlled. “We are using state-of-the-art perception algorithms,” Scaramuzza explains, “but even the best algorithms still have a lot of failure modes because of illumination changes.”

To ensure consistent lighting, almost all of the data for Swift’s training was collected at night, says Kaufmann. “The nice thing about night is that you can control the illumination; you can switch on the lights and you have the same conditions every time. If you fly in the morning, when the sunlight is entering the hangar, all that backlight makes it difficult for the camera to see the gates. We can handle these conditions, but we have to fly at slower speeds. When we push the system to its absolute limits, we sacrifice robustness.”

Race Day

The race starts on a Saturday morning. Sunlight streams through the hangar’s skylights and open doors, and as the human pilots and autonomous drones start to fly test laps around the track, it’s immediately obvious that the vision-based drones are not performing as well as they did the night before. They’re regularly clipping the sides of the gates and spinning out of control, a telltale sign that the vision-based state estimation is being thrown off. The roboticists seem frustrated. The human pilots seem cautiously optimistic.

The winner of the competition will fly the three fastest consecutive laps without crashing. The humans and the robots pursue that goal in essentially the same way, by adjusting the parameters of their flight to find the point at which they’re barely in control. Quadrotors tumble into gates, walls, floors, and ceilings, as the racers push their limits. This is a normal part of drone racing, and there are dozens of replacement drones and staff to fix them when they break.

Professional drone pilot Thomas Bitmatta [left] examines flight paths recorded by the external tracking system. The human pilots felt they could fly better by studying the robots. Evan Ackerman

There will be several different metrics by which to decide whether the humans or the robots are faster. The external localization system used to actively control the autonomous drone last night is being used today for passive tracking, recording times for each segment of the course, each lap of the course, and for each three-lap multidrone race.

As the human pilots get comfortable with the course, their lap times decrease. Ten seconds per lap. Then 8 seconds. Then 6.5 seconds. Hidden behind their FPV headsets, the pilots are concentrating intensely as their shrieking quadrotors whirl through the gates. Swift, meanwhile, is much more consistent, typically clocking lap times below 6 seconds but frequently unable to complete three consecutive laps without crashing. Seeing Swift’s lap times, the human pilots push themselves, and their lap times decrease further. It’s going to be very close.

Zurich Drone Racing: AI vs Human https://rpg.ifi.uzh.ch/

The head-to-head races start, with Swift and a human pilot launching side-by-side at the sound of the starting horn. The human is immediately at a disadvantage, because a person’s reaction time is slow compared to that of a robot: Swift can launch in less than 100 milliseconds, while a human takes about 220 ms to hear a noise and react to it.
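
That reaction-time gap compounds under hard acceleration. A rough estimate of the head start it buys Swift off the line, assuming both drones accelerate at the 4.5 g quoted earlier:

```python
# Head start gained from reacting ~120 ms earlier while accelerating at 4.5 g.
G = 9.81
accel = 4.5 * G        # m/s^2
robot_delay_s = 0.100  # Swift launch latency from the article
human_delay_s = 0.220  # human auditory reaction time from the article

def distance_after(t_since_horn, delay):
    t = max(0.0, t_since_horn - delay)
    return 0.5 * accel * t**2

t = 0.5  # half a second after the starting horn
gap_m = distance_after(t, robot_delay_s) - distance_after(t, human_delay_s)
print(f"gap after {t:.1f} s: {gap_m:.2f} m")  # ~1.8 m
```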

UZH’s Elia Kaufmann prepares an autonomous vision-based drone for a race. Since landing gear would only slow racing drones down, they take off from stands, which allows them to launch directly toward the first gate. Evan Ackerman

On the course, the human pilots can almost keep up with Swift: The robot’s best three-lap time is 17.465 seconds, while Bitmatta’s is 18.746 seconds and Vanover manages 17.956 seconds. But in nine head-to-head races with Swift, Vanover wins four, and in seven races, Bitmatta wins three. That’s because Swift doesn’t finish the majority of the time, colliding either with a gate or with its opponent. The human pilots can recover from collisions, even relaunching from the ground if necessary. Swift doesn’t have those skills. The robot is faster, but it’s also less robust.

Zurich Drone Racing: Onboard View https://rpg.ifi.uzh.ch/

Getting Even Faster

Thomas Bitmatta, two-time MultiGP International Open World Cup champion, pilots his drone through the course in FPV (first-person view). Regina Sablotny

In drone racing, crashing is part of the process. Both Swift and the human pilots crashed dozens of drones, which were constantly being repaired. Regina Sablotny

“The absolute performance of the robot—when it’s working, it’s brilliant,” says Bitmatta, when I speak to him at the end of race day. “It’s a little further ahead of us than I thought it would be. It’s still achievable for humans to match it, but the good thing for us at the moment is that it doesn’t look like it’s very adaptable.”

UZH’s Kaufmann doesn’t disagree. “Before the race, we had assumed that consistency was going to be our strength. It turned out not to be.” Making the drone more robust so that it can adapt to different lighting conditions, Kaufmann adds, is mostly a matter of collecting more data. “We can address this by retraining the perception system, and I’m sure we can substantially improve.”

Kaufmann believes that under controlled conditions, the potential performance of the autonomous vision-based drones is already well beyond what the human pilots are capable of. Even if this wasn’t conclusively proved through the competition, bringing the human pilots to Zurich and collecting data about how they fly made Kaufmann even more confident in what Swift can do. “We had overestimated the human pilots,” he says. “We were measuring their performance as they were training, and we slowed down a bit to increase our success rate, because we had seen that we could fly slower and still win. Our fastest strategies accelerate the quadrotor at 4.5 gs, but we saw that if we accelerate at only 3.8 gs, we can still achieve a safe win.”

Bitmatta feels that the humans have a lot more potential, too. “The kind of flying we were doing last year was nothing compared with what we’re doing now. Our rate of progress is really fast. And I think there’s a lot that we as humans can learn from how these robots fly.”

Useful Flying Robots

As far as Scaramuzza is aware, the event in Zurich, which was held last summer, was the first time that a fully autonomous mobile robot achieved world-champion performance in a real-world competitive sport. But, he points out, “this is still a research experiment. It’s not a product. We are very far from making something that can work in any environment and any condition.”

Besides making the drones more adaptable to different lighting conditions, the roboticists are teaching Swift to generalize from a known course to a new one, as humans do, and to safely fly around other drones. All of these skills are transferable and will eventually lead to practical applications. “Drone racing is pushing an autonomous system to its absolute limits,” roboticist Christian Pfeiffer says. “It’s not the ultimate goal—it’s a stepping-stone toward building better and more capable autonomous robots.” When one of those robots flies through your window and drops off a package on your coffee table before zipping right out again, these researchers will have earned your thanks.

Scaramuzza is confident that his drones will one day be the champions of the air—not just inside a carefully controlled hangar in Zurich but wherever they can be useful to humanity. “I think ultimately, a machine will be better than any human pilot, especially when consistency and precision are important,” he says. “I don’t think this is controversial. The question is, when? I don’t think it will happen in the next few decades. At the moment, humans are much better with bad data. But this is just a perception problem, and computer vision is making giant steps forward. Eventually, robotics won’t just catch up with humans, it will outperform them.”

Meanwhile, the human pilots are taking this in stride. “Seeing people use racing as a way of learning—I appreciate that,” Bitmatta says. “Part of me is a racer who doesn’t want anything to be faster than I am. And part of me is really excited for where this technology can lead. The possibilities are endless, and this is the start of something that could change the whole world.”

This article appears in the September 2023 print issue as “Superhuman Speed: AI Drones for the Win.”



Humans and robots will increasingly have to work together in the new industrial context. It is therefore necessary to improve User Experience, Technology Acceptance, and overall wellbeing to achieve a smoother and more satisfying interaction while obtaining the maximum possible performance from it. For this reason, it is essential to analyze these interactions to enhance User Experience. Heuristic evaluation is an easy-to-use, low-cost method that can be applied iteratively at different stages of a design process. Despite these advantages, the current literature rarely offers a list of heuristics that evaluates human-robot interactions from a User Experience, Technology Acceptance, and Human-Centered perspective. Such an approach should integrate key aspects like safety, trust and perceived safety, ergonomics and workload, inclusivity, and multimodality, as well as robot characteristics and functionalities. Therefore, a new set of heuristics, the HEUROBOX tool, is presented in this work to help practitioners and researchers assess human-robot systems in industrial environments. The HEUROBOX tool clusters design guidelines and methodologies into a logical list of heuristics for human-robot interaction and comprises four categories: Safety, Ergonomics, Functionality, and Interfaces. These include 84 heuristics in the basic evaluation, while the advanced evaluation lists a total of 228 heuristics in order to adapt the tool to different industrial requirements. Finally, the new set of heuristics has been validated by experts using the System Usability Scale (SUS) questionnaire, and the categories have been prioritized in order of their importance for the evaluation of Human-Robot Interaction through the Analytic Hierarchy Process (AHP).
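
As a rough illustration of the AHP prioritization step mentioned at the end of this abstract, the snippet below derives priority weights for the four HEUROBOX categories from a pairwise-comparison matrix. The matrix values are invented for illustration and do not reproduce the paper’s expert judgments:

```python
# Minimal AHP example: derive priority weights for the four HEUROBOX categories
# from a pairwise-comparison matrix. The matrix values are invented; the paper's
# actual expert judgements are not reproduced here.
import numpy as np

categories = ["Safety", "Ergonomics", "Functionality", "Interfaces"]

# A[i, j] = how much more important category i is than category j (Saaty scale).
A = np.array([
    [1.0, 3.0, 2.0, 4.0],
    [1/3, 1.0, 1.0, 2.0],
    [1/2, 1.0, 1.0, 3.0],
    [1/4, 1/2, 1/3, 1.0],
])

# The principal eigenvector of A gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio, using Saaty's random index for n = 4 (0.90).
n = A.shape[0]
ci = (eigvals.real[principal] - n) / (n - 1)
cr = ci / 0.90

for name, w in zip(categories, weights):
    print(f"{name:<13} {w:.3f}")
print(f"consistency ratio: {cr:.3f}  (< 0.10 is conventionally acceptable)")
```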

Smart speakers and conversational agents have been accepted into our homes for a number of tasks, such as playing music, interfacing with the Internet of Things, and, more recently, general chit-chat. However, they have been less readily accepted in our workplaces. This may be due to the data privacy and security concerns that exist with commercially available smart speakers. Another reason may be that a smart speaker is simply too abstract and does not portray the social cues associated with a trustworthy work colleague. Here, we present an in-depth mixed-method study in which we investigate this question of embodiment in a serious task-based work scenario of a first-responder team. We explore the concepts of trust, engagement, cognitive load, and human performance using a humanoid head-style robot, a commercially available smart speaker, and a specially developed dialogue manager. Studying the effect of embodiment on trust, a highly subjective and multifaceted phenomenon, is clearly challenging, and our results indicate that the robot, with its anthropomorphic facial features, expressions, and eye gaze, was potentially trusted more than the smart speaker. In addition, we found that embodying a conversational agent helped increase task engagement and performance compared with the smart speaker. This study indicates that embodiment could be useful for transitioning conversational agents into the workplace, and further in situ, “in the wild” experiments with domain workers could be conducted to confirm this.

Owing to their complex structural design and control systems, musculoskeletal robots struggle to execute complicated tasks, such as turning, within their limited range of motion. This study investigates the use of passive toe joints in the foot of a musculoskeletal robot during slip-turning, allowing it to turn on its toes with minimal movement to reach the desired angle while increasing the turning angle and its range of mobility. Different conditions of the plantar intrinsic muscles (PIM) were also studied in the experiment to investigate the effect of actively controlling the stiffness of the toe joints. The results show that the use of toe joints reduced frictional torque and improved the rotational angle. Meanwhile, the toe-lifting-angle results show that the PIM could help prevent over-dorsiflexion of the toes and possibly improve postural stability. Lastly, the ground-reaction-force results show that feet with different stiffnesses can affect the curve pattern. These findings contribute to implementing biological features in bipedal robots to simplify their motions and improve adaptability, regardless of their complex structure.

Disassembly of electric vehicle batteries is a critical stage in the recovery, recycling, and reuse of high-value battery materials, but it is complicated by limited standardisation and design complexity, compounded by uncertainty and safety issues arising from varying end-of-life condition. Telerobotics presents an avenue for semi-autonomous robotic disassembly that addresses these challenges. However, it is suggested that the quality and realism of the user’s haptic interactions with the environment are important for precise, contact-rich, and safety-critical tasks. To investigate this proposition, we demonstrate the disassembly of a 2011 Nissan Leaf module stack as the basis for a comparative study between a traditional asymmetric haptic-“cobot” master-slave framework and identical master and slave cobots, based on task completion time and success rate metrics. Across a range of disassembly tasks we demonstrate that a time reduction of 22%–57% is achieved using identical cobots, yet this improvement arises chiefly from an expanded workspace and 1:1 positional mapping, and comes with a 10%–30% reduction in first-attempt success rate. For unbolting and grasping, the realism of force feedback was comparatively less important than the directional information encoded in the interaction; however, 1:1 force mapping strengthened environmental tactile cues for vacuum pick-and-place and contact cutting tasks.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

For US $2.7 million, one of these can be yours.

[ Impress ]

Here’s a little bit more IRL footage of Apptronik’s Apollo, which was announced this week.

[ Apptronik ]

TruckBot is an autonomous robot that can unload both truck trailers and shipping containers at a rate of up to 1,000 cases per hour. It reaches up to 52 feet [16 meters] into the truck trailer or shipping container and can handle boxes weighing up to 50 lbs [23 kilograms], including containers with packing complexities and mixed SKU loads.

[ Mujin ]

These high-speed robot hands from the late 1990s and 2000s are still impressive.

[ Namiki Laboratory ]

This is maybe the jauntiest robot I’ve ever seen.

[ UPenn ]

Or maybe this is the jauntiest robot I’ve ever seen.

[ Deep Robotics ]

Turns out, if you make feet into hydrofoil shapes and put a pair of legs into a water current that’s been disturbed by a cylinder, you’ll get a fairly convincing biological walking gait.

[ UMass Amherst ]

Thanks, Julia!

Humans are generally good at whole-body manipulation, but robots struggle with such tasks. To the robot, each spot where the box could touch any point on the carrier’s fingers, arms, and torso represents a contact event that it must reason about. With billions of potential contact events, planning for this task quickly becomes intractable. Now MIT researchers have found a way to simplify this process, known as contact-rich manipulation planning.

Okay, but I want to know more about Mr. Bucket <3.

[ MIT News ]

By collaborating with Dusty on the Stanford University Bridge Project, California Drywall was able to cut layout time in half and fast-track the installation project.

[ Dusty Robotics ]

PILOTs for robotic INspection and maintenance Grounded on advanced intelligent platforms and prototype applications (PILOTING) is an H2020 European project coordinated by CATEC. The variety of inspection and maintenance operations considered within the project requires the use of different robotic systems. A series of robotic vehicles from PILOTING partners has been adapted/developed and integrated within the PILOTING I&M platform.

[ GRVC ]

A NASA flight campaign aims to enable drones to land safely on rooftop hubs called vertiports for future delivery of people and goods. The campaign may also lead to improvements in weather prediction.

[ NASA ]

An unscripted but presumably edited long interview with Robot Sophia, if you’re into that particular kind of theater.

[ Hanson Robotics ]

Thanks, Dan!



Underwater infrastructure, such as pipelines, requires regular inspection and maintenance, including cleaning, welding of defects, and valve-turning or hot-stabbing. At the moment, these tasks are mostly performed by divers and Remotely Operated Vehicles (ROVs), but the use of intervention Autonomous Underwater Vehicles (intervention-AUVs) can greatly reduce operation time, risk, and cost. However, autonomous underwater manipulation has not yet reached a high technological readiness and is an intensively researched topic. This review identifies key requirements based on the necessary inspection and maintenance methods, linking them to current technology and deriving the major challenges that need to be addressed in development. These include the handling of tools, where a split between handheld and mounted tools is evident in underwater intervention vehicles already in service, such as the Sabertooth by Saab Seaeye or the Aquanaut by Nauticus Robotics, two vehicles capable of semi-autonomous intervention. The main challenge identified concerns high-level autonomy, i.e., the process of decision-making. This process includes detecting the correct point of interest, maximizing the workspace of the manipulator, planning the manipulation considering the required forces, and monitoring progress to allow for corrections and high-quality results. To overcome these issues, reliable close-range sensing and precise endpoint navigation are needed. By identifying these persisting challenges, the paper provides inspiration for further development directions in the field of autonomous underwater intervention.

Our understanding of the complex mechanisms that power biological intelligence has been greatly enhanced through the explosive growth of large-scale neuroscience and robotics simulation tools, which the research community uses to perform previously infeasible experiments, such as simulating the neocortex’s circuitry. Nevertheless, simulation falls far short of being directly applicable to biorobots because of the large discrepancy between the simulated and the real world. A possible solution to this problem is the further enhancement of existing simulation tools for robotics, AI, and neuroscience with multi-physics capabilities. Scenarios that were previously infeasible or difficult to simulate, such as robots swimming on the water surface, interacting with soft materials, or walking on granular materials, would become possible within a multi-physics simulation environment designed for robotics. In combination with multi-physics simulation, large-scale simulation tools that integrate multiple simulation modules in a closed-loop manner help address fundamental questions about the organization of neural circuits and the interplay between the brain, body, and environment. We analyze existing designs for large-scale simulation running on cloud and HPC infrastructure, as well as their shortcomings. Based on this analysis, we propose a next-gen modular architecture based on multi-physics engines that we believe would greatly benefit biorobotics and AI.

A distinctive feature of quadrupeds that is integral to their locomotion is the tail. Tails serve many purposes in biological systems, including propulsion, counterbalance, and stabilization while walking, running, climbing, or jumping. Similarly, tails in legged robots may augment the stability and maneuverability of legged robots by providing an additional point of contact with the ground. However, in the field of terrestrial bio-inspired legged robotics, the tail is often ignored because of the difficulties in design and control. In this study, we test the hypothesis that a variable stiffness robotic tail can improve the performance of a sprawling quadruped robot by enhancing its stability and maneuverability in various environments. In order to validate our hypothesis, we integrated a cable-driven, flexible tail with multiple segments into the underactuated sprawling quadruped robot, where a single servo motor working alongside a reel and cable mechanism regulates the tail’s stiffness. Our results demonstrated that by controlling the stiffness of the tail, the stability of locomotion on rough terrain and the climbing ability of the robot are improved compared to the movement with a rigid tail and no tail. Our findings highlight that constant ground support provided by the flexible tail is key to maintaining stable locomotion. This ensured a predictable gait cycle, eliminating unexpected turning and slipping, resulting in an increase in locomotion speed and efficiency. Additionally, we observed the robot’s enhanced climbing ability on surfaces inclined up to 20°. The flexibility of the tail enabled the robot to overcome obstacles without external sensing, exhibiting significant adaptability across various terrains.



Back in January, Apptronik said it was working on a new commercial general-purpose humanoid robot called Apollo. I say “new” because over the past seven or eight years Apptronik has developed more than half a dozen humanoid robots along with a couple of full-body exoskeletons. But as the company told us earlier this year, it has decided that now is absolutely definitely for sure the time for bipedal humanoids to go commercial.

Today, Apptronik is unveiling Apollo. It says the robot is “designed to transform the industrial workforce and beyond in service of improving the human experience.” It will first be used in logistics and manufacturing, but Apptronik promises “endless potential applications long term.” Still, the company must make it happen: It’s a big step from a prototype to a commercial product.

The biped that we saw in January was a prototype for Apollo, but today Apptronik is showing an alpha version of the real thing. The robot is roughly human-size, standing 1.7 meters tall and weighing 73 kilograms, with a maximum payload of 25 kg. It can run for about 4 hours on a swappable battery. The company has two of these robots right now, and it is building four more.

While Apptronik is initially focused on case and tote handling solutions in the logistics and manufacturing industries, Apollo is a general-purpose robot that is designed to work in the real world, where development partners will extend Apollo’s solutions far beyond logistics and manufacturing, eventually extending into construction, oil and gas, electronics production, retail, home delivery, elder care, and countless more. Apollo is the “iPhone” of robots, enabling development partners to expand on Apptronik-developed solutions and extend the digital world into the physical world to work alongside people and do the jobs that they don’t want to do.

I’m generally not a huge fan of the “iPhone of robots” analogy, primarily because the iPhone was cost-effective and widely desirable as a multipurpose tool even before developers really got involved with it. Historically, robots have not been successful in this way. It’ll take some time to learn whether Apollo will be able to demonstrate that out-of-the-box versatility, but my guess is that the initial success of Apollo (as with basically every other robot) will depend primarily on what practical applications Apptronik itself will be able to set it up for. Maybe at some point humanoids will be so affordable and easy to use that there will be an open-ended developer market, but we’re nowhere close to that yet.

Pretty much all the humanoid robots entering the market are meant for the handling of standard containers, known as cases and totes. And for good reason: The job is dull and physically taxing, and there aren’t enough people willing to do it. There’s plenty of room for robots like Apollo, provided the cost isn’t too high.

To understand how Apollo can be competitive, we spoke with Apptronik CEO Jeff Cardenas and CTO Nick Paine.

How are you going to make Apollo affordable?

Jeff Cardenas: This isn’t our first humanoid that we’ve built—we’ve done about eight. The approach that we took with our robots early on was to just build the best thing we could, and worry about getting the cost down later. But we would hit a wall each time. A big focus with Apollo was to not do that again. We had to start thinking about cost from the very beginning, and we needed to make sure that the first alpha unit that we build is as close to the gamma unit as possible. A lot of people will wave a wand and say, “There’s going to be millions of humanoids one day, so things like harmonic drives are going to become much cheaper at scale.” But when you actually quote components at really high volumes, you don’t get the price break you think you’ll get. The electronics—the motor drivers with the actuators—60 percent or more of the cost of the system is there.

Nick Paine: We are trying to think about Apollo from a long-term perspective. We wanted to avoid the situation where we’d build a robot just to show that we could do something, but then have to figure out how to swap out expensive high-precision parts for something else while presenting our controls team with an entirely new problem as well.

So the focus is on Apollo’s actuators?

Paine: Apptronik is a little unique in that we’ve built up actuation experience through a range of projects that we’ve worked on—I think we’ve designed around 13 complete systems, so we’ve experienced the full gamut of what type of actuation architectures work well for what scenarios and what applications. Apollo is really a culmination of all that knowledge gathered over many years of iterative learning, optimized for the humanoid use case, and being very intentional about what properties from a first-principles standpoint that we wanted to have at each joint of the robot. That resulted in a combination of linear and rotary actuators throughout the system.

Cardenas: What we’re targeting is affordability, and part of how we get there is with our actuation approach. The new actuators we’re using have about a third fewer components than our previous actuators. They also take about a third of the assembly time. Long term, our road map is really focused around supply chain: How do we get away from single-source vendors and start to leverage components that are much more readily available? We think that’s going to be important for cost and scaling the systems long term.

Can you share some technical details on the actuators?

Paine: Folks can look at the patents when they come out, but I would chalk it up to our teams’ first-principles design experience, and past history of system-level integration.

But it’s not like you have some magical new actuator technology?

Cardenas: We’re not relying on fundamental breakthroughs to reach this threshold of performance. We need to get our robots out into the world, and we’re able to leverage technologies that already exist. And with our experience and a systems sort of thinking we’re putting it together in a novel way.

What does “affordable” mean in the context of a robot like Apollo?

Cardenas: I think long term, a humanoid needs to cost less than US $50,000. They should be comparable to the price of many cars.

Paine: I think actually we could be significantly cheaper than cars, based on the assumption that at scale, the cost of a product typically approaches the cost of its constituent materials. Cars weigh about 1,800 kilograms, and our robot weighs 70 kilograms. That’s 25 times less raw materials. And as Jeff said, we already have a path and a supply chain for very cost-effective actuators. I think that’s a really interesting analysis to think about, and we’re excited to see where it goes.
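Paine’s materials argument, worked through with the figures he quotes, looks like this. The car price below is my own assumption, purely for illustration; the point is only the scaling, not the specific dollar figure.

```python
# Rough sketch of the scaling argument above; the $30k car price is assumed.
car_mass_kg = 1800
robot_mass_kg = 70
assumed_car_cost_usd = 30_000

mass_ratio = car_mass_kg / robot_mass_kg
implied_robot_materials_cost = assumed_car_cost_usd / mass_ratio

print(f"mass ratio: about {mass_ratio:.1f}x")                             # ~25.7x
print(f"implied materials cost: ~${implied_robot_materials_cost:,.0f}")   # ~$1,167
```

The load-bearing assumption, as Paine himself frames it, is that cost at scale really does approach the cost of the constituent materials.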

Some of the videos show Apollo with a five-fingered hand. What’s your perspective on end effectors?

Cardenas: We think that long term, hands will be important for humanoids, although they won’t necessarily have to be five-fingered hands. The end effector is modular. For first applications when we’re picking boxes, we don’t need a five-finger hand for that. And so we’re going to simplify the problem and deploy with a simpler end effector.

Paine: I feel like some folks are trying to do hands because they think it’s cool, or because it shows that their team is capable. The way that I think about it is, humanoids are hard enough as they are—there are a lot of challenges and complexities to figure out. We are a very pragmatic team from an engineering standpoint, and we are very careful about choosing our battles, putting our resources where they’re most valuable. And so for the alpha version of Apollo, we have a modular interface with the wrist. We are not solving the generic five-finger fine dexterity and manipulation problem. But we do think that long term, the best versatile end effector is a hand.

These initial applications that you’re targeting with Apollo don’t seem to be leveraging its bipedal mobility. Why have a robot with legs at all?

Cardenas: One of the things that we’ve learned about legs is that they address the need for reaching the ground and reaching up high. If you try to solve that problem with wheels, then you end up with a really big base, because it has to be statically stable. The customers that we’re working with are really interested in this idea of retrofitability. They don’t want to have to make workspace changes. The workstations are really narrow—they’re designed around the human form, and so we think legs are going to be the way to get there.

Legs are an elegant solution to achieving a lightweight system that can operate at large vertical workspaces in small footprints. —Nick Paine, Apptronik CTO

Can Apollo safely fall over and get back up?

Paine: A very important requirement is that Apollo needs to be able to fall over and not break, and that drives some key actuation requirements. One of the unique things with Apollo is that not only is it well suited for OSHA-level manipulation of payloads, but it’s also well suited for robustly handling impacts with the environment. And from a maintenance standpoint, two bolts is all you need to remove to swap out an actuator.

Cardenas says that Apptronik has more than 10 pilots planned with case picking as the initial application. The rest of this year will be focused on in-house demonstrations with the Apollo alpha units, with field pilots planned for next year with production robots. Full commercial release is planned for the end of 2024. It’s certainly an aggressive timeline, but Apptronik is confident in its approach. “The beauty of robotics is in showing versus telling,” Cardenas says. “That’s what we’re trying to do with this launch.”



Inertial Measurement Units (IMUs) are used in many applications across aerospace, unmanned vehicle navigation, legged robots, and human motion tracking, thanks to their ability to estimate a body’s acceleration, orientation, and angular rate. In contrast to rovers and drones, legged locomotion involves repeated impacts between the feet and the ground, and rapid locomotion (e.g., running) involves alternating stance and flight phases, resulting in substantial oscillations in vertical acceleration. The aim of this research is to investigate the effects of periodic low-acceleration impacts (4 g, 8 g, and 16 g), which imitate the vertical motion of a running robot, on the attitude estimation of multiple Micro-Electromechanical Systems (MEMS) IMUs. The results reveal a significant drift in the sensors’ attitude estimates, which can provide important information during the design process of a robot (sensor selection) or during the control phase (e.g., the system will know that after a series of impacts its attitude estimates will be inaccurate).

Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. For building suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which leads to a decrease in a human’s trust. Research has found interesting effects of such a reduction of trust on the human’s trustworthiness, i.e., human characteristics that make them more or less reliable. This paper investigates how automation failure in a human-automation collaborative scenario affects the human’s trust in the automation, as well as a human’s trustworthiness towards the automation.

Methods: We present a 2 × 2 mixed design experiment in which the participants perform a simulated task in a 2D grid-world, collaborating with an automation in a “moving-out” scenario. During the experiment, we measure the participants’ trustworthiness, trust, and liking regarding the automation, both subjectively and objectively.

Results: Our results show that automation failure negatively affects the human’s trustworthiness, as well as their trust in and liking of the automation.

Discussion: Learning the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams and to improving human-automation teamwork.

The evolutionary robotics field offers the possibility of autonomously generating robots that are adapted to desired tasks by iteratively optimising across successive generations of robots with varying configurations until a high-performing candidate is found. The prohibitive time and cost of actually building this many robots means that most evolutionary robotics work is conducted in simulation, but to apply evolved robots to real-world problems, they must be implemented in hardware, which brings new challenges. This paper explores in detail the design of an example system for realising diverse evolved robot bodies, and specifically how this interacts with the evolutionary process. We discover that every aspect of the hardware implementation introduces constraints that change the evolutionary space, and exploring this interplay between hardware constraints and evolution is the key contribution of this paper. In simulation, any robot that can be defined by a suitable genetic representation can be implemented and evaluated, but in hardware, real-world limitations like manufacturing/assembly constraints and electrical power delivery mean that many of these robots cannot be built, or will malfunction in operation. This presents the novel challenge of how to constrain an evolutionary process within the space of evolvable phenotypes to only those regions that are practically feasible: the viable phenotype space. Methods of phenotype filtering and repair were introduced to address this, and found to degrade the diversity of the robot population and impede traversal of the exploration space. Furthermore, the degrees of freedom permitted by the hardware constraints were found to be poorly matched to the types of morphological variation that would be the most useful in the target environment. Consequently, the ability of the evolutionary process to generate robots with effective adaptations was greatly reduced. The conclusions from this are twofold. 1) Designing a hardware platform for evolving robots requires different thinking, in which all design decisions should be made with reference to their impact on the viable phenotype space. 2) It is insufficient to just evolve robots in simulation without detailed consideration of how they will be implemented in hardware, because the hardware constraints have a profound impact on the evolutionary space.
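To make the filtering-versus-repair idea in the abstract above concrete, here is a minimal toy sketch, my own illustration rather than the paper’s system: an evolutionary loop whose candidate designs are checked against a stand-in hardware constraint and either repaired back into the viable region or, in the filtering variant, discarded. The genome encoding, length budget, and fitness function are all assumptions made for the example.

```python
# Toy sketch of evolving robot "phenotypes" under a hardware-viability
# constraint. Genomes are just lists of limb lengths; the length budget,
# fitness function, and all numbers are assumptions for illustration.
import random

random.seed(0)
GENOME_LEN = 6
POP_SIZE = 20
GENERATIONS = 30
MAX_TOTAL_LENGTH = 2.0   # stand-in for real build/power constraints

def random_genome():
    return [random.uniform(0.05, 0.6) for _ in range(GENOME_LEN)]

def is_viable(genome):
    return sum(genome) <= MAX_TOTAL_LENGTH

def repair(genome):
    # Repair: scale an over-budget design back into the viable region.
    total = sum(genome)
    return [g * MAX_TOTAL_LENGTH / total for g in genome] if total > MAX_TOTAL_LENGTH else genome

def fitness(genome):
    # Toy objective: long, evenly sized limbs (a stand-in for locomotion ability).
    return sum(genome) - 2.0 * (max(genome) - min(genome))

def mutate(genome, sigma=0.05):
    return [min(0.6, max(0.05, g + random.gauss(0.0, sigma))) for g in genome]

population = [repair(random_genome()) for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [repair(mutate(random.choice(parents))) for _ in range(POP_SIZE // 2)]
    # Filtering variant: children = [c for c in children if is_viable(c)]
    population = parents + children

assert all(is_viable(g) for g in population)
print(f"best fitness after {GENERATIONS} generations: {max(map(fitness, population)):.3f}")
```

As the abstract notes, both strategies keep every candidate buildable, but they can shrink population diversity and impede traversal of the search space, which is exactly the interplay the paper examines.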



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Loco-manipulation planning skills are pivotal for expanding the utility of robots in everyday environments. Here, we propose a minimally guided framework that automatically discovers whole-body trajectories jointly with contact schedules for solving general loco-manipulation tasks in premodeled environments. We showcase emergent behaviors for a quadrupedal mobile manipulator exploiting both prehensile and nonprehensile interactions to perform real-world tasks such as opening/closing heavy dishwashers and traversing spring-loaded doors.

I swear the cuteness of a quadruped using a lil foot to hold open a spring-loaded door just never gets old.

[ Science Robotics ] via [ RSL ]

In 2019, Susie Sensmeier became one of the first customers in the United States to receive a commercial drone delivery. She was hooked. Four years later, Susie and her husband, Paul, have had over 1,200 orders delivered to their front yard in Christiansburg, Va., via Wing’s drone delivery service. We believe this sets a world record.

[ Wing ]

At the RoboCup 2023, one challenge was the Dynamic Ball Handling Challenge. The defending team used a static image with the sole purpose of intercepting the ball. The attacking team’s goal was to do at least two passes followed by a goal. This procedure was repeated three times on three different days and fields.

[ B-Human ]

When it comes to space, humans and robots go way back. We rely heavily on our mechanical friends to perform tasks that are too dangerous, difficult, or out of reach for us humans. We’re even working on a new generation of robots that will help us explore in advanced and novel ways.

[ NASA ]

The KUKA Innovation Award has been held annually since 2014 and is addressed to developers, graduates, and research teams from universities and companies. For this year’s award, the applicants were asked to use open interfaces in our newly introduced robot operating system iiQKA and to add their own hardware and software components. Team SPIRIT from the Institute of Robotics and Mechatronics at the German Aerospace Center worked on the automation of maintenance and inspection tasks in the oil and gas industry.

[ Kuka ]

We present tasks of traversing challenging terrain that requires discovering a contact schedule, navigating non-convex obstacles, and coordinating many degrees of freedom. Our hybrid planner has been applied to three different robots: a quadruped, a wheeled quadruped, and a legged excavator. We validate our hybrid locomotion planner in the real world and simulation, generating behaviors we could not achieve with previous methods.

[ ETHZ ]

Giving drones hummingbird performance with no GPS, no motion capture, no cloud computing, and no prior map.

[ Ajna ]

In this video, we introduce a new option for our Ridgeback Omnidirectional Indoor Mobile Platform, available through Clearpath Robotics integration services. This height-adjustable lift column is programmable through ROS and can be configured with MoveIt!

[ Clearpath ]
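For readers wondering what “programmable through ROS” typically looks like in practice, here is a hypothetical minimal sketch using rospy. The topic name and message type are my assumptions for illustration only; they are not Clearpath’s documented lift interface, which ships with its own packages and MoveIt configuration.

```python
#!/usr/bin/env python
# Hypothetical example: command a lift-column height over a ROS 1 topic.
# "/lift_column/height_cmd" and the Float64 message are assumed for
# illustration; they are not the documented Ridgeback lift interface.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("lift_column_demo")
    pub = rospy.Publisher("/lift_column/height_cmd", Float64, queue_size=1)
    rospy.sleep(1.0)                  # give the publisher time to connect
    pub.publish(Float64(data=0.35))   # raise the column to 0.35 m
    rospy.loginfo("Sent lift height command: 0.35 m")

if __name__ == "__main__":
    main()
```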

How do robots understand their surroundings? How do they decide what to pick up next? And how do they learn how to pick it up?

[ Covariant ]

Our Phoenix robots can successfully and accurately perform tasks that require the dexterity of two hands simultaneously, also known as bimanual object manipulation!

[ Sanctuary AI ]

By this point, I should be able to just type O_o and you’ll know it’s a Reachy video, right?

[ Pollen Robotics ]


