Deep below the Louisville, Ky., zoo lies a network of enormous caverns carved out of limestone. The caverns are dark. They’re dusty. They’re humid. And during one week in September 2021, they were full of the most sophisticated robots in the world. The robots (along with their human teammates) were there to tackle a massive underground course designed by DARPA, the Defense Advanced Research Projects Agency, as the culmination of its three-year Subterranean Challenge.

The SubT was first announced in early 2018. DARPA designed the competition to advance practical robotics in extreme conditions, based around three distinct underground environments: human-made tunnels, the urban underground, and natural caves. To do well, the robots would have to work in teams to traverse and map completely unknown areas spanning kilometers, search out a variety of artifacts, and identify their locations with pinpoint accuracy under strict time constraints. To more closely mimic the scenarios in which first responders might utilize autonomous robots, robots experienced darkness, dust and smoke, and even DARPA-controlled rockfalls that occasionally blocked their progress.

With direct funding plus prize money that reached into the millions, DARPA encouraged international collaborations among top academic institutions as well as industry. A series of three preliminary circuit events would give teams experience with each environment.

During the Tunnel Circuit event, which took place in August 2019 in the National Institute for Occupational Safety and Health’s experimental coal mine, on the outskirts of Pittsburgh, many teams lost communication with their robots after the first bend in the tunnel. Six months later, at the Urban Circuit event, held at an unfinished nuclear power station in Satsop, Wash., teams beefed up their communications with everything from a straightforward tethered Ethernet cable to battery-powered mesh network nodes that robots would drop like breadcrumbs as they went along, ideally just before they passed out of communication range. The Cave Circuit, scheduled for the fall of 2020, was canceled due to COVID-19.
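The breadcrumb tactic reduces to a simple onboard rule: watch the quality of the link back to the most recently dropped node, and deploy a new one just before it degrades past usefulness. Here is a minimal Python sketch of such a rule; the RSSI threshold, spacing term, and class interface are illustrative assumptions, not any team's actual implementation.

```python
# Hypothetical breadcrumb-deployment rule: drop a relay node when the link
# back to the last node weakens, but not before covering a minimum distance.
# Thresholds and interface are illustrative, not any SubT team's code.

DROP_THRESHOLD_DBM = -82   # treat the link as "about to fail" below this
MIN_SPACING_M = 10.0       # don't spend scarce nodes on short hops

class BreadcrumbDeployer:
    def __init__(self, nodes_onboard: int):
        self.nodes_left = nodes_onboard
        self.dist_since_drop = 0.0

    def update(self, rssi_dbm: float, dist_traveled_m: float) -> bool:
        """Call once per control cycle; True means drop a node now."""
        self.dist_since_drop += dist_traveled_m
        weak_link = rssi_dbm < DROP_THRESHOLD_DBM
        spaced_out = self.dist_since_drop >= MIN_SPACING_M
        if weak_link and spaced_out and self.nodes_left > 0:
            self.nodes_left -= 1
            self.dist_since_drop = 0.0
            return True
        return False
```

The spacing term matters: dropping on signal strength alone can burn through a robot's limited supply of nodes when the signal flutters around the threshold.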

Team CoSTAR, a collaboration among NASA’s JPL, MIT, Caltech, KAIST, and LTU, inspects the communications-node deployment system on its Husky wheeled robots [top]. CoSTAR’s pack of quadrupeds, consisting of Spot robots from Boston Dynamics modified with customized autonomy payloads [middle], undergoes a hardware check before the team’s final competition run. The Spots are wearing “socks” made from cut-up mountain-bike tires, cable ties, and black tape. Despite the ruggedness of many of the robots, most were research platforms that demanded careful attention from their human teammates, including those on Team Cerberus [bottom]. Evan Ackerman

By the time teams reached the SubT Final Event in the Louisville Mega Cavern, the focus was on autonomy rather than communications. As in the preliminary events, humans weren’t permitted on the course, and only one person from each team was allowed to interact remotely with the team’s robots, so direct remote control was impractical. It was clear that teams of robots able to make their own decisions about where to go and how to get there would be the only viable way to traverse the course quickly.

DARPA outdid itself for the final event, constructing an enormous kilometer-long course within the existing caverns. Shipping containers connected end-to-end formed complex networks, and many of them were carefully sculpted and decorated to resemble mining tunnels and natural caves. Offices, storage rooms, and even a subway station, all built from scratch, made up the urban segment of the course. Teams had one hour to find as many of the 40 artifacts as possible. To score a point, a robot had to report the artifact’s location back to the base station at the course entrance, which was a challenge in the far reaches of the course where direct communication was impossible.
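Scoring itself was mechanical: a report earned a point only if it named the right artifact type and placed it within 5 meters of the surveyed ground-truth position, measured in the course frame DARPA defined at the entrance gate. Here is a sketch of that check in Python; the function and its arguments are hypothetical stand-ins for the competition's actual scoring software.

```python
import math

SCORING_RADIUS_M = 5.0  # SubT rules: a report scored only within 5 m of truth

def report_scores(reported_xyz, truth_xyz, reported_type, truth_type) -> bool:
    """True if an artifact report earns a point: correct type, close enough."""
    distance = math.dist(reported_xyz, truth_xyz)  # Euclidean, Python 3.8+
    return reported_type == truth_type and distance <= SCORING_RADIUS_M

# A survivor reported about 3.2 m from its true position scores a point:
print(report_scores((10.0, 4.0, 1.2), (12.0, 6.5, 1.2), "survivor", "survivor"))
```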

Eight teams competed in the SubT Final, and most brought a carefully curated mix of robots designed to work together. Wheeled vehicles offered the most reliable mobility, but quadrupedal robots proved surprisingly capable, especially over tricky terrain. Drones allowed complete exploration of some of the larger caverns.

By the end of the final competition, two teams had each found 23 artifacts: Team Cerberus—a collaboration of the University of Nevada, Reno; ETH Zurich; the Norwegian University of Science and Technology; the University of California, Berkeley; the Oxford Robotics Institute; Flyability; and the Sierra Nevada Corp.—and Team CSIRO Data61—consisting of CSIRO’s Data61; Emesent; and Georgia Tech. The equal scores triggered a tie-breaker rule: Which team had been the quickest to its final artifact? That gave first place to Cerberus, which had been just 46 seconds faster than CSIRO.

Despite coming in second, Team CSIRO’s robots achieved the astonishing feat of creating a map of the course that differed from DARPA’s ground-truth map by less than 1 percent, effectively matching what a team of expert humans spent many days creating. That’s the kind of tangible, fundamental advance SubT was intended to inspire, according to Tim Chung, the DARPA program manager who ran the challenge.

“There’s so much that happens underground that we don’t often give a lot of thought to, but if you look at the amount of infrastructure that we’ve built underground, it’s just massive,” Chung told IEEE Spectrum. “There’s a lot of opportunity in being able to perceive, understand, and navigate in subterranean environments—there are engineering integration challenges, as well as foundational design challenges and theoretical questions that we have not yet answered. And those are the questions DARPA is most interested in, because that’s what’s going to change the face of robotics in 5 or 10 or 15 years, if not sooner.”


This point cloud assembled by Team CSIRO Data61 shows a robotic view of nearly the entire SubT course, with each dot in the cloud representing a point in 3D space measured by a sensor on a robot. Team CSIRO’s point cloud differed from DARPA’s official map by less than 1 percent.

CSIRO DATA61
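DARPA did not publish the formula behind figures like "less than 1 percent," but a standard way to compare a robot-built point cloud against a surveyed ground-truth cloud is a nearest-neighbor error: for every ground-truth point, measure the distance to the closest point in the robot map. The sketch below, assuming both clouds are already aligned in the same frame, reports the mean error and the fraction of ground-truth points missed beyond a tolerance; the 10-centimeter tolerance is an illustrative choice, not DARPA's.

```python
# Illustrative metric (not DARPA's published formula): nearest-neighbor
# deviation of a robot-built point cloud from a ground-truth survey.
import numpy as np
from scipy.spatial import cKDTree

def map_deviation(robot_cloud: np.ndarray, truth_cloud: np.ndarray,
                  tolerance_m: float = 0.10):
    """Both inputs are (N, 3) XYZ arrays, in meters, in the same frame.

    Returns (mean nearest-neighbor error, fraction of ground-truth points
    with no robot-map point within tolerance_m).
    """
    tree = cKDTree(robot_cloud)              # index the robot map once
    dists, _ = tree.query(truth_cloud, k=1)  # closest robot point per truth point
    return dists.mean(), float(np.mean(dists > tolerance_m))
```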

IEEE Spectrum was in Louisville to cover the Subterranean Final, and we spoke recently with Chung, as well as CSIRO Data61 team lead Navinda Kottege and Cerberus team lead Kostas Alexis, about their SubT experience and the influence the event is having on the future of robotics.

DARPA has hundreds of programs, but most of them don’t involve multiyear international competitions with million-dollar prizes. What was special about the Subterranean Challenge?

TIM CHUNG | DARPA program manager MCKIBILLO

Tim Chung: Every now and then, one of DARPA’s concepts warrants a different model for seeking out innovation. It’s when you know you have an impending breakthrough in a field, but you don’t know exactly how that breakthrough is going to happen, and where the traditional DARPA program model, with a broad announcement followed by proposal selection, might restrict innovation. DARPA saw the SubT Challenge as a way of attracting the robotics community to solving problems that we anticipate being impactful, like resiliency, autonomy, and sensing in austere environments. And one place where you can find those technical challenges coming together is underground.

The skill that these teams had at autonomously mapping their environments was impressive. Can you talk about that?

T.C.: We brought in a team of experts with professional survey equipment who spent many days making a precisely calibrated ground-truth map of the SubT course. And then during the competition, we saw these robots delivering nearly complete coverage of the course in under an hour—I couldn’t believe how beautiful those point clouds were! [See the point cloud image above.] I think that’s really an accelerant. When you can trust your map, you have so much more actionable situational awareness. It’s not a solved problem, but when you can attain the level of fidelity that we’ve seen in SubT, that’s a gateway technology with the potential to unlock all sorts of future innovation.

Autonomy was a necessary part of SubT, but having a human in the loop was critical as well. Do you think that humans will continue to be a necessary part of effective robotic teams, or is full autonomy the future?

T.C.: Early in the competition, we saw a lot of hand-holding, with humans giving robots low-level commands. But teams quickly realized that they needed a more autonomous approach. Full autonomy is hard, though, and I think humans will continue to play a pretty big role, just a role that needs to evolve and change into something that focuses on what humans do best.

I think that progressing from human operators to human supervisors will enhance the types of missions that human-robot teams will be able to conduct. In the final event, we saw robots on the course exploring and finding artifacts, while the human supervisor was focused on other stuff and not even paying attention to the robots. That was so cool. The robots were doing what they needed to do, leaving the human free to make high-level decisions. That’s a big change: from what was basically remote teleoperation to “you robots go off and do your thing and I’ll do mine.” And it’s incumbent on the robots to become even more capable so that the transition [of the human] from operator to supervisor can occur.

An ANYmal quadruped from Team Cerberus enters the course [top]. During the competition, only robots and DARPA staff were allowed to cross this threshold. The visual markers surrounding the course entrance provided a precise origin point from which the robots would base the maps they created. This allowed DARPA to measure the accuracy of the artifact locations that teams reported to score points. Cerberus’s ANYmal exits the urban section of the course, modeled after a subway station [bottom], and enters the tunnel section of the course, based on an abandoned mine. Evan Ackerman

What are some remaining challenges for robots in underground environments?

T.C.: Traversability analysis and reasoning about the environment are still a problem. Robots will be able to move through these environments at a faster clip if they can understand a little bit more about where they’re stepping or what they’re flying around. So, despite the fact that they were one to two orders of magnitude faster than humans for mapping purposes, the robots are still relatively slow. Shaving off another order of magnitude would really help change the game. Speed would be the ultimate enabler and have a dramatic impact on first-response scenarios, where every minute counts.

What difference do you think SubT has made, or will make, to robotics?

T.C.: The fact that many of the technologies being used in the SubT Challenge are now being productized and commercialized means that the time horizon for robots to make it into the hands of first responders has been far shortened, in my opinion. It’s already happened, and was happening, even during the competition itself, and that’s a really great impact.

What’s difficult and important about operating robots underground?

NAVINDA KOTTEGE | CSIRO Data61 team lead MCKIBILLO

Navinda Kottege: The fact that we were in a subterranean environment was one aspect of the challenge, and a very important aspect, but if you break it down, what the SubT Challenge meant was that we were in a GPS-denied environment, where you can’t rely on communications, with very difficult mobility challenges. There are many other scenarios where you might encounter these things—the Fukushima nuclear disaster, for example, wasn’t underground, but communication was a massive issue for the robots they tried to send in. The Amazon Rainforest is another example where you’d encounter similar difficulties in communication and mobility. So we saw how each of these component technologies that we would have to develop and mature would have applications in many other domains beyond the subterranean.

Where is the right place for a human in a human-robot team?

N.K.: There are two extremes. One is that you push a button and the robots go and do their thing. The other is what we call “human in the loop,” where it’s essentially remote control through high-level commands. But if the human is taken out of the loop, the loop breaks and the system stops, and we were experiencing that with brittle communications. The middle ground is a “human on the loop” concept, where you have a human supervisor who sets mission-level goals, but if the human is taken off of the loop, the loop can still run. The human added value because they had a better overview of what was happening across the whole scenario, and that’s the sort of thing that humans are super, super good at.
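The distinction is easy to see in code. A human-in-the-loop design blocks waiting for the next command; a human-on-the-loop design polls for guidance and keeps executing its current mission goal when none arrives. A minimal sketch follows, with the queue-based interface and goal strings as assumptions rather than anything from Team CSIRO's stack.

```python
# "Human on the loop": the autonomy loop never blocks on the supervisor.
# The queue interface and goal strings are illustrative assumptions.
import queue

def autonomy_loop(supervisor_inbox: queue.Queue, default_goal: str,
                  ticks: int = 100) -> None:
    goal = default_goal
    for _ in range(ticks):                    # one iteration per control cycle
        try:
            goal = supervisor_inbox.get_nowait()  # new mission goal, if any
        except queue.Empty:
            pass                              # human silent or link down: keep going
        step_toward(goal)

def step_toward(goal: str) -> None:
    # Placeholder for one cycle of the onboard stack: plan, move, sense.
    pass

# The supervisor pushes occasional goals; the robot never waits for them.
inbox = queue.Queue()
inbox.put("inspect_subway_platform")
autonomy_loop(inbox, default_goal="explore_nearest_frontier", ticks=10)
```

The only structural change from teleoperation is the non-blocking read: a silent or disconnected supervisor slows nothing down, which is exactly the behavior teams needed when communications turned brittle.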

The subway station platform [top] incorporated many challenges for robots. Wheeled and tracked robots had particular difficulty with the rails. DARPA hid artifacts in the ceiling of the subway station (accessible only by drone), as well as under a grate in the platform floor. In addition to building many customized tunnels and structures inside the Louisville Mega Cavern, DARPA also incorporated the cavern itself into the course. This massive room [bottom] rewarded robots that managed to explore it with several additional artifacts. Evan Ackerman

How did SubT advance the field of robotics?

N.K.: For field robots to succeed, you need multiple things to work together. And I think that’s what was forced upon us by the level of complexity of the SubT Challenge. This whole notion of being able to reliably deploy robots in real-world scenarios was, to me, the key thing. Looking back at our team, three years ago we had some cool bits and pieces of technology, but we didn’t have robot systems that could reliably work for an hour or more without a human having to go and fix something. That was one of the biggest advances we had, because now, as we continue this work, we don’t even have to think twice about deploying our robots and whether they’ll destroy themselves if we leave them alone for 10 minutes. It’s that level of maturity that we’ve achieved, thanks to the robustness and reliability that we had to engineer into our systems to be successful at SubT, and now we can start focusing on the next step: What can you do when you have a fleet of autonomous robots that you can rely on?

Your team of robots created a map of the course that matched DARPA’s official map with an accuracy of better than 1 percent. That’s amazing.

N.K.: I got contacted immediately after the final event by the company that DARPA brought in to do the ground-truth mapping of the SubT course. They’d spent 100 person-hours using very expensive equipment to make their map, and they wanted to know how in the world we got our map in under an hour with a bunch of robots. It’s a good question! But the context is that our one hour of mapping took us 15 years of development to get to that stage.

There’s a difference in what’s theoretically possible and what actually works in the real world. In its early stages, our software worked, in that it hit all of the theoretical milestones it was supposed to. But then we started taking it out to the real world and testing it in very difficult environments, and that’s where we started finding all the edge cases of where it breaks. Essentially, for the last 10-plus years, we were trying to break our mapping system as much as possible, and that turned it into a really well-engineered solution. Honestly, whenever we see the results of our mapping system, it still surprises us!

What made you decide to participate in the SubT Challenge?

KOSTAS ALEXIS | Cerberus team lead MCKIBILLO

Kostas Alexis: What motivated everyone was the understanding that for autonomous robots, this challenge was extremely difficult and relevant. We knew that robotic systems could operate in these environments if humans accompanied them or teleoperated them, but we also knew that we were very far away from enabling autonomy. And we understood the value of being able to send robots instead of humans into danger. It was this combination of societal impact and technical challenge that was appealing to us, especially in the context of a competition where you can’t just do work in the lab, write a paper, and call it a day—you had to develop something that would work all the way through the finals.

Tight cave sections [top] required careful navigation by ground robots. Stalactites and stalagmites were especially treacherous for drones in flight. At the right of the picture, partially hidden by a column, is a blue coil of rope, one of the artifacts. A Team Cerberus ANYmal [bottom] walks past a decorative (but not inaccurate) warning sign, next to a drill artifact. Evan Ackerman

What was the most challenging part of SubT for your team?

K.A.: We are at the stage where we can navigate robots in normal officelike environments, but SubT had many challenges. First, relying on communications with our robots was not possible. Second, the terrain was not easy. Typically, even terrain that is hard for robots is easy for humans, but the natural cave terrain has been the only time I’ve felt like the terrain was a challenge for humans too. And third, there’s the scale of kilometer-size environments. The robots had to demonstrate a level of robustness and resourcefulness in their autonomy and functionality that the current state-of-the-art in robotics could not demonstrate. The great thing about the SubT Challenge was that DARPA started it knowing that robotics did not have that capacity, but asked us to deliver a competitive team of robots three years down the road. And I think that approach went well for all the teams. It was a great push that accelerated research.

As robots get more autonomous, where will humans fit in?

K.A.: It is a fact now that we can have very good maps from robots, and it is a fact that we have object detection, and so on. However, we do not have a way of correlating all the objects in the environment and their possible interactions. So, although we can create awesome, beautiful, accurate maps, we are not equally good at reasoning.

This is really about time. If we were performing a mission where we wanted to guarantee full exploration and coverage of a place with no time limit, we likely wouldn’t need a human in the loop—we can automate this fully. But when time is a factor and you want to explore as much as you can, then the human ability to reason through data is very valuable. And even if we can make robots that sometimes perform as well as humans, that doesn’t necessarily translate to novel environments.

The other aspect is societal. We make robots to serve us, and in all of these critical operations, as a roboticist myself, I would like to know that there is a human making the final calls.

While most of the course was designed to look as much like real underground environments as possible, DARPA also included sections that posed very robot-specific challenges. Robots had the potential to get disoriented in this blank white hallway (part of the urban section of the course) if they couldn’t identify unique features to differentiate one part of the hallway from another. Evan Ackerman

Do you think SubT was able to solve any significant challenges in robotics?

K.A.: One thing I’m very proud of for my team is that SubT established that legged robotic systems can be deployed under the most arbitrary of conditions. [Team Cerberus deployed four ANYmal C quadrupedal robots from Swiss robotics company ANYbotics in the final competition.] We knew before SubT that legged robots were magnificent in the research domain, but now we also know that if you have to deal with complex environments on the ground or underground, you can take legged robots combined with drones and you should be good to go.

When will we see practical applications of some of the developments made through SubT?

K.A.: I think commercialization will happen much faster through SubT than what we would normally expect from a research activity. My opinion is that the time scale is counted in terms of months—it might be a year or so, but it’s not a matter of multiple years, and typically I’m conservative on that front.

In terms of disaster response, now we’re talking about responsibility. We’re talking about systems with virtually 100 percent reliability. This is much more involved, because you need to be able to demonstrate, certify, and guarantee that your system works across so many diverse use cases. And the key question: Can you trust it? This will take a lot of time. With SubT, DARPA created a broad vision. I believe we will find our way toward that vision, but before disaster response, we will first see these robots in industry.

This article appears in the May 2022 print issue as “Robots Conquer the Underground.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, Philadelphia
IEEE ARSO 2022: 28–30 May 2022, Long Beach, Calif.
RSS 2022: 21 June–1 July 2022, New York City
ERF 2022: 28–30 June 2022, Rotterdam, Netherlands
RoboCup 2022: 11–17 July 2022, Bangkok
IEEE CASE 2022: 20–24 August 2022, Mexico City
CLAWAR 2022: 12–14 September 2022, Azores, Portugal

Enjoy today’s videos!

Strandbeest Evolution 2021 provides an update on the evolutionary development. Every spring I go to the beach with a new beast. During the summer I do all kinds of experiments with the wind, sand, and water. In the fall I grow a bit wiser about how these beasts can survive the circumstances on the beach. At that point I declare them extinct and they go to the bone yard.

[ Strandbeest ]

MIT CSAIL scientists created an algorithm to solve one of the hardest tasks in computer vision: assigning a label to every pixel in the world, without human supervision.

I don’t know how many pixels there are in the world, but I’m guessing it’s kind of a lot. Good luck STEGO!

[ MIT ]

A clever design for an antidrone drone, although from the look of things, you’ll have to be very talented to catch anything with it.

[ Aleksey Zaitsevsky ] via [ Gizmodo ]

Poramate Manoonpong shares his gecko-inspired climbing robot (Nyxbot), which can climb a 30-degree slope and cross over obstacles up to 38 percent of its body height.

He’s also been working on this hexapod robot with some of the cutest li’l feet I've ever seen:

[ Poramate Manoonpong ]

We wrote about Calmbots a couple of years ago, but no reason not to watch this video and be squigged out again.

[ Calmbots ]

NASA’s Perseverance Mars rover used its Mastcam-Z camera system to shoot video of Phobos, one of Mars’ two moons, eclipsing the Sun. It’s the most zoomed-in, highest frame-rate observation of a Phobos solar eclipse ever taken from the Martian surface.

[ JPL ]

Get ready for some dramatic music and experience a day in the life of a busy Starship Robot as it autonomously delivers throughout the city of Milton Keynes from dusk until dawn. Our robots are completing tens of thousands of autonomous deliveries a day all over the world.

[ Starship ]

Our purpose-built vehicle is designed and engineered to drive safely in rain, hail, or shine. In this episode of Putting Zoox to the Test, members from our Durability team explain how we’re testing our hardware to ensure it functions as it should in wet weather.

[ Zoox ]

This video about a partnership between Sarcos and Palantir is astonishing because of how little it manages to say relative to its length and (I’m guessing) budget.

[ Palantir ]

Dr. Heidi from Philippines Flying Labs provides a first report of the drone delivery training in Tawi Tawi along with the very first medical drone deliveries that took place as part of this training.

[ PFL ]

Stanford was one of the pioneers in artificial intelligence. Hear from professors such as Chris Manning and Fei-Fei Li on the earliest days of natural language processing and computer vision, the work of scholars John McCarthy and Jay McClelland, the launch of the Stanford AI Lab, early robotics at the school, and other pivotal moments in Stanford AI.

[ Stanford HAI ]

In honor of ASIMO’s retirement, here’s a 20-minute history about the robot from Honda.

[ ASIMO ]

On this episode of the “Robot Brains Podcast,” Pieter talks with Eric Horvitz of Microsoft about AI for the Greater Good.

[ Robot Brains ]

Here are two talks hosted by UPenn’s GRASP Lab, featuring Kevin Lynch from Northwestern University, followed by Robert J. Wood from Harvard University.

[ UPenn ]



Honda’s ASIMO humanoid robot is retiring. For the last 20 years, ASIMO had been performing at the Honda showroom in Tokyo, Japan, but these regular demonstrations are now at an end. We’ve known for a while that this was coming—Honda announced back in 2018 that it was halting ASIMO development in favor of working on robots with more practical applications, like robots for elder care and disaster relief. But what blows me away about ASIMO, even now, is just how impressive it still is.

The most recent version of ASIMO was announced in 2011. As I watch this performance now, I have to keep reminding myself that this was all happening more than 10 years ago.

That’s decade-old sensing, actuation, compute, batteries—even still, what ASIMO is demonstrating are things that are absolutely not easy for humanoid robots even now. And like, the robot still looks so futuristic, right? The design is wonderful, all the movements are buttery smooth, and ASIMO would not be out of place in any science-fiction movie. This little robot really did set a (still somewhat aspirational) standard, especially relative to other humanoid robots, which have only within the last few years been able to match and then significantly surpass ASIMO’s performance, if not its looks.

The current generation of ASIMO is part of a lineage of humanoid robotics research at Honda stretching back to the mid-1980s:

As recently as 2017, Honda was still making improvements to ASIMO’s software and presenting that research at conferences. Here’s a video from ICRA that year, featuring a naked (!) ASIMO being mildly abused:

But Honda has more recently seemed to realize that they could take the ASIMO platform and the philosophy of humanoid robotics that it represents only so far, and as of 2018 the company shifted development to a clearly ASIMO-inspired but much more robust robot called E2-DR:

Clearly, there’s a lot more potential with a rugged platform like E2-DR, both for research and for exploring practical tasks in the near (or at least nearer) term. I’m glad that Honda is continuing the research into legged robots that it pioneered so many decades ago. But E2-DR is not ASIMO. It’s not trying to be, and that’s probably a good thing, but a part of me still mourns the vision of friendly and helpful humanoids that ASIMO represented.

I’ll miss you, buddy.

The author, a decade younger than he is now, with an earlier version of ASIMO at Stanford University in 2011. Yes, ASIMO is very, very short.



There has been much interest in designing robots that are agile enough to navigate through tight spaces. This ability could be useful in assessing disaster zones or pipelines, for example.

But, choosing the right design is crucial to success in such applications.

“Though legged robots are very promising for use in real-world applications, it is still challenging for them to operate in narrow spaces,” explains Qing Shi, a Professor at the Beijing Institute of Technology. “Large quadruped robots cannot enter narrow spaces, while micro quadruped robots can enter the narrow spaces but face difficulty in performing tasks owing to their limited ability to carry heavy loads.”

Instead of designing a large four-legged robot or microrobots, Shi and his colleagues decided to create a robot inspired by an animal highly adept at squeezing through tight spaces and turning on a dime: the rat.

In a study published on 7 April in IEEE Transactions on Robotics, they demonstrate how their new rat-inspired robot, SQuRo (small-sized Quadruped Robotic Rat), can walk, crawl, climb over objects, and turn sharply with unprecedented agility. What’s more, SQuRo can recover from falls, like its organic inspiration.

Shi and his colleagues first used X-rays of real rats to better understand the animal’s anatomy—especially its joints. They then designed SQuRo with a structure, movement patterns, and degrees of freedom (DOF) similar to those of the rodents they studied. This includes two DOFs in each limb, the waist, and the head; the setup allows the robot to replicate a real rat's flexible spine movement.
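Reading that description as two DOFs in each of the four limbs plus two in the waist and two in the head gives a 12-DOF machine. A small data-structure sketch makes the layout concrete; the joint names are invented for illustration and are not the authors' terminology.

```python
# SQuRo's joint layout as described above: two DOFs per limb, two in the
# waist, and two in the head. Joint names are invented for illustration.
DOF_LAYOUT = {
    "fore_left_limb":  ("hip_pitch", "knee_pitch"),
    "fore_right_limb": ("hip_pitch", "knee_pitch"),
    "hind_left_limb":  ("hip_pitch", "knee_pitch"),
    "hind_right_limb": ("hip_pitch", "knee_pitch"),
    "waist":           ("waist_yaw", "waist_pitch"),
    "head":            ("head_yaw", "head_pitch"),
}

total_dof = sum(len(joints) for joints in DOF_LAYOUT.values())
assert total_dof == 12  # 4 limbs x 2 + waist x 2 + head x 2
```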

SQuRo was then put to the test through a series of experiments, first exploring its ability to perform four key motions: crouching-to-standing, walking, turning, and crawling. The turning results were especially impressive, with SQuRo demonstrating it can turn on a very tight radius of less than half its own body length. “Notably, the turning radius is much smaller than other robots, which guarantees the agile movement in narrow space,” says Shi.

Next, the researchers tested SQuRo in more challenging scenarios. In one situation they devised, the robotic rodent had to make its way through a narrow, irregular passage that mimicked a cave environment. SQuRo successfully navigated the passageway. In another scenario, SQuRo successfully toted a 200-gram weight (representing 91 percent of its own weight) across a field that included inclines of up to 20 degrees.

A Robotic Rat that Does It All

Importantly, any robot that is navigating disaster zones, pipelines, or other challenging environments will need to be able to climb over any obstacles it encounters. With that in mind, the researchers also designed SQuRo so that it can lean back on its haunches and put its forelimbs in position to climb over an object, similar to what real rats do. In an experiment, they show that SQuRo can overcome obstacles 30 millimeters high (33 percent of its own height) with a success rate of 70 percent. In a final experiment, SQuRo was able to right itself after falling on its side.
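
The ratios quoted above also let us back out SQuRo’s approximate size. A quick sanity check, using only the figures in the text (all values approximate):

    # Back-of-envelope estimate of SQuRo's implied size from the reported ratios.
    payload_g = 200            # carried load
    payload_ratio = 0.91       # load as a fraction of body weight
    obstacle_mm = 30           # obstacle height climbed
    obstacle_ratio = 0.33      # obstacle as a fraction of body height

    body_mass_g = payload_g / payload_ratio        # ~220 g
    body_height_mm = obstacle_mm / obstacle_ratio  # ~91 mm

    print(f"Implied body mass:   {body_mass_g:.0f} g")
    print(f"Implied body height: {body_height_mm:.0f} mm")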

“To the best of our knowledge, SQuRo is the first small-sized quadruped robot of this scale that is capable of performing five motion modes, which includes crouching-to-standing, walking, crawling, turning, and fall recovery,” says Shi.

He says the team is interested in commercializing the robot and plans to improve its agility via closed-loop control and in-depth dynamic analysis. “Moreover, we will install more sensors on the robot to conduct field tests in narrow unstructured pipelines,” says Shi. “We are confident that SQuRo has the potential to be used in pipeline [fault] detection after being equipped with cameras and other detection sensors.”



There are all kinds of things that you can do with robots. This, in some sense, is a problem with robots—just because some task or problem can be solved with robots does not necessarily mean that task or problem should be solved with robots.

Is suburban drone delivery of consumer goods one of these things? Wing (a subsidiary of Alphabet) is determined to prove that drone delivery can work and has absolutely made it work, recently completing 200,000 deliveries in Australia and following that up with an expansion into a community in Texas, launching this month. Does this conclusively prove that there’s demand and Wing’s business model is realistic and sustainable? Maybe. But probably not.

Honestly, if I forget to pick up a loaf of bread at the grocery store, is the best solution really to have this enormously complex and expensive delivery-drone infrastructure in place to get me that loaf of bread? Or a single apple? Yeah, that’s just the PR video, but it’s in line with how Wing advertises its services.

As we have reported in the past, drone delivery definitely has value. I feel like that value lies in time-critical, very-high-value items that cannot be delivered efficiently any other way. In those cases, send in the drones. Medications and vaccines are the classic example. But seeing a drone deliver an apple makes me wince a little bit, because it feels a little too much like using this amazing technology simply for its own sake, rather than for a practical, viable reason.

I do want to acknowledge that getting goods (including food) to people is an important task, and a task that robots are well qualified for. I totally want my groceries to be delivered by robot, but rarely do my groceries consist of a single apple, and even more rarely do I have a desperate need for an apple right now. And there are other ways of solving the last mile (or last several miles) problem, whether it’s sidewalk robots or autonomous cars, both of which have the potential to carry much more cargo much farther than a drone can, making them (I would argue) more broadly useful, especially in a suburban setting.

It’s true that drone delivery is unique in that it’s fast and direct, and there are times when that’s necessary. It’s also true that sidewalk robots and larger autonomous vehicles have their own problems, many of which can be circumvented with drones. I think that the place for consumer delivery drones is as part of a broader robotic delivery ecosystem, with different kinds of robots doing the tasks that they are best suited for. Does that ecosystem exist yet? No. Is Wing helping us get closer to that ecosystem? Almost definitely. Should our long-term expectation be that whenever we want an apple or a taco or a cup of coffee, a drone will bring it to us in 10 minutes or less because that’s the best way for our desires to be fulfilled? Honestly, I kind of hope not.

What Wing has done in Australia and will continue to do in Texas proves that suburban drone delivery can work in the sense that it can function from a technical perspective, which is no small feat. The hardware, the software, the infrastructure, operating at scale for years: that’s all incredibly impressive, and I’m not trying to minimize those achievements. But it doesn’t necessarily prove that a problem is being solved either efficiently or sustainably, especially relative to other ways of solving similar problems, and it’s also not clear just how much Wing is subsidizing the cost of all of this, or what its long-term business plan is.

Wing

If we now assume that Wing at least has a good handle on the technical challenges surrounding suburban drone delivery (even if it hasn’t solved them completely), that still leaves three big questions. First, does this massive investment of time and money and engineering talent from Wing actually translate into a business model that’s profitable long-term? Second, are small drones carrying up to 1.2 kilograms of cargo at a time really the best way of solving the problem of getting goods to the people that need them in a timely manner? And third, even if suburban delivery drones can fulfill this role, should they? Is Wing using its resources and world-class engineering expertise to leverage robotics into creatively solving real-world problems, or is it instead using the inherent novelty of drone delivery to justify a bonkers amount of infrastructure just to get a select group of people single items of food on demand?

For its part, Wing did make this comment in a recent blog post:

Integrating drone delivery into daily life isn’t just an added convenience. It holds the promise to reduce traffic congestion, accidents, and greenhouse gas emissions while growing sales for businesses all the while giving people more time back in their busy lives.

Those are big, important promises. I sincerely hope Wing can deliver.



Soft organisms in nature have always been a source of inspiration for the design of soft arms, and this paper draws its inspiration from the octopus’s tentacle, aiming at a soft robot that can move flexibly in three-dimensional space. Based on the tentacle’s characteristics, a cable-driven soft arm is designed and fabricated that can maneuver flexibly in three dimensions. Using the TensorFlow framework, a data-driven model is established and trained with a deep reinforcement learning strategy to realize posture control of a single soft arm. Finally, two trained soft arms are assembled into an octopus-inspired biped walking robot that can go forward and turn around. Experimental analysis shows that the robot achieves an average speed of 7.78 cm/s and a maximum instantaneous speed of 12.8 cm/s.
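
To illustrate the flavor of learned posture control, here is a minimal, self-contained sketch: a toy two-link planar arm whose tip is driven toward a target posture by simple random-search policy updates. It stands in for, and is far simpler than, the paper’s TensorFlow deep-reinforcement-learning pipeline; the segment lengths and target are made up.

    # Toy posture-control loop for a two-link planar arm, trained by
    # random-search hill climbing. Illustrative only; not the authors' method.
    import numpy as np

    L1, L2 = 1.0, 1.0  # hypothetical segment lengths

    def tip_position(angles):
        """Forward kinematics of the two-link arm."""
        a1, a2 = angles
        x = L1 * np.cos(a1) + L2 * np.cos(a1 + a2)
        y = L1 * np.sin(a1) + L2 * np.sin(a1 + a2)
        return np.array([x, y])

    def reward(angles, target):
        """Negative distance from the arm tip to the target posture."""
        return -np.linalg.norm(tip_position(angles) - target)

    rng = np.random.default_rng(0)
    target = np.array([1.2, 0.8])
    angles = rng.uniform(-np.pi, np.pi, size=2)  # initial posture command

    for _ in range(2000):
        candidate = angles + rng.normal(scale=0.05, size=2)
        if reward(candidate, target) > reward(angles, target):
            angles = candidate  # keep the better posture command

    print("final tip:", tip_position(angles), "target:", target)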

Detecting changes such as moved, removed, or new objects is the essence for numerous indoor applications in robotics such as tidying-up, patrolling, and fetch/carry tasks. The problem is particularly challenging in open-world scenarios where novel objects may appear at any time. The main idea of this paper is to detect objects from partial 3D reconstructions of interesting areas in the environment. In our pipeline we first identify planes, consider clusters on top as objects, and compute their point-pair-features. They are used to match potential objects and categorize them robustly into static, moved, removed, and novel objects even in the presence of partial object reconstructions and clutter. Our approach dissolves heaps of objects without specific object knowledge, but only with the knowledge acquired from change detection. The evaluation is performed on real-world data that includes challenges affecting the quality of the reconstruction as a result of noisy input data. We present the novel dataset ObChange for quantitative evaluation, and we compare our method against a baseline using learning-based object detection. The results show that, even with a targeted training set, our approach outperforms the baseline for most test cases. Lastly, we also demonstrate our method’s effectiveness in real robot experiments.
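
As a concrete illustration of the categorization step, here is a minimal Python sketch: objects from two visits, each reduced to a descriptor vector and a position, are matched by nearest descriptor and labeled static, moved, removed, or novel. The thresholds and the simple descriptor matching are stand-ins for the paper’s point-pair-feature machinery.

    # Minimal change-categorization sketch: label objects as static, moved,
    # removed, or novel by matching descriptors between two visits.
    import numpy as np

    def categorize(before, after, desc_tol=0.1, pos_tol=0.05):
        """before/after map object id -> (descriptor, position)."""
        labels, matched = {}, set()
        for name_a, (desc_a, pos_a) in after.items():
            # Find the most similar previously seen object.
            best, best_d = None, np.inf
            for name_b, (desc_b, _) in before.items():
                d = np.linalg.norm(desc_a - desc_b)
                if d < best_d:
                    best, best_d = name_b, d
            if best is not None and best_d < desc_tol:
                matched.add(best)
                moved = np.linalg.norm(pos_a - before[best][1]) > pos_tol
                labels[name_a] = "moved" if moved else "static"
            else:
                labels[name_a] = "novel"
        for name_b in before:  # anything unmatched has disappeared
            if name_b not in matched:
                labels[name_b] = "removed"
        return labels

    before = {"mug": (np.array([0.20, 0.90]), np.array([0.0, 0.0, 0.0]))}
    after = {"obj1": (np.array([0.21, 0.88]), np.array([0.5, 0.0, 0.0]))}
    print(categorize(before, after))  # {'obj1': 'moved'}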

Social touch is essential to everyday interactions, but current socially assistive robots have limited touch-perception capabilities. Rather than build entirely new robotic systems, we propose to augment existing rigid-bodied robots with an external touch-perception system. This practical approach can enable researchers and caregivers to continue to use robotic technology they have already purchased and learned about, but with a myriad of new social-touch interactions possible. This paper presents a low-cost, easy-to-build, soft tactile-perception system that we created for the NAO robot, as well as participants’ feedback on touching this system. We installed four of our fabric-and-foam-based resistive sensors on the curved surfaces of a NAO’s left arm, including its hand, lower arm, upper arm, and shoulder. Fifteen adults then performed five types of affective touch-communication gestures (hitting, poking, squeezing, stroking, and tickling) at two force intensities (gentle and energetic) on the four sensor locations; we share this dataset of four time-varying resistances, our sensor patterns, and a characterization of the sensors’ physical performance. After training, a gesture-classification algorithm based on a random forest identified the correct combined touch gesture and force intensity on windows of held-out test data with an average accuracy of 74.1%, which is more than eight times better than chance. Participants rated the sensor-equipped arm as pleasant to touch and liked the robot’s presence significantly more after touch interactions. Our promising results show that this type of tactile-perception system can detect necessary social-touch communication cues from users, can be tailored to a variety of robot body parts, and can provide HRI researchers with the tools needed to implement social touch in their own systems.
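
For readers who want the shape of the classification pipeline, here is a hedged sketch using scikit-learn: windowed resistance signals are summarized by simple per-channel statistics and fed to a random forest. The synthetic data and the particular features are assumptions for illustration; the paper’s dataset and feature set will differ.

    # Sketch of windowed touch-gesture classification with a random forest.
    # Synthetic data stands in for the released sensor dataset.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_windows, window_len, n_channels = 600, 50, 4  # four resistive sensors
    X_raw = rng.normal(size=(n_windows, window_len, n_channels))
    y = rng.integers(0, 10, size=n_windows)  # 5 gestures x 2 intensities

    # Summarize each window with per-channel statistics.
    feats = np.concatenate(
        [X_raw.mean(1), X_raw.std(1), X_raw.min(1), X_raw.max(1)], axis=1
    )

    X_tr, X_te, y_tr, y_te = train_test_split(feats, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on pure noise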



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, Philadelphia
IEEE ARSO 2022: 28–30 May 2022, Long Beach
RSS 2022: 27 June–1 July 2022, New York
ERF 2022: 28–30 June 2022, Rotterdam, the Netherlands
RoboCup 2022: 11–17 July 2022, Bangkok, Thailand
IEEE CASE 2022: 20–24 August 2022, Mexico City, Mexico
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Enjoy today's videos!

NASA’s Perseverance Mars rover is using its self-driving capabilities as it treks across Jezero Crater seeking signs of ancient life and gathering rock and soil samples for a planned return to Earth. With the help of special 3D glasses, rover drivers on Earth plan routes with specific stops, but increasingly allow the rover to “take the wheel” and choose how it gets to those stops. Perseverance's auto-navigation system, known as AutoNav, makes 3D maps of the terrain ahead, identifies hazards, and plans a route around any obstacles without additional direction from controllers back on Earth.

[ JPL ]
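
For a feel of the “map, flag hazards, plan around them” loop that the AutoNav blurb above describes, here is a toy breadth-first search over a small occupancy grid. This is purely illustrative and bears no relation to JPL’s actual planner.

    # Toy hazard-avoiding route planner on an occupancy grid (0 = clear, 1 = hazard).
    from collections import deque

    GRID = [
        [0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0],
    ]

    def plan(start, goal):
        """Shortest 4-connected path from start to goal, or None."""
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:  # walk back through came_from to recover the path
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                        and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    print(plan((0, 0), (3, 3)))  # a route that detours around the hazards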

Cassie walking while carrying two heavy jugs swinging all over the place is damn impressive.

[ DRL ]

The Suzumori Endo Lab at Tokyo Tech has developed a soft tensegrity robot driven by thin artificial muscles, the movement of which makes me vaguely uncomfortable.

[ Tokyo Tech ]

MIT engineers have developed a telerobotic system to help surgeons quickly and remotely treat patients experiencing a stroke or aneurysm. With a modified joystick, surgeons in one hospital may control a robotic arm at another location to safely operate on a patient during a critical window of time that could save the patient’s life and preserve their brain function.

[ MIT ]

For LOVOT, a new robot accessory that you never knew you (or anyone) needed.

[ LOVOT ]

Meet Cassie, an electrical engineer at Boston Dynamics, as she answers real questions from kids and other curious minds—from how to start a career in robotics to Spot’s favorite color.

[ Boston Dynamics ]

2021 research highlights from the CNRS-AIST Joint Robotics Laboratory.

[ CNRS-AIST JRL ]

The Rai Lab at KAIST would like to introduce Raibo, a “dynamic and versatile quadruped robot.”

[ Rai Lab ]

The M1600 was designed for high-volume robotic applications with direct input from our customers. This durable and compact sensor can be deployed in a wide variety of environmental and weather conditions, allowing for 365-day, 24/7 usage. It can provide the smart, real-time point clouds required by autonomous mobile robots and last-mile delivery for safe and extended operation without human intervention.

[ Velodyne ]

While I appreciate that Digit is a robot made for work, I’d love to see them build a robot made for fun.

[ Agility Robotics ]

Getting drones to perch on vertical surfaces would be much easier if we just covered the world in velcro, wouldn’t it?

[ NYU ARPL ]

An upcoming ICRA paper from Kostas Alexis at the Norwegian University of Science and Technology about the SubT strategy of Team Cerberus.

This paper presents a novel strategy for autonomous teamed exploration of subterranean environments using legged and aerial robots. Tailored to the fact that subterranean settings, such as cave networks and underground mines, often involve complex, large-scale and multibranched topologies, while wireless communication within them can be particularly challenging, this work is structured around the synergy of an onboard exploration path planner that allows for resilient long-term autonomy, and a multirobot coordination framework.

[ GitHub ]

Two excellent seminars from the University of Toronto Robotics Institute, featuring Henny Admoni from Carnegie Mellon University and Dorsa Sadigh from Stanford.

UofT Robotics Institute Seminar: Dorsa Sadigh on Learning from Non-Traditional Sources of Data

[ UofT ]

The CMU Teruko Yata Memorial Lecture is presented by Jeannette Bohg from Stanford, on Leveraging Language and Video Demonstrations for Learning Robot Manipulation Skills and Enabling Closed-Loop Task Planning.

Humans have gradually developed language, mastered complex motor skills, and created and utilized sophisticated tools. The act of conceptualization is fundamental to these abilities because it allows humans to mentally represent, summarize, and abstract diverse knowledge and skills. By means of abstraction, concepts that we learn from a limited number of examples can be extended to a potentially infinite set of new and unanticipated situations. Abstract concepts can also be more easily taught to others by demonstration.

[ CMU ]



Magnetically actuated robots have become increasingly popular in medical endoscopy over the past decade. Despite the significant improvements in autonomy and control methods, progress within the field of medical magnetic endoscopes has mainly been in the domain of enhanced navigation. Interventional tasks such as biopsy, polyp removal, and clip placement are a major procedural component of endoscopy, yet little advancement has been made in this area because of the difficulty of adequately controlling and stabilizing magnetically actuated endoscopes for interventional tasks. In the present paper we discuss a novel model-based Linear Parameter Varying (LPV) control approach to provide stability during interventional maneuvers. This method linearizes the nonlinear dynamic interaction between the external actuation system and the endoscope at a set of equilibria, associated with different distances between the magnetic source and the endoscope, and computes a different controller for each equilibrium. This approach provides global stability of the overall system and robustness against external disturbances. The performance of the LPV approach is compared to an intelligent teleoperation control method (based on a Proportional Integral Derivative (PID) controller) on the Magnetic Flexible Endoscope (MFE) platform. Four biopsies in different regions of the colon and at two different system equilibria are performed. Both controllers are asked to stabilize the endoscope in the presence of external disturbances (i.e., the introduction of the biopsy forceps through the working channel of the endoscope). The experiments, performed in a benchtop colon simulator, show a maximum reduction in the mean orientation error of the endoscope of 45.8% with the LPV control compared to the PID controller.
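
The core LPV idea is to linearize at several operating points, design a controller for each, and blend between them. Here is a hedged Python sketch on a toy one-dimensional magnetic-attraction model: an LQR gain is computed at each equilibrium distance and interpolated at run time. The model, constants, and LQR design are illustrative assumptions, not the paper’s controller.

    # Gain-scheduling sketch in the spirit of LPV control: linearize a toy
    # 1-D magnetic-attraction model at several source-endoscope distances,
    # solve an LQR problem at each, and interpolate gains by operating point.
    import numpy as np
    from scipy.linalg import solve_continuous_are

    K_MAG, MASS = 1e-3, 0.01  # toy magnetic constant and endoscope mass

    def linearized_AB(d_eq):
        """Linearize m*x'' = u - K_MAG/x^2 about the distance x = d_eq."""
        A = np.array([[0.0, 1.0],
                      [2.0 * K_MAG / (MASS * d_eq**3), 0.0]])
        B = np.array([[0.0], [1.0 / MASS]])
        return A, B

    Q, R = np.diag([100.0, 1.0]), np.array([[0.1]])
    equilibria = np.linspace(0.05, 0.20, 4)  # distances in meters
    gains = []
    for d in equilibria:
        A, B = linearized_AB(d)
        P = solve_continuous_are(A, B, Q, R)
        gains.append(np.linalg.solve(R, B.T @ P))  # K = R^-1 B^T P

    def scheduled_gain(d):
        """Interpolate the precomputed LQR gains at the current distance."""
        table = np.array(gains)[:, 0, :]  # shape (n_equilibria, 2)
        return np.array([np.interp(d, equilibria, table[:, i]) for i in range(2)])

    print(scheduled_gain(0.12))  # state-feedback gain between equilibria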

Specifying leg placement is a key element of legged robot control; however, current methods for specifying individual leg motions through human-robot interfaces require mental concentration and the muscles of both arms. In this paper, a new control interface is discussed for specifying leg placement for a hexapod robot using finger motions. Two mapping methods are proposed and tested with lab staff: Joint Angle Mapping (JAM) and Tip Position Mapping (TPM). The TPM method was shown to be more efficient. A manually controlled gait based on TPM was then compared with a fixed gait and a camera-based autonomous gait in a Webots simulation to test obstacle-avoidance performance on 2D terrain. The number of contacts (NOC) for each gait was recorded during the tests. The results show that both the camera-based autonomous gait and the TPM are effective methods for adjusting step size to avoid obstacles. In high-obstacle-density environments, TPM reduces the number of contacts to 25% of that of the fixed gaits, which is even better than some of the autonomous gaits with longer step sizes. This shows that TPM has potential in environments and situations where autonomous footfall planning fails or is unavailable. In future work, this approach could be improved by combining it with haptic feedback, additional degrees of freedom, and artificial intelligence.
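
To show what Tip Position Mapping might look like in practice, here is a minimal sketch: a measured fingertip position is scaled into the leg’s workspace and converted to hip and knee angles with standard two-link inverse kinematics. The link lengths, scale factor, and frames are illustrative assumptions, not the paper’s parameters.

    # Sketch of a Tip Position Mapping (TPM) style interface: fingertip
    # position -> scaled foot-tip target -> two-link leg inverse kinematics.
    import numpy as np

    L1, L2 = 0.06, 0.09  # hypothetical leg segment lengths (meters)
    SCALE = 1.5          # finger workspace -> leg workspace

    def tip_position_mapping(finger_xy):
        """Map a fingertip (x, y) to hexapod leg joint angles (hip, knee)."""
        x, y = SCALE * np.asarray(finger_xy)
        r2 = x**2 + y**2
        # Law of cosines for the knee; clipping guards against unreachable targets.
        c_knee = np.clip((r2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
        knee = np.arccos(c_knee)
        hip = np.arctan2(y, x) - np.arctan2(L2 * np.sin(knee),
                                            L1 + L2 * np.cos(knee))
        return hip, knee

    print(tip_position_mapping((0.05, -0.04)))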
