Feed aggregator



For the past month, the Cumbre Vieja volcano on the Spanish island of La Palma has been erupting, necessitating the evacuation of 7,000 people as lava flows towards the sea and destroys everything in its path. Sadly, many pets have been left behind, trapped in walled-off yards that are now covered in ash without access to food or water. We know about these animals because drones have been used to monitor the eruption, providing video (sometimes several times per day) of the situation.

In areas that are too dangerous to send humans, drones have been used to drop food and water to some of these animals, but that can only keep them alive for so long. Yesterday, a drone company called Aerocamaras received permission to attempt a rescue, using a large drone equipped with a net to, they hope, airlift a group of starving dogs to safety.

This video taken by a drone just over a week ago shows the dogs on La Palma:

What the previous video doesn't show is a wider view of the eruption. Here's some incredible drone footage with an alarmingly close look at the lava, along with a view back through the town of Todoque, or what's left of it:

Drone companies have been doing their best to get food and water to the stranded animals. A company called TecnoFly has been using a DJI Matrice 600 with a hook system to carry buckets of food and water to very, very grateful dogs:

Drones are the best option here because the dogs are completely cut off by lava, and helicopters cannot fly in the area because of the risk of volcanic gas and ash. In Spain, it's illegal to transport live animals by drone, so special permits were necessary for Aerocamaras to even try this. The good news is that those permits have been granted, and Aerocamaras is currently testing the drone and net system at the launch site.

It looks like the drone that Aerocamaras will be using is a DJI Agras T20, which is designed for agricultural spraying. It's huge, as drones go, with a maximum takeoff weight of 47.5 kg and a payload of 23 kg. For the rescue, the drone will carry a net: the idea is to lower the net flat to the ground while the drone hovers above, convince one of the dogs to walk onto it, and then fly the drone upwards, closing the net around the dog, and carry it to safety.

Photo: Leales.org

The closest that Aerocamaras can get to the dogs is 450 meters (there's flowing lava in between the dogs and safety), which will give the drone about four minutes of hover time during which a single dog has to somehow be lured into the net. It should help that the dogs are already familiar with drones and have been associating them with food, but the drone can't lift two dogs at once, so the key is to get them just interested enough to enable a rescue of one at a time. And if that doesn't work, it may be possible to give the dogs additional food and perhaps some kind of shelter, although from the sound of things, if the dogs aren't somehow rescued within the next few days, they are unlikely to survive. If Aerocamaras' testing goes well, a rescue attempt could happen as soon as tomorrow.

This rescue has been coordinated by Leales.org, a Spanish animal association, which has also been doing its best to rescue cats and other animals. Aerocamaras is volunteering its services, but if you'd like to help with the veterinary costs of some of the animals being rescued on La Palma, Leales has a GoFundMe page here. For updates on the rescue, follow Aerocamaras and Leales on Twitter—and we're hoping to be able to post an update on Friday, if not before.

Human-Robot Collaboration (HRC) has the potential for a paradigm shift in industrial production by complementing the strengths of industrial robots with those of human staff. However, exploring these scenarios in physical experimental settings is costly and difficult, e.g., due to safety considerations. We present a virtual reality application that allows the exploration of HRC work arrangements with autonomous robots and their effect on human behavior. Prior experimental studies conducted using this application demonstrated the benefits of augmenting an autonomous robot arm with communication channels on subjective aspects such as perceived stress. Motivated by current safety regulations that hinder HRC from reaching its full potential, we explored the effects of the augmented communication on objective measures (collision rate and produced goods) within a virtual sandbox application. Explored through a safe and replicable setup, the goal was to determine whether communication channels that provide guidance and explanation on the robot can help mitigate safety hazards without interfering with the production effectiveness of both parties. This is based on the theoretical foundation that communication channels enable the robot to explain its actions, helping the human collaboration partner to better comprehend the current state of the shared task and react accordingly. Focused on the optimization of production output, reduced collision rate, and increased perception of safety, a between-subjects experimental study with two conditions (augmented communication vs. non-augmented) was conducted. The results revealed a statistically significant difference in terms of production quantity output and collisions with the robot, favoring the augmented condition. Additional statistically significant differences regarding self-reported perceived safety were found. The results of this study provide an entry point for future research regarding the augmentation of industrial robots with communication channels for safety purposes.
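For readers who want to see the shape of such a between-subjects analysis, here is a minimal, hedged sketch comparing two independent groups (augmented vs. non-augmented communication). The numbers below are invented placeholders, not the study's data, and Welch's t-test is only one common choice; the paper may well use different tests.

```python
# Hedged sketch of a between-subjects comparison of two independent groups
# (e.g., augmented vs. non-augmented communication). The numbers are invented
# placeholders, not the study's data.
from scipy import stats

produced_augmented = [14, 16, 15, 18, 17, 16, 19, 15]
produced_baseline = [12, 13, 11, 14, 12, 15, 13, 12]

# Welch's t-test compares the group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(produced_augmented, produced_baseline, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```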

Human-robot collaboration is gaining more and more interest in industrial settings, as collaborative robots are considered safe and robot actions can be programmed easily by, for example, physical interaction. Despite this, robot programming mostly focuses on automated robot motions, and interactive tasks or coordination between human and robot still require additional development. For example, the selection of which tasks or actions a robot should do next might not be known beforehand or might change at the last moment. Within a human-robot collaborative setting, the coordination of complex shared tasks is therefore better suited to a human, with the robot acting upon requested commands. In this work we explore the utilization of commands to coordinate a shared task between a human and a robot in a shared workspace. Based on a known set of higher-level actions (e.g., pick-and-placement, hand-over, kitting) and the commands that trigger them, both a speech-based and a graphical command-based interface are developed to investigate their use. While speech-based interaction might be more intuitive for coordination, in industrial settings background sounds and noise might hinder its capabilities. The graphical command-based interface circumvents this, while still demonstrating the capabilities of coordination. The developed architecture follows a knowledge-based approach, where the actions available to the robot are checked at runtime to determine whether they suit the task and the current state of the world. Experimental results on industrially relevant assembly, kitting and hand-over tasks in a laboratory setting demonstrate that graphical command-based and speech-based coordination with high-level commands is effective for collaboration between a human and a robot.
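To make the runtime check concrete, here is a minimal sketch (not the authors' architecture) of command-based coordination in which a high-level command is only dispatched to a robot action if its preconditions hold in a simple world-state dictionary. The action names, preconditions, and state keys are illustrative assumptions.

```python
# Minimal sketch of command-based coordination with runtime precondition
# checks, in the spirit of the knowledge-based approach described above.
# The action set, preconditions, and world-state keys are assumptions.
from dataclasses import dataclass, field
from typing import Callable, Dict

WorldState = Dict[str, bool]

@dataclass
class RobotAction:
    name: str
    precondition: Callable[[WorldState], bool]
    effects: Dict[str, bool] = field(default_factory=dict)

    def execute(self, world: WorldState) -> None:
        print(f"executing {self.name}")
        world.update(self.effects)

ACTIONS = {
    "pick_and_place": RobotAction(
        "pick_and_place",
        precondition=lambda w: w["part_available"] and w["gripper_empty"],
        effects={"gripper_empty": False},
    ),
    "hand_over": RobotAction(
        "hand_over",
        precondition=lambda w: not w["gripper_empty"] and w["human_ready"],
        effects={"gripper_empty": True},
    ),
}

def dispatch(command: str, world: WorldState) -> bool:
    """Run the requested action only if it exists and suits the current world state."""
    action = ACTIONS.get(command)
    if action is None or not action.precondition(world):
        print(f"rejected: {command}")
        return False
    action.execute(world)
    return True

world = {"part_available": True, "gripper_empty": True, "human_ready": True}
dispatch("pick_and_place", world)   # accepted
dispatch("pick_and_place", world)   # rejected: gripper is no longer empty
dispatch("hand_over", world)        # accepted
```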



Last week, the Association of the United States Army (AUSA) conference took place in Washington, D.C. One of the exhibitors was Ghost Robotics—we've previously covered their nimble and dynamic quadrupedal robots, which originated at the University of Pennsylvania with Minitaur in 2016. Since then, Ghost has developed larger, ruggedized "quadrupedal unmanned ground vehicles" (Q-UGVs) suitable for a variety of applications, one of which is military.

At AUSA, Ghost had a variety of its Vision 60 robots on display with a selection of defense-oriented payloads, including the system above, which is a remotely controlled rifle customized for the robot by a company called SWORD International.

The image of a futuristic-looking, potentially lethal weapon on a quadrupedal robot has generated some very strong reactions (the majority of them negative) in the media as well as on social media over the past few days. We recently spoke with Ghost Robotics' CEO Jiren Parikh to understand exactly what was being shown at AUSA, and to get his perspective on providing the military with armed autonomous robots.

IEEE Spectrum: Can you describe the level of autonomy that your robot has, as well as the level of autonomy that the payload has?

Jiren Parikh: It's critical to separate the two. The SPUR, or Special Purpose Unmanned Rifle from SWORD Defense, has no autonomy and no AI. It's triggered from a distance, and that has to be done by a human. There is always an operator in the loop. SWORD's customers include special operations teams worldwide, and when SWORD contacted us through a former special ops team member, the idea was to create a walking tripod proof of concept. They wanted a way of keeping the human who would otherwise have to pull the trigger at a distance from the weapon, to minimize the danger that they'd be in. We thought it was a great idea.

Our robot is also not autonomous. It's remotely operated with an operator in the loop. It does have perception for object avoidance for the environment because we need it to be able to walk around things and remain stable on unstructured terrain, and the operator has the ability to set GPS waypoints so it travels to a specific location. There's no targeting or weapons-related AI, and we have no intention of doing that. We support SWORD Defense like we do any other military, public safety or enterprise payload partner, and don't have any intention of selling weapons payloads.

Who is currently using your robots?

We have more than 20 worldwide government customers from various agencies, US and allied, who abide by very strict rules. You can see it and feel it when you talk to any of these agencies; they are not pro-autonomous weapons. I think they also recognize that they have to be careful about what they introduce. The vast majority of our customers are using them or developing applications for CBRNE [Chemical, Biological, Radiological, Nuclear, and Explosives detection], reconnaissance, target acquisition, confined space and subterranean inspection, mapping, EOD safety, wireless mesh networks, perimeter security and other applications where they want a better option than tracked and wheeled robots that are less agile and capable.

We also have agencies that do work where we are not privy to details. We sell them our robot and they can use it with any software, any radio, and any payload, and the folks that are using these systems, they're probably special teams, WMD and CBRN units and other special units doing confidential or classified operations in remote locations. We can only assume that a lot of our customers are doing really difficult, dangerous work. And remember that these are men and women who can't talk about what they do, with families who are under constant stress. So all we're trying to do is allow them to use our robot in military and other government agency applications to keep our people from getting hurt. That's what we promote. And if it's a weapon that they need to put on our robot to do their job, we're happy for them to do that. No different than any other dual use technology company that sells to defense or other government agencies.

How is what Ghost Robotics had on display at AUSA functionally different from other armed robotic platforms that have been around for well over a decade?

Decades ago, we had guided missiles, which are basically robots with weapons on them. People don't consider it a robot, but that's what it is. More recently, there have been drones and ground robots with weapons on them. But they didn't have legs, and they're not invoking this evolutionary memory of predators. And now add science fiction movies and social media to that, which we have no control over—the challenge for us is that legged robots are fascinating, and science fiction has made them scary. So I think we're going to have to socialize these kinds of legged systems over the next five to ten years in small steps, and hopefully people get used to them and understand the benefits for our soldiers. But we know it can be frightening. We also have families, and we think about these things as well.

“If our robot had tracks on it instead of legs, nobody would be paying attention.”
—Jiren Parikh

Are you concerned that showing legged robots with weapons will further amplify this perception problem, and make people less likely to accept them?

In the short term, weeks or months, yes. I think if you're talking about a year or two, no. We will get used to these robots just like armed drones, they just have to be socialized. If our robot had tracks on it instead of legs, nobody would be paying attention. We just have to get used to robots with legs.

More broadly, how does Ghost Robotics think armed robots should or should not be used?

I think there is a critical place for these robots in the military. Our military is here to protect us, and there are servicemen and women who are putting their lives on the line everyday to protect the United States and allies. I do not want them to lack for our robot with whatever payload, including weapons systems, if they need it to do their job and keep us safe. And if we've saved one life because these people had our robot when they needed it, I think that's something to be proud of.

I'll tell you personally: until I joined Ghost Robotics, I was oblivious to the amount of stress and turmoil and pain our servicemen and women go through to protect us. Some of the special operations folks that we talk to, they can't disclose what they do, but you can feel it when they talk about their colleagues and comrades that they've lost. The amount of energy that's put into protecting us by these people that we don't even know is really amazing, and we take it for granted.

What about in the context of police rather than the military?

I don't see that happening. We've just started talking with law enforcement, but we haven't had any inquiries on weapons. It's been hazmat, CBRNE, recon of confined spaces and crime scenes or sending robots in to talk with people that are barricaded or involved in a hostage situation. I don't think you're going to see the police using weaponized robots. In other countries, it's certainly possible, but I believe that it won't happen here. We live in a country where our military is run by a very strict set of rules, and we have this political and civilian backstop on how engagements should be conducted with new technologies.

How do you feel about the push for regulation of lethal autonomous weapons?

We're all for regulation. We're all for it. This is something everybody should be for right now. What those regulations are, what you can or can't do and how AI is deployed, I think that's for politicians and the armed services to decide. The question is whether the rest of the world will abide by it, and so we have to be realistic and we have to be ready to support defending ourselves against rogue nations or terrorist organizations that feel differently. Sticking your head in the sand is not the solution.

Based on the response that you've experienced over the past several days, will you be doing anything differently going forward?

We're very committed to what we're doing, and our team here understands our mission. We're not going to be reactive. And we're going to stick by our commitment to our US and allied government customers. We're going to help them do whatever they need to do, with whatever payload they need, to do their job, and do it safely. We are very fortunate to live in a country where the use of military force is a last resort, and the use of new technologies and weapons takes years and involves considerable deliberation from the armed services with civilian oversight.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
SSRR 2021 – October 25-27, 2021 – New York, NY, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

This project investigates the interaction between robots and animals, in particular, the quadruped ANYmal and wild vervet monkeys. We will test whether robots can be tolerated but also socially accepted in a group of vervets. We will evaluate whether social bonds are created between them and whether vervets trust knowledge from robots.

[ RSL ]

At this year's ACM Symposium on User Interface Software and Technology (UIST), the Student Innovation Contest was based around Sony Toio robots. Here are some of the things that teams came up with:

[ UIST ]

Collecting samples from Mars and bringing them back to Earth will be a historic undertaking that started with the launch of NASA's Perseverance rover on July 30, 2020. Perseverance collected its first rock core samples in September 2021. The rover will leave them on Mars for a future mission to retrieve and return to Earth. NASA and the European Space Agency (ESA) are solidifying concepts for this proposed Mars Sample Return campaign. The current concept includes a lander, a fetch rover, an ascent vehicle to launch the sample container to Martian orbit, and a retrieval spacecraft with a payload for capturing and containing the samples and then sending them back to Earth to land in an unpopulated area.

[ JPL ]

FCSTAR is a minimally actuated flying climbing robot capable of crawling vertically. It is the latest in the family of the STAR robots. Designed and built at the Bio-Inspired and Medical Robotics Lab at the Ben Gurion University of the Negev by Nitzan Ben David and David Zarrouk.

[ BGU ]

Evidently the novelty of Spot has not quite worn off yet.

[ IRL ]

As much as I like Covariant, it seems weird to call a robot like this "Waldo" when the word waldo already has a specific meaning in robotics, thanks to the short story by Robert A. Heinlein.

Also, kinda looks like it failed that very first pick in the video...?

[ Covariant ]

Thanks, Alice!

Here is how I will be assembling the Digit that I'm sure Agility Robotics will be sending me any day now.

[ Agility Robotics ]

Robotis would like to remind you that ROS World is next week, and also that they make a lot of ROS-friendly robots!

[ ROS World ] via [ Robotis ]

Researchers at the Australian UTS School of Architecture have partnered with construction design firm BVN Architecture to develop a unique 3D printed air-diffusion system.

[ UTS ]

Team MARBLE, who took third at the DARPA SubT Challenge, has put together this video, which combines DARPA's videos with footage taken by the team to tell the whole story, with some behind-the-scenes stuff thrown in.

[ MARBLE ]

You probably don't need to watch all 10 minutes of the first public flight of Volocopter's cargo drone, but it's fun to see the propellers spin up for the takeoff.

[ Volocopter ]

Nothing new in this video about Boston Dynamics from CNBC, but it's always cool to see a little wander around their headquarters.

[ CNBC ]

Computing power doubles every two years, an observation known as Moore's Law. Prof Maarten Steinbuch, a high-tech systems scientist, entrepreneur and communicator, from Eindhoven University of Technology, discussed how this exponential rate of change enables accelerating developments in sensor technology, AI computing and automotive machines, to make products in modern factories that will soon be smart and self-learning.

[ ESA ]

On episode three of The Robot Brains Podcast, we have deep learning pioneer: Yann LeCun. Yann is a winner of the Turing Award (often called the Nobel Prize of Computer Science) who in 2013 was handpicked by Mark Zuckerberg to bring AI to Facebook. Yann also offers his predictions for the future of artificial general intelligence, talks about his life straddling the worlds of academia and business and explains why he likes to picture AI as a chocolate layer cake with a cherry on top.

[ Robot Brains ]

This week's CMU RI seminar is from Tom Howard at the University of Rochester, on "Enabling Grounded Language Communication for Human-Robot Teaming."

[ CMU RI ]

A pair of talks from the Maryland Robotics Center, including Maggie Wigness from ARL and Dieter Fox from UW and NVIDIA.

[ Maryland Robotics ]



The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion in one task and the extent to which the same gestures are perceived as social in another task. The results indicate that humans clearly rate differently according to the two different tasks. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
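As a rough illustration of how such a classifier comparison can be set up (a sketch on placeholder data, not the authors' pipeline), the same kinematic feature vectors can be fed to several scikit-learn classifiers and scored with cross-validation. The Locality-Sensitive Hashing Forest variant is omitted here because current scikit-learn releases no longer ship LSHForest.

```python
# Hedged sketch: compare classifiers on hand/arm gesture kinematics in the
# spirit of the study above. Features and labels are balanced placeholders;
# the LSH-Forest variant is omitted because scikit-learn no longer ships it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 24))   # placeholder kinematic feature vectors
y = np.tile(np.arange(30), 10)   # placeholder labels: 30 gestures, 10 samples each

classifiers = {
    "k-Nearest Neighbor": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Support Vector Machine": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")
```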



As quadrupedal robots learn to do more and more dynamic tasks, they're likely to spend more and more time not on their feet. Not falling over, necessarily (although that's inevitable of course, because they're legged robots after all)—but just being in flight in one way or another. The most risky of flight phases would be a fall from a substantial height, because it's almost certain to break your very expensive robot and any payload it might have.

Falls being bad is not a problem unique to robots, and it's not surprising that quadrupeds in nature have already solved it. Or at least, it's already been solved by cats, which are able to reliably land on their feet to mitigate fall damage. To teach quadrupedal robots this trick, roboticists from the University of Notre Dame have been teaching a Mini Cheetah quadruped some mid-air self-righting skills, with the aid of boots full of nickels.

If this research looks a little bit familiar, it's because we recently covered some work from ETH Zurich that looked at using legs to reorient their SpaceBok quadruped in microgravity. This work with Mini Cheetah has to contend with Earth gravity, however, which puts some fairly severe time constraints on the whole reorientation thing with the penalty for failure being a smashed-up robot rather than just a weird bounce. When we asked the ETH Zurich researchers what might improve the performance of SpaceBok, they told us that "heavy shoes would definitely help," and it looks like the folks from Notre Dame had the same idea, which they were able to implement on Mini Cheetah.

Mini Cheetah's legs (like the legs of many robots) were specifically designed to be lightweight because they have to move quickly, and you want to minimize the mass that moves back and forth with every step to make the robot as efficient as possible. But for a robot to reorient itself in mid air, it's got to start swinging as much mass around as it can. Each of Mini Cheetah's legs has been modified with 3D printed boots, packed with two rolls of American nickels each, adding about 500 g to each foot—enough to move the robot around like it needs to. Nickel boots are important because the only way that Mini Cheetah has of changing its orientation while falling is by flailing its legs around. When its legs move one way, its body will move the other way, and the heavier the legs are, the more force they can exert on the body.
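The underlying physics is conservation of angular momentum: in free fall, swinging the legs one way rotates the body the other way in proportion to the ratio of the legs' moment of inertia to the body's, so heavier feet buy more body rotation per leg swing. Here's a back-of-the-envelope sketch; the inertia values are invented placeholders, not measurements of Mini Cheetah.

```python
# Back-of-the-envelope: in free fall the robot's total angular momentum stays
# fixed, so I_body * w_body + I_legs * w_legs = 0, which gives
# w_body = -(I_legs / I_body) * w_legs. Heavier boots raise I_legs, so the
# same leg swing produces more body rotation. All values are placeholders.
I_BODY = 0.25          # kg*m^2, assumed body pitch inertia
I_LEGS_BARE = 0.010    # kg*m^2, assumed inertia of the unweighted legs
I_LEGS_BOOTED = 0.030  # kg*m^2, assumed inertia with ~0.5 kg of nickels per foot

def body_rotation_deg(leg_swing_deg, i_legs, i_body=I_BODY):
    """Body rotation (deg) produced by a given total leg swing (deg)."""
    return -(i_legs / i_body) * leg_swing_deg

for label, i_legs in [("bare feet", I_LEGS_BARE), ("nickel boots", I_LEGS_BOOTED)]:
    print(f"{label}: {body_rotation_deg(360.0, i_legs):.0f} deg of body rotation")
```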

As with everything in robotics, getting the hardware to do what you want it to do is only half the battle. Or sometimes much, much less than half the battle. The challenge with Mini Cheetah flipping itself over is that it has a very, very small amount of time to figure out how to do it properly. It has to detect that it's falling, figure out what orientation it's in, make a plan for how to get itself feet down, and then execute on that plan successfully. The robot doesn't have enough time to put a whole heck of a lot of thought into things as it starts to plummet, so the technique that the researchers came up with to enable it to do what it needs to do is called a "reflex" approach. Vince Kurtz, first author on the paper describing this technique, explains how it works:

While trajectory optimization algorithms keep getting better and better, they still aren't quite fast enough to find a solution from scratch in the fraction of a second between when the robot detects a fall and when it needs to start a recovery motion. We got around this by dropping the robot a bunch of times in simulation, where we can take as much time as we need to find a solution, and training a neural network to imitate the trajectory optimizer. The trained neural network maps initial orientations to trajectories that land the robot on its feet. We call this the "reflex" approach, since the neural network has basically learned an automatic response that can be executed when the robot detects that it's falling.
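To make the "reflex" idea concrete, here is a minimal sketch (not the authors' code) of the offline/online split Kurtz describes: trajectories are precomputed by an optimizer in simulation, a small network is trained to map an initial orientation to a landing trajectory, and at runtime the network is queried the instant a fall is detected. The network size, input and trajectory dimensions, and function names are assumptions for illustration.

```python
# Hedged sketch of the "reflex" approach: train a small network offline to
# imitate a trajectory optimizer, then query it in a fraction of a second
# when a fall is detected. Dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

N_KNOTS = 20   # assumed number of knot points in each recovery trajectory
N_JOINTS = 8   # assumed planar model: two actuated joints per leg
ORI_DIM = 2    # assumed orientation input, e.g. (pitch, pitch rate)

class ReflexNet(nn.Module):
    """Maps an initial body orientation to a full recovery trajectory."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ORI_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_KNOTS * N_JOINTS),
        )

    def forward(self, orientation):
        return self.net(orientation).view(-1, N_KNOTS, N_JOINTS)

def train_reflex(dataset, epochs=100):
    """dataset: (orientation, optimal_trajectory) tensor pairs produced by
    slow offline trajectory optimization runs in simulation."""
    model = ReflexNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for orientation, trajectory in dataset:
            opt.zero_grad()
            loss = loss_fn(model(orientation), trajectory)
            loss.backward()
            opt.step()
    return model

# At runtime, a single forward pass is fast enough to run the moment the fall
# detector triggers, unlike the optimizer itself:
# recovery = model(torch.tensor([[pitch, pitch_rate]]))
```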

This technique works quite well, but there are a few constraints, most of which wouldn't seem so bad if we weren't comparing quadrupedal robots to quadrupedal animals. Cats are just, like, super competent at what they do, says Kurtz, and being able to mimic their ability to rapidly twist themselves into a favorable landing configuration from any starting orientation is just going to be really hard for a robot to pull off:

The more I do robotics research the more I appreciate how amazing nature is, and this project is a great example of that. Cats can do a full 180° rotation when dropped from about shoulder height. Our robot ran up against torque limits when rotating 90° from about 10ft off the ground. Using the full 3D motion would be a big improvement (rotating sideways should be easier because the robot's moment of inertia is smaller in that direction), though I'd be surprised if that alone got us to cat-level performance.

The biggest challenge that I see in going from 2D to 3D is self-collisions. Keeping the robot from hitting itself seems like it should be simple, but self-collisions turn out to impose rather nasty non-convex constraints that make it numerically difficult (though not impossible) for trajectory optimization algorithms to find high-quality solutions.

Lastly, we asked Kurtz to talk a bit about whether it's worth exploring flexible actuated spines for quadrupedal robots. We know that such spines offer many advantages (a distant relative of Mini Cheetah had one, for example), but that they're also quite complex. So is it worth it?

This is an interesting question. Certainly in the case of the falling cat problem a flexible spine would help, both in terms of having a naturally flexible mass distribution and in terms of controller design, since we might be able to directly imitate the "bend-and-twist" motion of cats. Similarly, a flexible spine might help for tasks with large flight phases, like the jumping in space problems discussed in the ETH paper.

With that being said, mid-air reorientation is not the primary task of most quadruped robots, and it's not obvious to me that a flexible spine would help much for walking, running, or scrambling over uneven terrain. Also, existing hardware platforms with rigid backs like the Mini Cheetah are quite capable and I think we still haven't unlocked the full potential of these robots. Control algorithms are still the primary limiting factor for today's legged robots, and adding a flexible spine would probably make for even more difficult control problems.

Mini Cheetah, the Falling Cat: A Case Study in Machine Learning and Trajectory Optimization for Robot Acrobatics, by Vince Kurtz, He Li, Patrick M. Wensing, and Hai Lin from the University of Notre Dame, is available on arXiv.



In this study, we implemented a model with which a robot expresses such complex emotions as heartwarming (e.g., happy and sad) or horror (fear and surprise) through its touches, and we experimentally investigated the effectiveness of the modeled touch behaviors. Robots that can express emotions through touching behaviors increase their interaction capabilities with humans. Although past studies achieved ways to express emotions through a robot's touch, they focused on such basic emotions as happiness and sadness and downplayed complex emotions, proposing models that express these emotions by touch behaviors without evaluation. Therefore, we conducted an experiment to evaluate the model with participants. In the experiment, participants evaluated the emotions and empathy they perceived from a robot's touch while they watched a video stimulus with the robot. Our results showed that touches timed before the climax received higher evaluations than touches after it, for both the scary and heartwarming videos.

In this article, we report on research and creative practice that explores the aesthetic interplay between movement and sound for soft robotics. Our inquiry seeks to interrogate what sound designs might be aesthetically engaging and appropriate for soft robotic movement in a social human-robot interaction setting. We present the design of a soft sound-producing robot, SONŌ, made of pliable and expandable silicone and three sound designs made for this robot. The article comprises an articulation of the underlying design process and results from two empirical interaction experiments (N = 66, N = 60) conducted to evaluate the sound designs. The sound designs did not have statistically significant effects on people’s perception of the social attributes of two different soft robots. Qualitative results, however, indicate that people’s interpretations of the sound designs depend on robot type.

The authors evaluate the extent to which a user's impression of an AI agent can be improved by giving the agent the abilities of self-estimation, variable thinking time, and coordination of risk tendency. The authors modified the algorithm of an AI agent in the cooperative game Hanabi to have all of these traits, and investigated the change in the user's impression when playing with the user. The authors used a self-estimation task to evaluate the effect that the ability to read a user's intention had on the impression. They also show that an agent's thinking time influences the impression it makes, and they investigated the relationship between the concordance of the risk-taking tendencies of players and agents, the player's impression of agents, and the game experience. The results of the self-estimation task experiment showed that the more accurate the agent's self-estimation, the more likely it is that the partner will perceive humanity, affinity, intelligence, and communication skills in the agent. The authors also found that an agent that changes the length of its thinking time according to the priority of the action gives the impression of being smarter than an agent with a normal thinking time (when the player notices the difference) or an agent that randomly changes its thinking time. The experiment regarding concordance of risk-taking tendencies shows that it influences the player's impression of agents. These results suggest that game agent designers can improve the player's disposition toward an agent and the game experience by adjusting the agent's self-estimation level, thinking time, and risk-taking tendency according to the player's personality and inner state during the game.

The sense of touch is a key aspect of the human capability to robustly grasp and manipulate a wide variety of objects. Despite many years of development, there is still no preferred solution for tactile sensing in robotic hands: multiple technologies are available, each one with different benefits depending on the application. This study compares the performance of different tactile sensors mounted on the variable stiffness gripper CLASH 2F: three commercial sensors (single-taxel sensors from Tacterion and Kinfinity, and the Robotic Finger Sensor v2 from SparkFun), a self-built resistive 3 × 3 sensor array, and two self-built magnetic 3-DoF touch sensors, one with four taxels and one with one taxel. We verify the minimal force detectable by the sensors, test whether slip detection is possible with the available taxels on each sensor, and use the sensors for edge detection to obtain the orientation of the grasped object. To evaluate the benefits obtained with each technology and to assess which sensor best fits the control loop in a variable stiffness hand, we use the CLASH gripper to grasp fruits and vegetables following a published benchmark for pick and place operations. To facilitate the repetition of tests, the CLASH hand is endowed with tactile buttons that ease human–robot interactions, including execution of a predefined program, resetting errors, or commanding the full robot to move in gravity compensation mode.
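As one concrete illustration of the edge-detection use case mentioned above (a minimal sketch on made-up pressure values, not the CLASH software), the orientation of an edge pressed into a small taxel array can be estimated from the second-order moments of the pressure map:

```python
# Hedged sketch: estimate the orientation of a grasped edge from a 3x3
# pressure map using image moments. The pressure values are made up; the
# actual CLASH 2F processing pipeline is not described in this article.
import numpy as np

def edge_orientation_deg(pressure):
    """Principal-axis angle (degrees) of the pressure distribution."""
    p = np.asarray(pressure, dtype=float)
    total = p.sum()
    ys, xs = np.mgrid[0:p.shape[0], 0:p.shape[1]]
    cx, cy = (xs * p).sum() / total, (ys * p).sum() / total
    mu20 = ((xs - cx) ** 2 * p).sum()
    mu02 = ((ys - cy) ** 2 * p).sum()
    mu11 = ((xs - cx) * (ys - cy) * p).sum()
    return np.degrees(0.5 * np.arctan2(2 * mu11, mu20 - mu02))

# An edge pressed diagonally across the array should come out near 45 degrees:
taxels = [[0.9, 0.2, 0.0],
          [0.2, 0.9, 0.2],
          [0.0, 0.2, 0.9]]
print(round(edge_orientation_deg(taxels), 1), "degrees")
```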



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

I love watching Dusty Robotics' field printer at work. I don't know whether it's intentional or not, but it's got so much personality somehow.

[ Dusty Robotics ]

A busy commuter is ready to walk out the door, only to realize they've misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys. Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn't quite fast enough yet for these uses.

[ MIT ]

CSIRO Data61 had, I'm pretty sure, the most massive robots in the entire SubT competition. And this is how you solve doors with a massive robot.

[ CSIRO ]

You know how robots are supposed to be doing things that are too dangerous for humans? I think sailing through a hurricane qualifies.

This second video, also captured by this poor Saildrone, is if anything even worse:

[ Saildrone ] via [ NOAA ]

Soft Robotics can handle my taquitos anytime.

[ Soft Robotics ]

This is brilliant, if likely unaffordable for most people.

[ Eric Paulos ]

I do not understand this robot at all, nor can I tell whether it's friendly or potentially dangerous or both.

[ Keunwook Kim ]

This sort of thing really shouldn't have to exist for social home robots, but I'm glad it does, I guess?

It costs $100, though.

[ Digital Dream Labs ]

If you watch this video closely, you'll see that whenever a simulated ANYmal falls over, it vanishes from existence. This is a new technique for teaching robots to walk by threatening them with extinction if they fail.

But seriously, how do I get this as a screensaver?

[ RSL ]

Zimbabwe Flying Labs' Tawanda Chihambakwe shares how Zimbabwe Flying Labs got their start, using drones for STEM programs, and how drones impact conservation and agriculture.

[ Zimbabwe Flying Labs ]

DARPA thoughtfully provides a video tour of the location of every artifact on the SubT Final prize course. Some of them are hidden extraordinarily well.

Also posted by DARPA this week are full prize round run videos for every team; here are the top three: MARBLE, CSIRO Data61, and CERBERUS.

[ DARPA SubT ]

An ICRA 2021 plenary talk from Fumihito Arai at the University of Tokyo, on "Robotics and Automation in Micro & Nano-Scales."

[ ICRA 2021 ]

This week's UPenn GRASP Lab Seminar comes from Rahul Mangharam, on "What can we learn from Autonomous Racing?"

[ UPenn ]



It seems inevitable that sooner or later, the performance of autonomous drones will surpass the performance of even the best human pilots. Usually things in robotics that seem inevitable happen later as opposed to sooner, but drone technology seems to be the exception to this. We've seen an astonishing amount of progress over the past few years, even to the extent of sophisticated autonomy making it into the hands of consumers at an affordable price.

The cutting edge of drone research right now is putting drones with relatively simple onboard sensing and computing in situations that require fast and highly aggressive maneuvers. In a paper published yesterday in Science Robotics, roboticists from Davide Scaramuzza's Robotics and Perception Group at the University of Zurich along with partners at Intel demonstrate a small, self-contained, fully autonomous drone that can aggressively fly through complex environments at speeds of up to 40 kph.

The trick here, to the extent that there's a trick, is that the drone performs a direct mapping of sensor input (from an Intel RealSense 435 stereo depth camera) to collision-free trajectories. Conventional obstacle avoidance involves first collecting sensor data; making a map based on that sensor data; and finally making a plan based on that map. This approach works perfectly fine as long as you're not concerned with getting all of that done quickly, but for a drone with limited onboard resources moving at high speed, it just takes too long. UZH's approach is instead to go straight from sensor input to trajectory output, which is much faster and allows the speed of the drone to increase substantially.
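
To make that distinction concrete, here is a rough sketch of what a direct sensor-to-trajectory network can look like. To be clear, this is not the UZH/Intel architecture: the layer sizes, the ten-dimensional state input, the five-waypoint trajectory parameterization, and the three candidate outputs are all illustrative assumptions.

```python
# Illustrative sketch only: map a depth image plus drone state straight to
# candidate trajectories and collision scores, with no intermediate map.
import torch
import torch.nn as nn

class SensorToTrajectoryNet(nn.Module):
    def __init__(self, num_candidates: int = 3, num_waypoints: int = 5):
        super().__init__()
        self.num_candidates = num_candidates
        self.num_waypoints = num_waypoints
        # Compress the depth image into a compact feature vector.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Encode the drone state (velocity, attitude, goal direction); 10-D assumed.
        self.state_encoder = nn.Sequential(nn.Linear(10, 64), nn.ReLU())
        # For each candidate: waypoint offsets (x, y, z) plus one collision score.
        out_dim = num_candidates * (num_waypoints * 3 + 1)
        self.head = nn.Sequential(nn.Linear(64 + 64, 128), nn.ReLU(),
                                  nn.Linear(128, out_dim))

    def forward(self, depth, state):
        features = torch.cat([self.depth_encoder(depth),
                              self.state_encoder(state)], dim=-1)
        out = self.head(features).reshape(-1, self.num_candidates,
                                          self.num_waypoints * 3 + 1)
        trajectories = out[..., :-1].reshape(-1, self.num_candidates,
                                             self.num_waypoints, 3)
        collision_scores = out[..., -1]  # lower = safer, by assumed convention
        return trajectories, collision_scores

# One forward pass per frame: no mapping step, no separate planner.
net = SensorToTrajectoryNet()
depth = torch.rand(1, 1, 96, 160)  # stand-in for a RealSense depth frame
state = torch.rand(1, 10)
trajectories, scores = net(depth, state)
best = trajectories[0, scores[0].argmin()]  # fly the lowest-cost candidate
```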

The convolutional network that performs this sensor-to-trajectory mapping was trained entirely in simulation, which is cheaper and easier but (I would have to guess) less fun than letting actual drones hammer themselves against obstacles over and over until they figure things out. A simulated "expert" drone pilot that has access to a 3D point cloud, perfect state estimation, and computation that's not constrained by real-time requirements trains its own end-to-end policy, which is of course not achievable in real life. But then, the simulated system that will be operating under real-life constraints just learns in simulation to match the expert as closely as possible, which is how you get that expert-level performance in a way that can be taken out of simulation and transferred to a real drone without any adaptation or fine-tuning.
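
Stripped to its essentials, that training setup is imitation learning against a privileged teacher. Here's a compact sketch of the loop; the expert below is a placeholder function and the observation sizes are made up, whereas in the actual work the expert is a planner with access to the full point cloud and perfect state.

```python
# Privileged-expert imitation, reduced to a toy loop. The "expert" is a stub
# standing in for a planner that sees the full 3D point cloud and perfect state;
# the "student" sees only (simulated) onboard observations.
import torch
import torch.nn as nn

def privileged_expert(point_cloud, perfect_state):
    # Placeholder: would return a collision-free trajectory (5 waypoints x 3D)
    # computed with unconstrained compute and perfect scene knowledge.
    return torch.zeros(point_cloud.shape[0], 5, 3)

student = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 5 * 3))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(1000):
    onboard_obs = torch.randn(32, 64)       # stand-in for encoded depth + state
    point_cloud = torch.randn(32, 2048, 3)  # privileged: full 3D point cloud
    perfect_state = torch.randn(32, 12)     # privileged: exact drone state

    target = privileged_expert(point_cloud, perfect_state)   # (32, 5, 3)
    prediction = student(onboard_obs).reshape(-1, 5, 3)

    # Match the expert's trajectories as closely as possible.
    loss = nn.functional.mse_loss(prediction, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```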

The other big part of this is making that sim-to-real transition, which can be problematic because simulation doesn't always do a great job of simulating everything that happens in the world that can screw with a robot. But this method turns out to be very robust against motion blur, sensor noise, and other perception artifacts. The drone has successfully navigated through real world environments including snowy terrains, derailed trains, ruins, thick vegetation, and collapsed buildings.
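
One common way to earn that kind of robustness, and I'm guessing at the general recipe here rather than reporting the paper's exact pipeline, is to corrupt the simulated depth images during training with the same artifacts a real stereo camera produces:

```python
# Illustrative sensor-artifact augmentation for simulated depth frames. The
# noise model and magnitudes here are assumptions, not the paper's values.
import numpy as np

def corrupt_depth(depth_m: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Add stereo-like noise, dropout, and crude motion blur to a depth image."""
    out = depth_m.copy()
    # Stereo depth error grows roughly quadratically with distance.
    out += rng.normal(0.0, 0.005, size=out.shape) * out ** 2
    # Random dropout pixels, mimicking failed stereo matches.
    out[rng.random(out.shape) < 0.02] = 0.0
    # Very crude motion blur: average with a horizontally shifted copy.
    out = 0.5 * (out + np.roll(out, shift=3, axis=1))
    return out

rng = np.random.default_rng(0)
clean = rng.uniform(0.5, 10.0, size=(96, 160))  # stand-in simulated depth frame
noisy = corrupt_depth(clean, rng)
```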

"While humans require years to train, the AI, leveraging high-performance simulators, can reach comparable navigation abilities much faster, basically overnight." -Antonio Loquercio, UZH

This is not to say that the performance here is flawless—the system still has trouble with very low illumination conditions (because the cameras simply can't see), as well as similar vision challenges like dust, fog, glare, and transparent or reflective surfaces. The training also didn't include dynamic obstacles, although the researchers tell us that moving things shouldn't be a problem even now as long as their speed relative to the drone is negligible. Many of these problems could potentially be mitigated by using event cameras rather than traditional cameras, since faster sensors, especially ones tuned to detect motion, would be ideal for high speed drones.

The researchers tell us that their system does not (yet) surpass the performance of expert humans in these challenging environments:

Analyzing their performance indicates that humans have a very rich and detailed understanding of their surroundings and are capable of planning and executing plans that span far in the future (our approach plans only one second into the future). Both are capabilities that today's autonomous systems still lack. We see our work as a stepping stone towards faster autonomous flight that is enabled by directly predicting collision-free trajectories from high-dimensional (noisy) sensory input.

This is one of the things that is likely coming next, though—giving the drone the ability to learn and improve from real-world experience. Coupled with more capable sensors and ever-increasing computing power, pushing that flight envelope past 40 kph in complex environments seems like it's not just possible, but inevitable.



In human-robot interactions, people tend to attribute mental states such as intentions or desires to robots in order to make sense of their behaviour. This cognitive strategy is termed the “intentional stance”. Adopting the intentional stance influences how one considers, engages with, and behaves towards robots. However, people differ in how likely they are to adopt the intentional stance towards robots, so it seems crucial to assess these interindividual differences. In two studies, we developed and validated the Intentional Stance Task (IST), a task aimed at evaluating to what extent people adopt the intentional stance towards robot actions. The IST probes participants' stance by asking them to judge the plausibility of a mentalistic versus a mechanistic description of a robot's behaviour, depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure for the IST. We therefore conclude by proposing the IST as a proxy for assessing the degree of adoption of the intentional stance towards robots.



Back in February of 2019, we wrote about a sort of humanoid robot thing (?) under development at Caltech, called Leonardo. LEO combines lightweight bipedal legs with torso-mounted thrusters powerful enough to lift the entire robot off the ground, which can handily take care of on-ground dynamic balancing while also enabling some slick aerial maneuvers.

In a paper published today in Science Robotics, the Caltech researchers get us caught up on what they've been doing with LEO for the past several years, and it can now skateboard, slackline, and make dainty airborne hops with exceptionally elegant landings.

Those heels! Seems like a real sponsorship opportunity, right?

The version of LEO you see here is significantly different from the version we first met two years ago. Most importantly, while "Leonardo" used to stand for "LEg ON Aerial Robotic DrOne," it now stands for "LEgs ONboARD drOne," which may be the first even moderately successful re-backronym I've ever seen. Otherwise, the robot has been completely redesigned, with the version you see here sharing zero parts in hardware or software with the 2019 version. We're told that the old robot, and I'm quoting from the researchers here, "unfortunately never worked," in the sense that it was much more limited than the new one—the old design had promise, but it couldn't really walk and the thrusters were only useful for jumping augmentation as opposed to sustained flight.

To enable the new LEO to fly, it now has much lighter weight legs driven by lightweight servo motors. The thrusters have been changed from two coaxial propellers to four tilted propellers, enabling attitude control in all directions. And everything is now onboard, including computers, batteries, and a new software stack. I particularly love how LEO lands into a walking gait so gently and elegantly. Professor Soon-Jo Chung from Caltech's Aerospace Robotics and Control Lab explains how they did it:

Creatures that have more than two locomotion modes must learn and master how to properly switch between them. Birds, for instance, undergo a complex yet intriguing behavior at the transitional interface of their two locomotion modes of flying and walking. Similarly, the Leonardo robot uses synchronized control of distributed propeller-based thrusters and leg joints to realize smooth transitions between its flying and walking modes. In particular, the LEO robot follows a smooth flying trajectory up to the landing point prior to landing. The forward landing velocity is then matched to the chosen walking speed, and the walking phase is triggered when one foot touches the ground. After the touchdown, the robot continues to walk by tracking its walking trajectory. A state machine is run on-board LEO to allow for these smooth transitions, which are detected using contact sensors embedded in the foot.
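
In code, that transition boils down to a small state machine. Here's a minimal sketch of the logic as described above (mine, not Caltech's), with the landing-point check and the foot contact sensor as inputs; velocity matching is assumed to be handled by the flight controller.

```python
# Minimal flight-to-walking state machine, following the description above.
# Velocity matching is assumed to be handled by the landing controller.
from enum import Enum, auto

class Mode(Enum):
    FLYING = auto()
    LANDING = auto()
    WALKING = auto()

class LocomotionStateMachine:
    def __init__(self):
        self.mode = Mode.FLYING

    def update(self, at_landing_point: bool, foot_contact: bool) -> Mode:
        if self.mode is Mode.FLYING and at_landing_point:
            # Follow the flight trajectory down to the landing point, with the
            # forward landing velocity matched to the chosen walking speed.
            self.mode = Mode.LANDING
        elif self.mode is Mode.LANDING and foot_contact:
            # The contact sensor in the foot triggers the walking phase.
            self.mode = Mode.WALKING
        return self.mode

sm = LocomotionStateMachine()
sm.update(at_landing_point=True, foot_contact=False)        # -> Mode.LANDING
print(sm.update(at_landing_point=True, foot_contact=True))  # -> Mode.WALKING
```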

It's very cool how Leo neatly solves some of the most difficult problems with bipedal robotics, including dynamic balancing and traversing large changes in height. And Leo can also do things that no biped (or human) can do, like actually fly short distances. As a multimodal hybrid of a bipedal robot and a drone, though, it's important to note that Leo's design includes some significant compromises as well. The robot has to be very lightweight in order to fly at all, which limits how effective it can be as a biped without using its thrusters for assistance. And because so much of its balancing requires active input from the thrusters, it's very inefficient relative to both drones and other bipedal robots.

When walking on the ground, LEO (which weighs 2.5 kg and is 75 cm tall) sucks down 544 watts, of which 445 watts go to the propellers and 99 watts are used by the electronics and legs. When flying, LEO's power consumption almost doubles, but it's obviously much faster—the robot has a cost of transport (a measure of efficiency of self-movement) of 108 when walking at a speed of 20 cm/s, dropping to 15.5 when flying at 3 m/s. Compare this to the cost of transport for an average human, which is well under 1, or a typical quadrupedal robot, which is in the low single digits. The most efficient humanoid we've ever seen, SRI's DURUS, has a cost of transport of about 1, whereas the rumor is that the cost of transport for a robot like Atlas is closer to 20.
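
Those numbers are easy to sanity-check, since cost of transport is just power divided by weight times speed, CoT = P / (mgv). Note that the flying power below is my rough estimate based on the "almost doubles" figure, not a reported value.

```python
# Cost of transport: CoT = P / (m * g * v).
def cost_of_transport(power_w: float, mass_kg: float, speed_mps: float) -> float:
    g = 9.81  # m/s^2
    return power_w / (mass_kg * g * speed_mps)

mass = 2.5  # kg
print(cost_of_transport(544, mass, 0.20))  # walking: ~111, close to the reported 108
print(cost_of_transport(1100, mass, 3.0))  # flying (power assumed ~doubled): ~15
```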

Long term, this low efficiency could be a problem for LEO, since its battery life is good for only about 100 seconds of flight or 3.5 minutes of walking. But, explains Soon-Jo Chung, efficiency hasn't yet been a priority, and there's more that can potentially be done to improve LEO's performance, although always with some compromises:

The extreme balancing ability of LEO comes at the cost of continuously running propellers, which leads to higher energy consumption than leg-based ground robots. However, this stabilization with propellers allowed the use of low-power leg servo motors and lightweight legs with flexibility, which was a design choice to minimize the overall weight of LEO to improve its flying performance.

There are possible ways to improve the energy efficiency by making different design tradeoffs. For instance, LEO could walk with the reduced support from the propellers by adopting finite feet for better stability or higher power [leg] motors with torque control for joint actuation that would allow for fast and accurate enough foot position tracking to stabilize the walking gait. In such a case, propellers may need to turn on only when the legs fail to maintain stability on the ground without having to run continuously. These solutions would cause a weight increase and lead to a higher energy consumption during flight maneuvers, but they would lower energy consumption during walking. In the case of LEO, we aimed to achieve balanced aerial and ground locomotion capabilities, and we opted for lightweight legs. Achieving efficient walking with lightweight legs similar to LEO's is still an open challenge in the field of bipedal robots, and it remains to be investigated in future work.

A rendering of a future version of LEO with fancy yellow skins

At this point in its development, the Caltech researchers have been focusing primarily on LEO's mobility systems, but they hope to get LEO doing useful stuff out in the world, and that almost certainly means giving the robot autonomy and manipulation capabilities. At the moment, LEO isn't particularly autonomous, in the sense that it follows predefined paths and doesn't decide on its own whether it should be using walking or flying to traverse a given obstacle. But the researchers are already working on ways in which LEO can make these decisions autonomously through vision and machine learning.

As for manipulation, Chung tells us that "a new version of LEO could be appended with lightweight manipulators that have similar linkage design to its legs and servo motors to expand the range of tasks it can perform," with the goal of "enabling a wide range of robotic missions that are hard to accomplish by the sole use of ground or aerial robots."

Perhaps the most well-suited applications for LEO would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and could use robotic workers. For instance, high voltage line inspection or monitoring of tall bridges could be good applications for LEO, and LEO has an onboard camera that can be used for such purposes. In such applications, conventional biped robots have difficulties with reaching the site, and standard multi-rotor drones have an issue with stabilization in high disturbance environments. LEO uses the ground contact to its advantage and, compared to a standard multi-rotor, is more resistant to external disturbances such as wind. This would improve the safety of the robot operation in an outdoor environment where LEO can maintain contact with a rigid surface.

It's also tempting to look at LEO's ability to more or less just bypass so many of the challenges in bipedal robotics and think about ways in which it could be useful in places where bipedal robots tend to struggle. But it's important to remember that because of the compromises inherent in its multimodal design, LEO will likely be best suited for very specific tasks that can most directly leverage what it's particularly good at. High voltage line and bridge inspection is a good start, and you can easily imagine other inspection tasks that require stability combined with vertical agility. Hopefully, improvements in efficiency and autonomy will make this possible, although I'm still holding out for what Caltech's Chung originally promised: "the ultimate form of demonstration for us will be to build two of these Leonardo robots and then have them play tennis or badminton."
