Feed aggregator

Multi-legged animals such as myriapods can locomote on unstructured rough terrain using their flexible bodies and legs. This highly adaptive locomotion emerges through the dynamic interactions between an animal’s nervous system, its flexible body, and the environment. Previous studies have primarily focused on either adaptive leg control or the passive compliance of the body parts and have shown how each enhanced adaptability to complex terrains in multi-legged locomotion. However, the essential mechanism that combines adaptive locomotor circuits and bodily flexibility remains unclear. In this study, we focused on centipedes and aimed to understand the well-balanced coupling between the two abovementioned mechanisms for rough-terrain walking by building a neuromechanical model based on behavioral findings. In the behavioral experiment, we observed a centipede walking when part of the terrain was temporarily removed and thereafter restored. We found that the ground contact sense of each leg was essential for generating rhythmic leg motions and also for establishing adaptive footfall patterns between adjacent legs. Based on this finding, we proposed decentralized control mechanisms using ground contact sense and implemented them in a physical centipede model with a flexible body and legs. In the simulations, our model self-organized the typical gait on flat terrain and adaptive walking during gap crossing, both similar to those of real centipedes. Furthermore, we demonstrated that locomotor performance deteriorated on rough terrain when adaptive leg control was removed or when the body was rigid, which indicates that both adaptive leg control and a flexible body are essential for adaptive locomotion. Thus, our model is expected to capture the possible essential mechanisms underlying adaptive centipede walking and to pave the way for designing multi-legged robots with high adaptability to irregular terrain.
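
To make the contact-driven coordination idea concrete, below is a minimal sketch of a decentralized, contact-gated leg-oscillator rule of the general kind the abstract describes: each leg keeps its own phase, and a purely local ground-load feedback shapes when it swings. The rule, gains, leg count, and the stand-in load signal are illustrative assumptions, not the controller proposed in the paper.

```python
import numpy as np

# Minimal sketch of decentralized, contact-gated leg oscillators.
# The specific phase rule, gains, and leg count are illustrative
# assumptions, not the controller proposed in the paper.

N_LEGS = 20          # hypothetical number of legs
OMEGA = 2 * np.pi    # intrinsic stepping frequency [rad/s]
SIGMA = 5.0          # strength of the ground-contact feedback
DT = 0.001           # integration step [s]

phases = np.random.uniform(0, 2 * np.pi, N_LEGS)

def sensed_load(phases):
    """Stand-in for each leg's sensed ground reaction force: here we
    simply pretend a leg bears load during the half-cycle we treat as
    stance. On a robot this would come from foot contact sensors."""
    return np.clip(np.sin(phases), 0.0, None)

for _ in range(10_000):
    load = sensed_load(phases)
    # Local rule: a loaded leg delays its transition to swing, while an
    # unloaded leg free-runs. With flexible body/leg mechanics, such
    # purely local feedback can let inter-leg coordination self-organize
    # without any explicit central gait clock.
    dphi = OMEGA - SIGMA * load * np.cos(phases)
    phases = (phases + DT * dphi) % (2 * np.pi)

print(np.sort(phases)[:5])   # inspect a few phases after settling
```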

Maze navigation using one or more robots has become a recurring challenge in the scientific literature and in real-life practice, with fleets having to find faster and better ways to navigate environments such as travel hubs and airports, or to evacuate disaster zones. Many methodologies have been explored to solve this issue, including the implementation of a variety of sensors and other signal-receiving systems. Most interestingly, camera-based techniques have become more popular in these scenarios, given their robustness and scalability. In this paper, we implement an end-to-end strategy to address this scenario, allowing a robot to solve a maze autonomously using computer vision and path planning. In addition, this robot shares the generated knowledge with a second robot by means of communication protocols; the second robot must then adapt the solution to its own mechanical characteristics to solve the same challenge. The paper presents experimental validation of the four components of this solution, namely camera calibration, maze mapping, path planning, and robot communication. Finally, we showcase some initial experimentation with a pair of robots with different mechanical characteristics. Further applications of this work include connecting the robots for other tasks, such as teaching assistance, remote classes, and other innovations in higher education.
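
As one concrete instance of the path-planning component, here is a minimal breadth-first-search planner over an occupancy-grid maze; the grid representation and function names are assumptions for illustration, since the planner actually used in the paper is not detailed here.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.
    grid[r][c] == 0 means free, 1 means wall. Returns a list of
    (row, col) cells from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []           # reconstruct by walking parents back to the start
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None

# Example: a 4x4 maze; 0 = free cell, 1 = wall.
maze = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(plan_path(maze, (0, 0), (3, 3)))
```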

Multirotor drones are becoming increasingly popular in a number of application fields, with a unique appeal to the scientific community and the general public. Applications include security, monitoring and surveillance, environmental mapping, and emergency scenario management: in all these areas, two of the main issues to address are the availability of appropriate software architectures to coordinate teams of drones and solutions to cope with the short-term battery life. This article proposes the novel concepts of Social Drone Sharing (SDS) and Social Charging Station (SCS), which provide the basis to address these problems. Specifically, the article focuses on teams of drones in pre- and post-event monitoring and assessment. Using multirotor drones in these situations can be difficult due to the limited flight autonomy when multiple targets need to be inspected. The idea behind the SDS concept is that citizens can volunteer to recharge a drone or replace its batteries if it lands on their property. The computation of paths to inspect multiple targets will then take into account the availability of SCSs to find solutions compatible with the required inspection and flight times. The main contribution of this article is the development of a cloud-based software architecture for SDS mission management, which includes a multi-drone path-optimization algorithm taking the SDS and SCS concepts into account. Experiments in simulation and a lab environment are discussed, paving the way toward a larger trial in a real scenario.
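
To illustrate how charging-station availability can enter the path computation, here is a hedged greedy sketch that detours to the nearest Social Charging Station whenever the next inspection leg would exceed the drone's remaining range. The greedy rule, the function names, and the assumption that a station is always reachable are illustrative simplifications, not the article's multi-drone optimization algorithm.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_with_charging(start, targets, stations, max_range):
    """Greedy sketch: always fly to the nearest remaining target; if that
    leg would exceed the remaining range, detour to the nearest Social
    Charging Station and recharge first. For simplicity the sketch assumes
    a station is always within the remaining range -- the article's actual
    optimization handles such constraints properly."""
    route, pos, remaining = [start], start, max_range
    todo = list(targets)
    while todo:
        nxt = min(todo, key=lambda t: dist(pos, t))
        if dist(pos, nxt) > remaining:
            scs = min(stations, key=lambda s: dist(pos, s))
            route.append(scs)        # land and recharge at a volunteer's property
            pos, remaining = scs, max_range
            continue
        remaining -= dist(pos, nxt)
        route.append(nxt)
        pos = nxt
        todo.remove(nxt)
    return route

# Two inspection targets, one volunteered charging spot, 9 km of range.
print(plan_with_charging((0, 0), [(3, 4), (9, 0)], [(5, 1)], max_range=9.0))
```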



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, the Netherlands
CLAWAR 2022: 12–14 September 2022, Açores, Portugal

Enjoy today's videos!

Novel technological solutions at the service of archaeology are being tested in Pompeii. One of the latest monitoring operations of the archaeological structures was recently carried out with the aid of Spot, a quadruped robot capable of inspecting even the smallest of spaces in complete safety, gathering and recording data useful for the study and planning of interventions.

[ Pompeii ]

A drone show in Japan in support of Ukraine.

[ Robotstart ]

In this paper, we propose a lip-inspired soft robotic gripper. This gripper is motivated by animals’ oral structure, especially the lips. Lips have various functions: holding, regrasping, sucking in, and spitting out objects. This gripper focuses in particular on the functions of holding and regrasping. We validated the capability of the gripper’s lip pouch with various objects through experiments. Moreover, we demonstrated regrasping objects with this gripper.

[ Kimlab ]

A small drone with a 360-degree camera on top of it has no problems creating a dense map of a complex environment, including the insides of pipes.

[ RoblabWHGe ]

Thanks, Hartmut!

I have no idea what’s happening here, and perhaps it’s better that way.

[ Naver Labs ]

EPFL engineers have developed a silicone raspberry that can help teach harvesting robots to grasp fruit without exerting too much pressure.

[ EPFL ]

Robots have now conquered Habitrail environments!

[ Paper ]

Welcome, human. Your job is to watch this robot not fall over, to music.

[ Agility Robotics ]

Robotic wheelchairs may soon be able to move through crowds smoothly and safely. As part of CrowdBot, an E.U.-funded project, EPFL engineers are exploring the technical, ethical, and safety issues related to this kind of technology. The aim of the project is to eventually help the disabled get around more easily.

[ EPFL ]

Self-driving cars are expected on our roads soon. In the project SNOW (Self-driving Navigation Optimized for Winter), we focus on the unexplored problem of autonomous driving during winter, which still raises reliability concerns. We have the expertise to automatically build 3D maps of the environment while moving through it with robots. We aim at using this knowledge to investigate mapping and control solutions for challenging conditions related to Canadian weather.

[ Norlab ]

The amphibious drone of the PON PLaCE project and its shelter station made their debut in a real scenario, an artificial lake. During the three-day test, the various systems and automatisms of this sophisticated drone were tested, from autonomous aerial take-off and monitoring, to ditching and on-site testing of biological parameters in the water column (pH, temperature, salinity, photosynthetically active radiation, chlorophyll).

[ PlaCE ]

The HEBI Robotics Platform can seamlessly integrate with other robots and tools. In this demo, a HEBI arm and vision system is connected to a Clearpath Jackal.

[ HEBI Robotics ]

With a screwdriver and about 3 minutes, you can replace the vacuum motor in a Roomba S9. I’ve never had durability issues with my Roombas, but I really appreciate the thoughtfulness that goes into their repairability.

[ iRobot ]

For Episode 13 of the Robot Brains Podcast, we’re joined by industry pioneer Dean Ayanna Howard. She began working at NASA’s JPL at 18 years old to help build the Mars rover and never slowed down from there. She is a successful roboticist, entrepreneur, and educator, and is the author of the recent book Sex, Race, and Robots: How to Be Human in the Age of AI.

[ Robot Brains ]

Waymo just started operating its vehicles with no in-car safety drivers, although they may or may not be “fully autonomous,” depending on what definition you use. Anyway, here’s how it’s going.

[ Waymo ]

An IUI 2022 keynote by Stuart Russell, on “Provably Beneficial Artificial Intelligence.”

[ IUI 2022 ]




In robot localisation and mapping, outliers are unavoidable when loop-closure measurements are taken into account. A single false-positive loop closure can have a very negative impact on a SLAM problem, causing an inferior trajectory to be produced or even making the optimisation fail entirely. To address this issue, popular existing approaches define a hard switch for each loop-closure constraint. This paper presents AEROS, a novel approach to adaptively solve a robust least-squares minimisation problem by adding just a single extra latent parameter. It can be used in the back-end component of the SLAM system to enable generalised robust cost minimisation by simultaneously estimating the continuous latent parameter along with the set of sensor poses in a single joint optimisation. This leads to a very close fit to the distribution of the residuals, thereby reducing the effect of outliers. Additionally, we formulate the robust optimisation problem using standard Gaussian factors so that it can be solved by direct application of popular incremental estimation approaches such as iSAM. Experimental results on publicly available synthetic datasets and real LiDAR-SLAM datasets collected from 2D and 3D LiDAR systems show the competitiveness of our approach with state-of-the-art techniques and its superiority in real-world scenarios.
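
The single extra latent parameter is reminiscent of an adaptive robust kernel whose shape parameter is estimated jointly with the sensor poses. The sketch below evaluates a Barron-style general robust loss for a given shape parameter; it is offered only as an illustration of that family of kernels, and the exact formulation used in AEROS may differ.

```python
import numpy as np

def adaptive_robust_loss(r, alpha, c=1.0):
    """Barron-style general robust loss of a residual r, with shape
    parameter alpha and scale c. Special cases alpha = 2 (least squares)
    and alpha = 0 (Cauchy/Lorentzian) are handled explicitly. An
    AEROS-style back end would estimate a single latent shape parameter
    jointly with the poses; this helper only evaluates the loss."""
    x = (np.asarray(r) / c) ** 2
    if alpha == 2.0:
        return 0.5 * x
    if alpha == 0.0:
        return np.log1p(0.5 * x)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((x / b + 1.0) ** (alpha / 2.0) - 1.0)

residuals = np.array([0.1, 0.5, 5.0])        # the large residual mimics a bad loop closure
print(adaptive_robust_loss(residuals, 2.0))  # quadratic: the outlier dominates
print(adaptive_robust_loss(residuals, -2.0)) # heavy-tailed: the outlier is down-weighted
```

With alpha = 2 the cost is ordinary least squares, while driving alpha negative makes the cost saturate, so a single bad loop closure contributes a bounded penalty instead of dragging the whole trajectory.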



If we’ve learned anything over the past few years, it’s how important it is not to take physical contact for granted. Unfortunately, physical contact with humans is something that robots find especially difficult and occasionally dangerous, so that we cannot (yet) safely use them as a proxy for nuanced physical contact with another person. It’s not just that robots are strong and humans are squishy (although both of those things are true), it’s that there are a lot of complex facets to human-on-human interaction that robots simply don’t understand.

In 2018, we wrote about research by Alexis E. Block and Katherine J. Kuchenbecker from the Haptic Intelligence Department at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, on teaching robots to give good hugs. Over the last several years, they’ve continued this research (along with coauthors Sammy Christen, Hasti Seifi, Otmar Hilliges, and Roger Gassert), and have just published a paper outlining the introduction of a new hugging robot along with 11 commandments that robots can follow to give hugs that humans will be able to appreciate and enjoy—without getting squished.

To grasp why the apparently simple act of hugging demands so much research effort, next time you hug another person pay careful attention to what you’re doing and what they’re doing, and you’ll begin to understand. Hugs are interactive, emotional, and complex, and giving a good hug (especially to someone whom you don’t know well or have never hugged before) is challenging. It takes a lot of social experience and intuition, which is another way of saying that it’s a hard robotics problem, because social experience and intuition are things that robots tend not to be great at. Obviously, robotic embraces are never going to supplant human hugs, but the idea here is that sometimes getting physical human comfort is difficult or impossible, and in these cases, maybe robots could have something useful to offer.

In this paper—just accepted in the ACM Transactions on Human-Robot Interaction (THRI)—Block used a data-driven approach to develop the commandments for hugging robots, building off of research also presented at the 2021 Human-Robot Interaction Conference. Through a series of hardware iterations and user studies, the original PR2-based robotic hugging platform (HuggieBot) has been completely rebuilt and upgraded to HuggieBot 3.0, which is “the first fully autonomous human-sized hugging robot that recognizes and responds to the user’s intra-hug gestures.”

HuggieBot 3.0 is built around two six-degree-of-freedom Kinova JACO arms mounted horizontally on a custom metal frame, on top of a v-shaped horizontal base that makes it easier for humans to get in nice and close. The arms are padded, and the end effectors have mittens on them. Placed over the frame are chest and back panels made of air-filled chambers that provide softness as well as pressure sensing, and there’s a heating pad on top of each air chamber to make sure that the robot is nice and warm.

When HuggieBot detects a user in its personal space, it opens its arms and invites the user in for a hug. Based on the height and size of the user, the robot does its best to appropriately place its arms, even making sure that its wrist joints are oriented to keep the end effector contact as flat as possible. The robot, being a robot, will hug you until you’re all hugged out, but releasing your embrace or starting to back away will signal HuggieBot that you’re done and it’ll let you go, presumably with some reluctance. But if you want another hug, go for it, because no two hugs from the robot will ever be identical.

“Hugging HuggieBot 3.0 is (in my humble and unbiased opinion) really enjoyable,” author Alexis E. Block tells IEEE Spectrum. “We are not trying to fool anyone by saying that it feels like hugging a person, because it does not. You’re hugging a robot, but that doesn’t mean that it can’t be enjoyable.”

Part of making hugs enjoyable for humans involves the use of intrahug gestures, the development and testing of which is one of the major contributions of the new paper. Intrahug gestures are the things you do with your arms and hands midhug, and while you may not always be consciously aware that you’re doing them, they could include things like gentle rubbing, pats, or squeezes.

The hug “background” gesture is a hold, but (and you should absolutely try this at home), just doing an extended static hold-type hug will definitely make a hug feel kind of robotic. Human hugs involve extra gestures, and HuggieBot is now equipped for this. It’s able to classify the gestures that the human makes and respond with gestures of its own, although (to avoid being too robotic) those gestures aren’t always directly reciprocal, and sometimes the robot will initiate them independently. While the current version of HuggieBot can only rub, pat, or squeeze, future versions may also be able to perform other intrahug gestures, like leaning, or even tickling, if you’re into that.

Here are all 11 of the commandments that HuggieBot 3.0 follows:

  1. A hugging robot shall be soft.
  2. A hugging robot shall be warm.
  3. A hugging robot shall be sized similar to an adult human.
  4. When a hugging robot is the one initiating the interaction, it shall autonomously invite the user for a hug when it detects someone in its personal space. A hugging robot should wait for the user to begin walking toward it before closing its arms to ensure a consensual and synchronous hugging experience.
  5. A hugging robot shall autonomously adapt its embrace to the size and position of the user’s body, rather than hug in a constant manner.
  6. A hugging robot shall reliably detect and react to a user’s desire to be released from a hug regardless of his or her arm positions.
  7. A good hugging robot shall perceive the user’s height and adapt its arm positions accordingly to comfortably fit around the user at appropriate body locations.
  8. It is advantageous for a hugging robot to accurately detect and classify gestures applied to its torso in real time, regardless of the user’s hand placement.
  9. Users like a robot that responds quickly to their intrahug gestures.
  10. To avoid appearing too robotic and to help conceal inevitable errors in gesture perception, a hugging robot shall not attempt perfect reciprocation of intrahug gestures. Rather, the robot should adopt a gesture response paradigm that blends user preferences with slight variety and spontaneity.
  11. To evoke user feelings that the robot is alive and caring, a hugging robot shall occasionally provide unprompted, proactive affective social touch to the user through intrahug gestures.
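
As a rough illustration of the response paradigm in commandments 8 through 11 (classify the user's gesture, respond quickly but without perfect reciprocation, and occasionally act proactively), here is a hedged sketch of a gesture-response policy. The gesture names and probabilities are invented for illustration and are not taken from the HuggieBot papers.

```python
import random

# Hypothetical sketch of a non-reciprocal intrahug gesture response policy:
# respond promptly, usually with a gesture of similar "warmth," but mix in
# variety and occasional proactive gestures rather than always mimicking.

GESTURES = ["hold", "rub", "pat", "squeeze"]

def respond_to_gesture(detected, mimic_prob=0.4):
    """Pick a response to a detected user gesture without always
    parroting it back; detected=None means the user is holding still."""
    if detected is None:
        # No user gesture for a while: occasionally act proactively.
        return random.choice(["squeeze", "rub"]) if random.random() < 0.2 else "hold"
    if random.random() < mimic_prob:
        return detected                      # sometimes reciprocate directly
    # Otherwise choose a different gesture to avoid feeling mechanical.
    return random.choice([g for g in GESTURES if g != detected])

print(respond_to_gesture("pat"))
```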

The researchers tested out HuggieBot 3.0 with actual human volunteers who seemed perfectly okay being partially crushed by an experimental hugging robot. Some of them couldn’t seem to get enough, in fact:

The study participants were generally able to successfully detect and classify the majority of HuggieBot’s intrahug gestures. People appreciated the gestures, too, commenting that they helped the robot feel more alive, more social, and more realistic. Squeezes were particularly popular, with participants commenting that it felt “closest to a real human hug” and gave them “a sense of security and comfort.” Interestingly, some people characterized the hugs that they received from the robot in very specific, anthropomorphic ways, often attributing emotions, mood swings, and attitudes to the robot depending on their perception of the hug. Hugs were described as “a comforting hug from a mother,” “a distant relative at a funeral,” “receiving a pity hug from someone who doesn’t want to,” and even “hugging a lover.” Overall, the majority of the study participants felt that the experience was pretty great, and 40 percent of them said that they came to think of HuggieBot as their friend.

It’s important to note that this is not an “in-the-wild” study of HuggieBot. It took place in a laboratory environment, and all of the participants specifically signed up to be part of a robot hugging study, for which they were compensated. “We look forward to conducting a thorough in-the-wild study to see how many everyday people would and would not be interested in hugging a robot,” say the researchers. Ideally, such a study would take place over weeks and months, to help determine how much of HuggieBot’s appeal is simply due to novelty—always a potential problem for new robots.

The other thing to keep in mind is that, as the researchers point out, HuggieBot does not in any way understand what hugs mean:

We acknowledge that the current version of our robot does not deliver on the full aspirational goal of a hugging robot. Rather, HuggieBot simulates a hug in a reasonably compelling way, and our data suggest that users enjoy the hug and can engage with the robot and relate to it as an autonomous being. However, in its current state, HuggieBot does not have an internal emotional model similar to humans, and thus it is not capable of engaging in the embodied emotional experience of a hug.

Fortunately, hugs are both an emotional thing and a physical thing, and even an emotionless robot can use physical contact to potentially have a tangible, measurable impact on the emotional states of humans—something that the researchers do in fact hope to measure in a quantitative way.

We certainly are not trying to replace human hugs but to provide a supplement when it might be difficult or impossible to receive a hug from another person. —Alexis Block

We’ll talk about their future work in just a minute, but first, IEEE Spectrum spoke with first author Alexis Block (who just received the Max Planck Society’s Otto Hahn Medal for her work on HuggieBot) for more details on this new generation of robot hugs:

We asked you in 2018 why teaching robots to hug is important. Three years on, what do you now think the importance of this research is?

Alexis Block with her hugging robot

Alexis Block: I think the COVID-19 pandemic has made the importance of this research more salient than ever. While we could never have foreseen a situation like we are currently experiencing, in 2018 we were researching robot hugs so we could one day provide the emotional support and health benefits of human hugs to people wherever or whenever they needed it. At the time, my advisor Katherine Kuchenbecker and I were primarily thinking of friends and family separated by a physical distance, like how we were separated from our families (living in Germany and our families living in the United States). Before the pandemic, we hugged, high-fived, and otherwise socially contacted our friends and family without a second thought. Now, many of us have realized that social distancing and the resulting lack of physical contact can harm our overall well-being. Even after we return to “normal,” some members of our society may be medically more vulnerable and will not be able to join us. We believe HuggieBot could be used as a tool to supplement, not replace, human hugs for situations when it is difficult or uncomfortable to get the support you need or want from another person.

Can you share some qualitative feedback from study participants?

Block: In our validation study with HuggieBot 3.0, the average user hug duration was about 25 seconds long. For comparison, the average hug between humans is 2 to 3 seconds. To receive the positive benefits of deep pressure touch, researchers have found 20 seconds of constant hugging between romantic partners is necessary, and our users, on average, hugged our robot for even longer. We made it clear to our users that they were free to hug the robot for as long or as short a duration as they liked. Compare hugging a stranger or acquaintance (2 to 3 seconds) to hugging a partner, a friend, or a family member (20 seconds).

On average, based on the duration of how long our users felt comfortable hugging our robot, we think users treated HuggieBot 3.0 more like a partner, friend, or family member than a stranger or acquaintance. That was impressive because they had never met HuggieBot before! In their free-response questionnaires, several of our users mentioned that they thought the robot was their “friend” by the end of the experiment. We believe these results speak to the quality of the embrace users felt during the embrace; they truly felt like they were hugging a friend, which was especially meaningful because we conducted this study during the pandemic.

Many participants seemed very happy while hugging the robot! But did you have anyone react negatively?

Block: While most users had positive things to say about the robot and their experience during the study, two users still mentioned that while they enjoyed the interaction, they didn’t understand the purpose of a hugging robot because they felt that human hugs are “irreplaceable.” We certainly are not trying to replace human hugs but to provide a supplement when it might be difficult or impossible to receive a hug from another person.

Are there any ways in which robot hugs are potentially superior to human hugs?

Block: The main way robot hugs are potentially superior to human hugs is due to the lack of social pressure. When you’re hugging HuggieBot, you know you’re hugging a robot and not another person, and that’s part of its beauty. You don’t have to worry about being judged for needing to be held “too long” or “too tight.” Instead, the robot is there to support you and your needs. Many users commented that they feel more comfortable hugging the robot than other people because they don’t have to worry about the timing or judgment aspect involved with hugging another person.

You are likely the world’s foremost expert on robot hugs—what have you learned over the past several years that was most surprising to you?

Block: One surprising result was that when investigating how the robot should respond to users’ intrahug gestures, we initially thought the robot should mimic the gestures it felt. But, interestingly, users expressed that they wanted a variety of gestures in response to theirs instead of one-to-one reciprocation. Furthermore, they explained that it felt superficial and mechanical when the robot parroted back their gestures. However, when the robot responded with a different gesture of a similar “emotional investment level,” they mentioned feeling like the robot “understands [them] and makes his own decision.”

We also were unsure how users would respond to proactive robotic intrahug gestures, which is when the robot squeezes, pats, or rubs a user who is holding still within the hug. We worried, particularly with the squeeze, that the users would be alarmed by the unprompted motion and think that the robot was malfunctioning. In this instance, we were pleasantly surprised to find that users really enjoyed proactive robotic intrahug gestures, mentioning that they felt the robot was comforting them rather than responding to their inputs. Furthermore, they attributed emotions and feelings to the robot, saying they felt the robot cared about them when it chose to perform its own gesture.

Ultimately, if we can help even just a few people be a little happier by giving them a way to hug friends and family they thought they wouldn’t be able to, I think that would be an incredible outcome. —Alexis Block

How do you hope that your research will be applied in useful ways in the future?

Block: Back in 2016, when Katherine and I started this work as my master’s thesis at the University of Pennsylvania, we were inspired because our families lived far away, and we missed them. Especially given the COVID-19 pandemic and the resulting isolation, I think many people now understand first-hand the significant effect social touch with friends and loved ones has on our mental health. I hope that in the future, this research can be used to help strengthen personal relationships separated by a physical distance. Ultimately, if we can help even just a few people be a little happier by giving them a way to hug friends and family they thought they wouldn’t be able to, I think that would be an incredible outcome.

Block is already testing an upgraded version of HuggieBot: HuggieBot 4.0 is the best hugging robot yet, featuring improved hug positioning and better prehug technique, among other upgrades. “With these improvements to HuggieBot, we finally felt we had a version of a hugging robot that was of high enough quality to compare to hugging another person,” says Block. This comparison will be physiological, measuring whether and to what extent hugging a robot may elicit physical responses that are similar to hugging a real human. The researchers plan to “induce stress upon voluntary participants” (!) and then provide either an active human hug, a passive human hug, an active robot hug, or a passive robot hug and use periodic saliva measurements to measure cortisol and oxytocin levels. Hopefully, the results will show that humans can derive real benefits from robot hugs, and that when human hugs are not an option, we can look for a soft, warm robotics embrace instead.



Space manipulator arms often exhibit significant joint flexibility and limited motor torque. Future space missions, including satellite servicing and large structure assembly, may involve the manipulation of massive objects, which will accentuate these limitations. Currently, astronauts use visual feedback on-orbit to mitigate oscillations and trajectory following issues. Large time delays between orbit and Earth make ground teleoperation difficult in these conditions, so more autonomous operations must be considered to remove the astronaut resource requirement and expand robotic capabilities in space. Trajectory planning for autonomous systems must therefore be considered to prevent poor trajectory tracking performance. We provide a model-based trajectory generation methodology that incorporates constraints on joint speed, motor torque, and base actuation for flexible-joint space manipulators while minimizing total trajectory time. Full spatial computer simulation results, as well as physical experiment results with a single-joint robot on an air bearing table, show the efficacy of our methodology.
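
As a much-simplified illustration of constraint-aware trajectory timing, the sketch below stretches the duration of a rest-to-rest joint move until its peak speed and a crude rigid-body torque estimate respect their limits. It ignores joint flexibility and base actuation entirely, and every name and number in it is an assumption rather than the paper's model-based method.

```python
import numpy as np

def scale_trajectory_duration(q_of_s, T_guess, qd_max, tau_max, inertia):
    """Crude time-scaling sketch: stretch the total duration T until the
    peak joint speed and a rigid-body torque estimate (inertia * accel)
    respect their limits. Illustration only -- the paper's method also
    accounts for joint flexibility and base actuation."""
    s = np.linspace(0.0, 1.0, 200)
    q = q_of_s(s)
    T = T_guess
    while True:
        t = s * T
        qd = np.gradient(q, t)
        qdd = np.gradient(qd, t)
        if np.max(np.abs(qd)) <= qd_max and np.max(np.abs(inertia * qdd)) <= tau_max:
            return T
        T *= 1.1   # slow the whole motion down and try again

# Example: a rest-to-rest single-joint move of 1 rad along a quintic profile.
quintic = lambda s: 10 * s**3 - 15 * s**4 + 6 * s**5
print(scale_trajectory_duration(quintic, T_guess=0.5, qd_max=0.5, tau_max=2.0, inertia=4.0))
```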

Exoskeletons and, more generally, wearable mechatronic devices represent a promising opportunity for rehabilitation and assistance to people with temporary and/or permanent diseases. However, there are still some limits to the diffusion of robotic technologies for neuro-rehabilitation, notwithstanding their technological development and evidence of clinical effectiveness. One of the main bottlenecks constraining the complexity, weight, and cost of exoskeletons is the actuators. This problem is particularly evident in devices designed for the upper limb, and in particular for the hand, in which dimensional limits and kinematic complexity are particularly challenging. This study presents the design and prototyping of a hand finger exoskeleton. In particular, we focus on the design of a gear-based differential mechanism aimed at coupling the motion of two adjacent fingers and limiting the complexity and cost of the system. The exoskeleton is able to actuate the flexion/extension motion of the fingers and apply bidirectional forces; that is, it is able to both open and close the fingers. The kinematic structure of the finger actuation system has the peculiarity of presenting three DoFs when the exoskeleton is not worn and one DoF when it is worn, allowing better adaptability and higher wearability. The design of the gear-based differential is inspired by the mechanism widely used in the automotive field; it allows actuating two fingers with one actuator only, keeping their movements independent.
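
The coupling can be pictured through the kinematic constraint of an automotive-style bevel-gear differential, in which the two output (finger) speeds average to the carrier (motor) speed, so one finger can stall against an object while the other keeps closing. The sketch below only illustrates that relation under that simplifying assumption; it does not reflect the prototype's actual gear ratios or geometry.

```python
# Illustrative kinematic constraint of an automotive-style bevel-gear
# differential driving two fingers from one motor (a simplifying
# assumption, not the prototype's actual gearing): the carrier (motor)
# speed is the average of the two output speeds.

def finger_speeds(motor_speed, speed_split):
    """speed_split in [-1, 1]: 0 means both fingers move equally,
    +/-1 means one finger is stopped and the other moves at twice
    the motor speed."""
    finger_a = motor_speed * (1.0 + speed_split)
    finger_b = motor_speed * (1.0 - speed_split)
    assert abs((finger_a + finger_b) / 2.0 - motor_speed) < 1e-9
    return finger_a, finger_b

print(finger_speeds(1.0, 0.3))   # -> (1.3, 0.7)
```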

Teleoperation is one of the oldest applications of human-robot interaction, yet decades later, robots are still difficult to control in a variety of situations, especially when used by non-expert operators. That difficulty has relegated teleoperation to mostly expert-level use cases, though everyday jobs and lives could benefit from teleoperated robots by enabling people to get tasks done remotely. Research has made great progress by improving the capabilities of robots and exploring a variety of interfaces to improve operator performance, but many non-expert applications of teleoperation are limited by the operator’s ability to understand and control the robot effectively. We discuss the state of the art of user-centered research on teleoperation interfaces, the challenges teleoperation researchers face, and how an increased focus on human-centered teleoperation research can help push teleoperation into more everyday situations.

The handshake has been the most widely accepted gesture of greeting in many cultures for many centuries. To date, robotic arms are not capable of fully replicating this typical human gesture. Using multiple sensors that detect contact forces and displacements, we characterized the movements that occurred during handshakes. A typical human-to-human handshake took around 3.63 s (SD = 0.45 s) to perform. It can be divided into three phases: reaching (M = 0.92 s, SD = 0.45 s), contact (M = 1.96 s, SD = 0.46 s), and return (M = 0.75 s, SD = 0.12 s). The handshake was further investigated to understand its subtle movements. Using a multiphase jerk-minimization model, a smooth human-to-human handshake can be modelled with fifth- or fourth-degree polynomials in the reaching and return phases, and a sinusoidal function with exponential decay in the contact phase. We show that the contact phase (1.96 s) can be further divided into the following subphases: preshake (0.06 s), main shake (1.31 s), postshake (0.06 s), and a period of no movement (0.52 s) just before both hands are retracted. We compared these to existing handshake models that have been proposed for physical human-robot interaction (pHRI). From our findings in human-to-human handshakes, we proposed guidelines for a more natural handshake movement between humanoid robots and their human partners.
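
Below is a hedged sketch of how such a multiphase model can be assembled: a fifth-degree minimum-jerk polynomial for the reaching and return phases and a decaying sinusoid for the contact phase, stitched together using the mean durations reported above. The amplitude, frequency, and decay values are illustrative, not the parameters fitted in the study.

```python
import numpy as np

def min_jerk(t, T, x0, x1):
    """Classic minimum-jerk (fifth-degree polynomial) position profile
    from x0 to x1 over duration T -- the kind of polynomial used to
    model the reaching and return phases."""
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (x1 - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

def shake_phase(t, amplitude=0.03, freq=4.0, decay=1.5):
    """Decaying sinusoid standing in for the contact (shaking) phase.
    Amplitude [m], frequency [Hz], and decay rate are illustrative
    values, not parameters fitted in the paper."""
    return amplitude * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

# Stitch the three phases together with the mean durations reported above
# (reach ~0.92 s, contact ~1.96 s, return ~0.75 s).
t_reach = np.linspace(0, 0.92, 100)
t_shake = np.linspace(0, 1.96, 200)
t_back  = np.linspace(0, 0.75, 80)
hand_height = np.concatenate([
    min_jerk(t_reach, 0.92, 0.0, 0.30),   # raise the hand 30 cm to meet the partner
    0.30 + shake_phase(t_shake),          # oscillate about the grasp height
    min_jerk(t_back, 0.75, 0.30, 0.0),    # lower the hand back down
])
print(hand_height.shape, round(hand_height.max(), 3))
```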



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, Philadelphia
ERF 2022: 28–30 June 2022, Rotterdam, Netherlands
CLAWAR 2022: 12–14 September 2022, Azores, Portugal

Enjoy today’s videos!

I’m nearly convinced that all robots should be quadrupeds and humanoids and have wheels.

Also, I’m sorry, but looking at the picture at the top of this article I now CANNOT UNSEE the bottom half of the robot as an angry red face gripping those wheel limbs in its mouth.

[ Swiss-Mile ]

OTTO Lifter drives nimbly in crowded and dynamic environments and improves safety in warehouses and facilities. With advanced safety sensors and class-leading autonomous driving capabilities, OTTO Lifter works alongside people, other vehicles, and existing infrastructure, providing businesses with a safer material-handling solution for as low as $9 per hour.

I have mixed feelings about this, because I’ve worked in a factory before, and getting to drive a forklift was my only source of joy.

[ OTTO ]

When you create a humanoid robot that can punch through solid objects and then give it a black mustache and goatee, you are just asking for trouble.

[ DFKI ]

Welcome to feeling bad about your level of flexibility, with Digit.

[ Agility ]

I am only slightly disappointed that the new “ex-proof” ANYmal is not actually explosion-proof, but rather is unlikely to cause other things to explode.

Although I suppose this means that technically any other version of ANYmal is therefore much more likely to cause explosions, right?

[ ANYbotics ]

This is a compilation of robot failure videos I recorded over the past year while working on research projects related to legged robots. Legged robots are awesome, but the key to success is coping with failure. Thanks to the hard work of so many researchers in the community, we can now see legged robots performing these wonderful agile maneuvers.

Thanks to Steven Hong for recording and sharing these videos, and I hope you’re inspired to share some of your own failures. With the same kind of great commentary, of course.

[ ROAHM Lab ]

The thing to know about this research is that we now have a path toward getting a thruster-assisted 40 ton Gundam robot to run.

[ JSK ]

What makes me most uncomfortable about this video is the sound the eyelids make.

[ Child-type Android Project ]

The OpenCV AI Game Show is a thing that exists, and here’s a segment.

[ OpenCV ]

A long-horizon dexterous robot manipulation task involving deformable objects, such as banana peeling, is problematic because of difficulties in object modeling and a lack of knowledge about stable and dexterous manipulation skills. This paper presents a goal-conditioned dual-action deep imitation learning (DIL) approach that can learn dexterous manipulation skills using human demonstration data.

This is very impressive, but a simpler solution is to just outlaw bananas because they’re disgusting.

[ Paper ]

Presenting the arch-nemesis of bottle scramblers everywhere, the bottle unscrambler.

[ B&R Automation ]

How does the Waymo Driver safely handle interactions with cyclists in dense urban environments like San Francisco? Jack, a product manager at Waymo, shares a couple interactions and the personal connection he has with getting it right.

[ Waymo ]

On Episode 11 of Season 2 of the Robot Brains podcast, we’re joined by entrepreneur and philanthropist Jared Schrieber. He envisions a world where there are as many elementary and high school robotics teams as there are basketball or football teams. To make that vision a reality, he founded Revolution Robotics, a nonprofit dedicated to making robotics hardware and software kits accessible to all communities.

[ Robot Brains ]

Thanks, Alice!

A 2021 ICRA keynote from MIT’s Kevin Chen, on “Agile and Robust Micro-Aerial-Robots Powered by Soft Artificial Muscles.”

[ MIT ]

This GRASP SFI is from Shuran Song at Columbia University, on “The Reasonable Effectiveness of Dynamic Manipulation for Deformable Objects.”

From unfurling a blanket to swinging a rope, high-velocity dynamic actions play a crucial role in how people interact with deformable objects. In this talk, I will discuss how we can get robots to learn to dynamically manipulate deformable objects, where we embrace high-velocity dynamics rather than avoid them (e.g., by exclusively using slow pick-and-place actions). With robots that can fling, swing, or blow with air, our experiments show that these interactions are surprisingly effective for many classically hard manipulation problems and enable new robot capabilities.

[ UPenn ]



Over the last decade, there has been increased interest in developing aerial robotic platforms that exhibit grasping and perching capabilities, not only within the research community but also in companies across different industry sectors. Aerial robots range from standard multicopter vehicles/drones to autonomous helicopters and fixed-wing or hybrid devices. Such devices rely on a range of different solutions for achieving grasping and perching. These solutions can be classified as: 1) simple gripper systems, 2) arm-gripper systems, 3) tethered gripping mechanisms, 4) reconfigurable robot frames, 5) adhesion solutions, and 6) embedment solutions. Grasping and perching are two crucial capabilities that allow aerial robots to interact with the environment and execute a plethora of complex tasks, facilitating new applications that range from autonomous package delivery and search and rescue to autonomous inspection of dangerous or remote environments. In this review paper, we present the state of the art in aerial grasping and perching mechanisms and provide a comprehensive comparison of their characteristics. Furthermore, we analyze these mechanisms by comparing the advantages and disadvantages of the proposed technologies, and we summarize the significant achievements in these two research topics. Finally, we conclude the review by suggesting a series of potential future research directions that we believe are promising.
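
Purely as an illustration of the review's six-way taxonomy (not an artifact of the paper itself), the sketch below encodes the mechanism classes as a Python enum that could be used to tag surveyed platforms; the example platform entry is hypothetical.

```python
# Illustrative only: the six mechanism classes named in the review, as an enum.
from enum import Enum, auto

class GraspPerchMechanism(Enum):
    SIMPLE_GRIPPER = auto()        # 1) simple gripper systems
    ARM_GRIPPER = auto()           # 2) arm-gripper systems
    TETHERED_GRIPPER = auto()      # 3) tethered gripping mechanisms
    RECONFIGURABLE_FRAME = auto()  # 4) reconfigurable robot frames
    ADHESION = auto()              # 5) adhesion solutions
    EMBEDMENT = auto()             # 6) embedment solutions

# Hypothetical usage: tag a surveyed platform with its mechanism class.
platform = {"name": "example quadrotor with underslung claw",
            "mechanism": GraspPerchMechanism.SIMPLE_GRIPPER}
```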



Ten years ago this week (more or less), the Open Source Robotics Foundation announced that it was spinning out of Willow Garage as a more permanent home for the Robot Operating System. We covered this news at the time (which makes yours truly feel not quite so young anymore), but it wasn’t entirely clear just what would happen to OSRF long term.

Obviously, things have gone well over the last decade, not just for OSRF, but also for Gazebo, ROS, and the ROS community as a whole. OSRF is now officially Open Robotics, but that hasn’t stopped all sane people from continuing to call it OSRF anyway, because five syllables is just ridiculous. Meanwhile, ROS has been successful enough that it’s getting increasingly difficult to find alliterative turtle names to mark new releases.

To celebrate this milestone, we asked some of the original OSRF folks some awkward questions, including what it is about ROS or ROS users that scares them the most.

First, some fun statistics:

  • Unique visitor downloads of ROS packages in 2011: 4,517
  • Unique visitor downloads of ROS packages in 2021: 789,956
  • Public Github repositories currently tagged for ROS or ROS2: 6,559
  • Cumulative citations of the original ROS paper (::cough:: workshop paper ::cough::): 9,451
  • Number of syllables added by changing “OSRF” to “Open Robotics”: 1

For a bit more history, we sent a couple of questions to some OSRF folks who go way back, including Brian Gerkey (cofounder and CEO, Open Robotics), Ryan Gariepy (cofounder of Clearpath Robotics and OTTO Motors and Open Robotics board member), and Nate Koenig (cofounder and CTO, Open Robotics).

IEEE Spectrum: When did you first hear about ROS and/or Gazebo?

Nate Koenig: Out of the mouth of my advisor Andrew Howard as we worked on creating Gazebo back in 2002.

Brian Gerkey: I first heard about Gazebo in the early 2000s, when I was still in grad school at USC. We had written Player, a precursor to ROS, and Stage, a 2D indoor robot simulator that’s still used today. Andrew Howard and Nate started work on a 3D outdoor simulator, and they called it Gazebo because a gazebo is an outdoor stage (sort of). I first heard about ROS when I joined Willow Garage in early 2008. The team was iterating on a system called Switchyard that Morgan Quigley had built at Stanford. The working name was “ROS,” but there was plenty of debate on the name in the early days. I lobbied to make it version 3 of Player, but my argument did not carry the day.

Ryan Gariepy: I first heard of ROS on 4 May 2010. In the sunny metropolis of Anchorage, Alaska, at the Willow Garage booth at ICRA.

What surprises you most about the current state of ROS and/or Gazebo?

Gariepy: Running into so many people outside of the “typical” autonomous robotics fields who know what ROS is and use it.

Koenig: I’m honestly surprised that Gazebo has lasted and grown for 20 years. I did not expect a grad-school side project to transform into a tool utilized by researchers, companies, and government organizations. It’s amazing to see how Gazebo has progressed from its humble beginnings to its present-day capabilities.

What’s different about the ROS community between then and now?

Gariepy: The vast majority of contributors no longer trace their heritage back to Willow Garage, the Willow Garage PR2 beta program and internship programs, and Clearpath. Also, I no longer need to explain “open source” to investors and bankers.

Gerkey: The biggest change I’ve observed is that over the past 10 years a modern robotics industry has, at long last, taken off. We’d been telling ourselves for years and years that capable, semiautonomous robots would soon be out running around in the world, and now they finally are. And because many, perhaps most, of those robots run ROS, our community now has much greater participation from industry, which is a big shift from our original user base in academic research.

Was there a point in time when you realized ROS was reaching critical mass?

Gariepy: To be honest, I never had a “we’ve arrived” moment. Instead, I had a certainty we would get there back in 2010. Our company had done this big survey of how robotics researchers worked back when we first started, and the focus on community and user experience that Steve Cousins, Brian Gerkey, and team had built was completely different from everything which had gone before. Once we decided to switch to ROS in 2010, we never looked back.

Gerkey: For me the tipping point was May of 2012 when we hosted the first ROSCon. We were asking people to spend their weekend in the bowels of a hotel conference center talking about open-source robot software, which was, to say the least, a niche topic. I honestly had no idea whether anybody would show up. In the end we had over 200 attendees, which still amazes me today.

Why was OSRF the best idea you ever had, and why is this the worst idea you ever had?

Gariepy: Best idea: [Gestures broadly] Worst idea: It continues to rub in how far I’ve fallen as a software developer. C++17 terrifies me.

Koenig: OSRF was the brainchild of Brian; my best idea related to OSRF was tagging along with Brian, which allowed Gazebo to grow into a popular and widely used robotics simulator. It was the worst idea because now there are a lot of users of Gazebo.

What about ROS (or ROS users) scares you?

Gariepy: The ROS wiki.

Gerkey: The number of deployed robots that are still out there running long-ago EOL’d versions of ROS.

What has been your dream for OSRF/ROS/Gazebo, and have you achieved that dream? If not, why not, and if so, what’s next?

Koenig: My original dream for Gazebo was to have fun while making a useful tool for other roboticists. That dream has grown to providing a first-class simulation application that streamlines robotic development and lowers the barrier to entry into robotics. It’s a good dream because it never quite ends.

Gariepy: Even before I knew about ROS, I’ve always believed that there would never be one single company which would be the “best” in robotics in its entirety. Robotics will change the world (in the literal sense, not the Silicon Valley sense). We all need to work together. Open Robotics has made this community of developers a reality, but we still have quite a ways to go before the full potential of robotics is realized.



While exploring complex unmapped spaces is a persistent challenge for robots, plants are able to reliably accomplish this task. In this work we develop branching robots that deploy through an eversion process that mimics key features of plant growth (i.e., apical extension, branching). We show that by optimizing the design of these robots, we can successfully traverse complex terrain even in unseen instances of an environment. By simulating robot growth through a set of known training maps and evaluating performance with a reward heuristic specific to the intended application (i.e., exploration, anchoring), we optimized robot designs with a particle swarm algorithm. We show these optimization efforts transfer from training on known maps to performance on unseen maps in the same type of environment, and that the resulting designs are specialized to the environment used in training. Furthermore, we fabricated several optimized branching everting robot designs and demonstrated key aspects of their performance in hardware. Our branching designs replicated three properties found in nature: anchoring, coverage, and reachability. The branching designs were able to reach 25% more of a given space than non-branching robots, improved anchoring forces by 12.55×, and were able to hold greater than 100× their own mass (i.e., a device weighing 5 g held 575 g). We also demonstrated anchoring with a robot that held a load of over 66.7 N at an internal pressure of 50 kPa. These results show the promise of using branching vine robots for traversing complex and unmapped terrain.
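
To make the design-optimization loop concrete, here is a minimal particle swarm optimization (PSO) sketch under stated assumptions: the design vector (e.g., branch locations and angles), its dimensionality, the PSO coefficients, and the `simulate_growth` reward function are hypothetical stand-ins for the authors' simulator and task-specific reward heuristic evaluated on the training maps.

```python
# Minimal PSO sketch of the design-optimization loop described above.
# simulate_growth() is a placeholder for simulating robot growth on the
# training maps and returning the task-specific reward (coverage, anchoring).
import numpy as np

rng = np.random.default_rng(0)
DIM = 6                      # e.g., 3 branch locations + 3 branch angles (assumed)
N_PARTICLES, N_ITERS = 30, 100
W, C1, C2 = 0.7, 1.5, 1.5    # inertia, cognitive, and social coefficients (assumed)

def simulate_growth(design, training_maps):
    """Placeholder reward: grow the robot in each training map and average the score."""
    return -np.sum((design - 0.5) ** 2)   # dummy reward for illustration only

training_maps = None          # would hold the set of known training environments
pos = rng.uniform(0.0, 1.0, (N_PARTICLES, DIM))   # normalized design parameters
vel = np.zeros_like(pos)
p_best = pos.copy()
p_best_val = np.array([simulate_growth(p, training_maps) for p in pos])
g_best = p_best[np.argmax(p_best_val)]

for _ in range(N_ITERS):
    r1, r2 = rng.random((2, N_PARTICLES, DIM))
    vel = W * vel + C1 * r1 * (p_best - pos) + C2 * r2 * (g_best - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    vals = np.array([simulate_growth(p, training_maps) for p in pos])
    improved = vals > p_best_val
    p_best[improved], p_best_val[improved] = pos[improved], vals[improved]
    g_best = p_best[np.argmax(p_best_val)]   # best branching design found so far
```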
