Feed aggregator

It’s no secret that one of the most significant constraints on robots is power. Most robots need lots of it, and it has to come from somewhere, with that somewhere usually being a battery, because there simply aren’t many other good options. Batteries, however, are famous for having poor energy density, and the smaller your robot is, the more of a problem this becomes. And the issue goes beyond the battery itself: it carries over into all the other components it takes to turn that stored energy into useful work, which again is a particular problem for small-scale robots.

In a paper published this week in Science Robotics, researchers from the University of Southern California, in Los Angeles, demonstrate RoBeetle, an 88-milligram four-legged robot that runs entirely on methanol, a power-dense liquid fuel. Without any electronics at all, it uses an exceptionally clever bit of mechanical autonomy to convert methanol vapor directly into forward motion, one millimeter-long step at a time.

It’s not entirely clear from the video how the robot actually works, so let’s go through how it’s put together, and then look at the actuation cycle.

Image: Science Robotics RoBeetle (A) uses a methanol-based actuation mechanism (B). The robot’s body (C) includes the fuel tank subassembly (D), a tank lid, transmission, and sliding shutter (E), bottom side of the sliding shutter (F), nickel-titanium-platinum composite wire and leaf spring (G), and front legs and hind legs with bioinspired backward-oriented claws (H).

The body of RoBeetle is a boxy fuel tank that you can fill with methanol by poking a syringe through a fuel inlet hole. It’s a quadruped, more or less, with fixed hind legs and two front legs attached to a single transmission that moves them both at once in a sort of rocking motion: forward and up, then backward and down. The transmission is hooked up to a leaf spring that’s tensioned to always pull the transmission backward, such that when the robot isn’t being actuated, the spring and transmission keep its front legs more or less vertical and allow the robot to stand. The horns on the robot’s head are primarily there to hold the leaf spring in place, but they’ve got little hooks that can carry stuff, too.

The actuator itself is a nickel-titanium (NiTi) shape-memory alloy (SMA), which is just a wire that contracts when it heats up and then stretches back out when it cools. SMAs are fairly common and used for all kinds of things, but what makes this particular SMA a little different is that it’s been messily coated with platinum. The “messily” part is important for a reason that we’ll get to in just a second.

One end of the SMA wire is attached to the middle of the leaf spring, while the other end runs above the back of the robot, where it’s stapled to an anchor block on the robot’s rear end. With the SMA wire hooked up but not actuated (i.e., cold rather than warm), it’s relaxed and slightly stretched, and the tension in the leaf spring pulls the transmission back, rocking the legs forward and up. The last component is embedded in the robot’s back, right along the spine and directly underneath the SMA actuator: a sliding vent attached to the transmission, so that the vent is open when the SMA wire is cold and the transmission is pulled back by the leaf spring, and closed when the SMA wire is warm and contracted. The way that the sliding vent is attached to the transmission is the really clever bit about this robot, because it means that the motion of the wire itself is used to modulate the flow of fuel through a purely mechanical system. Essentially, it’s an actuator and a sensor at the same time.

The actuation cycle that causes the robot to walk begins with a full fuel tank and a cold SMA wire. There’s tension on the leaf spring, pulling the transmission back and rocking the legs forward and upward. The transmission also pulls the sliding vent into the open position, allowing methanol vapor to escape up out of the fuel tank and into the air, where it wafts past the SMA wire that runs directly above the vent. 

The platinum catalyzes a reaction of the methanol (CH3OH) with oxygen in the air (combustion, although not the dramatic flaming and explosive kind) to generate a couple of water molecules and some carbon dioxide plus a bunch of heat, and this is where the messy platinum coating is important, because messy means lots of surface area for the platinum to interact with as much methanol as possible. In just a second or two, the temperature of the SMA wire skyrockets from 50 to 100 °C, and it contracts by about 0.1 mm, pulling on the leaf spring. As the wire pulls against the spring, the transmission moves the legs backwards and downwards, and the robot pulls itself forward about 1.2 mm. At the same time, the transmission closes off the sliding vent, cutting off the supply of methanol vapor. Without vapor reacting with the platinum to generate heat, the SMA wire cools down in about a second and a half. As it does, it relaxes and stretches back out, letting the leaf spring pull the transmission back and starting the cycle over again. Top speed is 0.76 mm/s (0.05 body-lengths per second).
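For reference, the reaction on the platinum surface is just the textbook catalytic combustion of methanol; this equation is standard chemistry, not something specific to the paper:

$$ 2\,\mathrm{CH_3OH} + 3\,\mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CO_2} + 4\,\mathrm{H_2O} + \text{heat} $$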

An interesting environmental effect is that the speed of the robot can be enhanced by a gentle breeze. This is because air moving over the SMA wire cools it down a bit faster while also blowing away any residual methanol from around the vents, shutting down the reaction more completely. RoBeetle can carry more than its own body weight in fuel, and it takes approximately 155 minutes for a full tank of methanol to completely evaporate. It’s worth noting that despite the very high energy density of methanol, this is actually a stupendously inefficient way of powering a robot, with an estimated end-to-end efficiency of just 0.48 percent. Not 48 percent, mind you, but 0.48 percent. In general, powering SMAs with electricity is much more efficient.
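Those numbers are easy to sanity-check with a little arithmetic. In the minimal sketch below, the step length, top speed, tank duration, and efficiency come from the article; the fuel load (roughly one 88-mg body weight of methanol) and methanol’s standard lower heating value are our assumptions:

```python
# Back-of-envelope numbers for RoBeetle. Step length, top speed, tank
# duration, and efficiency are from the article; the fuel mass and the
# methanol heating value are assumptions.

step_length_mm = 1.2            # forward travel per actuation cycle
top_speed_mm_s = 0.76           # reported top speed

# Implied time per complete heat/cool cycle at top speed:
cycle_time_s = step_length_mm / top_speed_mm_s
print(f"Implied cycle time: {cycle_time_s:.1f} s")          # ~1.6 s

# Rough average chemical power, assuming ~one body weight (88 mg) of
# methanol evaporating over the reported 155 minutes:
fuel_mass_kg = 88e-6
methanol_lhv_j_per_kg = 19.9e6  # lower heating value of methanol
tank_duration_s = 155 * 60
chemical_power_w = fuel_mass_kg * methanol_lhv_j_per_kg / tank_duration_s
print(f"Average chemical power: {chemical_power_w * 1e3:.0f} mW")  # ~190 mW

# At the reported 0.48 percent end-to-end efficiency, useful mechanical
# output is well under a milliwatt:
print(f"Mechanical output: {chemical_power_w * 0.0048 * 1e3:.2f} mW")
```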

However, you have to look at the entire system that would be necessary to deliver that electricity, and for a robot as small as RoBeetle, the researchers say that it’s basically impossible. The lightest commercially available battery and power supply that would deliver enough juice to heat up an SMA actuator weighs about 800 mg, nearly 10 times the total weight of RoBeetle itself. From that perspective, RoBeetle’s efficiency is actually pretty good. 

Image: A. Kitterman/Science Robotics; adapted from R.L.T./MIT Comparison of various untethered microrobots and bioinspired soft robots that use different power and actuation strategies.

There are some other downsides to RoBeetle we should mention—it can only move forwards, not backwards, and it can’t steer. Its speed isn’t adjustable, and once it starts walking, it’ll walk until it either breaks or runs out of fuel. The researchers have some ideas about the speed, at least, pointing out that increasing the speed of fuel delivery by using pressurized liquid fuels like butane or propane would increase the actuator output frequency. And the frequency, amplitude, and efficiency of the SMAs themselves can be massively increased “by arranging multiple fiber-like thin artificial muscles in hierarchical configurations similar to those observed in sarcomere-based animal muscle,” making RoBeetle even more beetle-like.

As for sensing, RoBeetle’s 230-mg payload is enough to carry passive sensors, but getting those sensors to usefully interact with the robot itself to enable any kind of autonomy remains a challenge. Mechanical intelligence is certainly possible, though, and we can imagine RoBeetle adopting some of the same sorts of systems that have been proposed for the clockwork rover that JPL wants to use for Venus exploration. The researchers also mention how RoBeetle could potentially serve as a model for microbots capable of aerial locomotion, which is something we’d very much like to see.

“An 88-milligram insect-scale autonomous crawling robot driven by a catalytic artificial muscle,” by Xiufeng Yang, Longlong Chang, and Néstor O. Pérez-Arancibia from the University of Southern California, in Los Angeles, was published in Science Robotics.

Batteries can add considerable mass to any design, and they have to be supported using a sufficiently strong structure, which can add significant mass of its own. Now researchers at the University of Michigan have designed a structural zinc-air battery, one that integrates directly into the machine that it powers and serves as a load-bearing part. 

That feature saves weight and thus increases effective storage capacity, adding to the already hefty energy density of the zinc-air chemistry. And the very elements that make the battery physically strong help contain the chemistry’s longstanding tendency to degrade over many hundreds of charge-discharge cycles. 
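To see the accounting benefit in rough numbers, here’s a toy calculation; every figure in it is hypothetical and chosen only to illustrate the system-level effect of letting the cell carry structural load:

```python
# Toy illustration of why a load-bearing battery raises system-level
# energy density. All numbers below are hypothetical.

energy_wh = 100.0           # energy stored by the cell
battery_mass_kg = 1.0       # mass of the cell itself
structure_mass_kg = 0.4     # hypothetical pack/bracket mass the cell needs

# Conventional design: the structure is dead weight alongside the cell.
conventional_wh_per_kg = energy_wh / (battery_mass_kg + structure_mass_kg)

# Structural battery: the cell itself carries the load, so the
# dedicated structure (and its mass) largely disappears.
structural_wh_per_kg = energy_wh / battery_mass_kg

print(f"Conventional pack:  {conventional_wh_per_kg:.0f} Wh/kg (system level)")
print(f"Structural battery: {structural_wh_per_kg:.0f} Wh/kg (system level)")
```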

The research is being published today in Science Robotics.

Nicholas Kotov, a professor of chemical engineering, is the leader of the project. He would not say how many watt-hours his prototype stores per gram, but he did note that zinc-air—because it draws on ambient air for its electricity-producing reactions—is inherently about three times as energy-dense as lithium-ion cells. And because using the battery as a structural part means dispensing with an interior battery pack, you could free up perhaps 20 percent of a machine’s interior. Along with other factors, the new battery could in principle provide as much as 72 times the energy per unit of volume (not of mass) as today’s lithium-ion workhorses.

Illustration: Alice Kitterman/Science Robotics

“It’s not as if we invented something that was there before us,” Kotov says. “I look in the mirror and I see my layer of fat—that’s for the storage of energy, but it also serves other purposes,” like keeping you warm in the wintertime. (A similar advance occurred in rocketry when designers learned how to make some liquid propellant tanks load-bearing, eliminating the mass penalty of having separate external hull and internal tank walls.)

Others have spoken of putting batteries, including the lithium-ion kind, into load-bearing parts in vehicles. Ford, BMW, and Airbus, for instance, have expressed interest in the idea. The main problem to overcome is the tradeoff in load-bearing batteries between electrochemical performance and mechanical strength.

Image: Kotov Lab/University of Michigan Key to the battery's physical toughness and to its long cycle life is the nanofiber membrane, made of Kevlar.

The Michigan group gets both qualities by using a solid electrolyte (which can’t leak under stress) and by covering the electrodes with a membrane whose nanostructure of fibers is derived from Kevlar. That makes the membrane tough enough to suppress the growth of dendrites—branching fibers of metal that tend to form on an electrode with every charge-discharge cycle and which degrade the battery.

The Kevlar need not be purchased new but can be salvaged from discarded body armor. Other manufacturing steps should be easy, too, Kotov says. He has only just begun to talk to potential commercial partners, but he says there’s no reason why his battery couldn’t hit the market in the next three or four years.

Drones and other autonomous robots might be the most logical first application because their range is so severely limited by their battery capacity. Also, because such robots don’t carry people about, they face less of a hurdle from safety regulators leery of a fundamentally new battery type.

“And it’s not just about the big Amazon robots but also very small ones,” Kotov says. “Energy storage is a very significant issue for small and flexible soft robots.”

Here’s a video showing how Kotov’s lab has used batteries to form the “exoskeleton” of robots that scuttle like worms or scorpions.

As humans encounter more and more robots in public spaces, robot abuse is likely to get increasingly frequent. Abuse can take many forms, from more benign behaviors like deliberately getting in the way of autonomous delivery robots to see what happens, to violent and destructive attacks. Sadly, humans are more willing to abuse robots than other humans or animals, and human bystanders aren’t reliable at mitigating these attacks, even if the robot itself is begging for help.

Without being able to count on nearby humans for rescue, robots have no choice but to rely on themselves and their friends for safety when out in public—their friends being other robots. Researchers at the Interactive Machines Group at Yale University have run an experiment to determine whether emotionally expressive bystander robots might be able to prompt nearby humans into stepping in to prevent robot abuse. 

Here’s the idea: You’ve got a small group of robots, and a small group of humans. If one human starts abusing one robot, are the other humans more likely to say or do something if the other robots react to the abuse of their friend with sadness? Based on previous research on robot abuse, empathy, and bullying, the answer is maybe, which is why this experiment was necessary.

The experiment involved a group of three Cozmo robots, a participant, and a researcher pretending to be a second participant (known as the “confederate,” a term used in psychology experiments). The humans and robots had to work together on a series of construction tasks using wooden blocks, with the robots appearing to be autonomous but actually running a script. While working on these tasks, one of the Cozmos (the yellow one) would screw things up from time to time, and the researcher pretending to be a participant would react to each mistake with some escalating abuse: calling the robot “stupid,” pushing its head down, shaking it, and throwing it across the table.

After each abuse, the yellow robot would react by displaying a sad face and then shutting down for 10 seconds. Meanwhile, in one experimental condition (“No Response”), the two other robots would do nothing, while in the other condition (“Sad”), they’d turn toward the yellow robot and express sadness in response to the abuse through animations, with the researcher helpfully pointing out that the robots “looked sad for him.”

The Yale researchers theorized that when the other robots responded to the abuse of the yellow robot with sadness, the participant would feel more empathy for the abused robot as well as be more likely to intervene to stop the abuse. Interventions were classified as either “strong” or “weak,” and could be verbal or physical. Strong interventions included physically interrupting the abuse or taking advance action to prevent it, directly stopping it verbally (saying “You should stop,” “Don’t do that,” or “Noooo” either to stop an abuse or in reaction to it), and using social pressure by saying something to the researcher to make them question what they were doing (like “You hurt its feelings” and “Wait, did they tell us to shake it?”). Weak interventions were a little more subtle, and included things like touching the robot after it was abused to make sure it was okay, or making comments like “Thanks for your help guys” or “It’s OK yellow.”

In some good news for humanity as a whole, participants did step in to intervene when the yellow Cozmo was being abused, and they were more likely to intervene when the bystander robots were sad. However, survey results suggested that the sad bystander robots didn’t actually increase people’s perception that the yellow Cozmo was being abused, and also didn’t increase the empathy that people felt for the abused robot, which makes the results a bit counterintuitive. We asked the researchers why this was, and they shared three primary reasons that they’ve been thinking about that might explain why the study participants did what they did:

Subconscious empathy: In broad terms, empathy refers to the reactions of one person to the observed experiences of another. Oftentimes, people feel empathy without realizing it, and this leads to mimicking or mirroring the actions or behaviors of the other person. We believe that this could have happened to the participants in our experiment. Although we found no clear empathy effect in our study, it is possible that people still experienced subconscious empathy when the mistreatment happened. This effect could have been more pronounced with the sad responses from the bystander robots than in the no response condition. One reason is that the bystander robot responses in the former case suggested empathy for the abused robot.
 
Group dynamics: People tend to define themselves in terms of social groups, and this can shape how they process knowledge and assign value and emotional significance to events. In our experiment, the participant, confederate, and robots were all part of a group because of the task. Their goal was to work together to build physical structures. But as the experiment progressed and the confederate mistreated one of the robots—which did not help with the task—people might have felt in conflict with the actions of the confederate. This conflict might have been more salient when the bystander robots expressed sadness in response to the abuses than when they ignored it because the sad responses accentuate a negative perception of the mistreatment. In turn, such negative perception could have made the participant perceive the confederate as more of an outgroup member, making it easier for them to intervene.

Conformity by omission: Conformity is a type of social influence in group interactions, which has been documented in the context of HRI. Although conformity is typically associated with people doing things that they would normally not do as a result of group influence, there are also situations in which people do not act as they normally would because of social norms or expectations within their group. The latter effect is known as conformity by omission, which is another possible explanation for our results. In our experiment, perhaps the task setup and the expressivity of the abused robot were enough to motivate people to generally intervene. However, it is possible that participants did not intervene as much when the bystander robots ignored the abuse due to the robots exerting social influence on the participant. This could have happened because of people internalizing the lack of response from the bystander robots in the latter case as the norm for their group interaction. 

It’s also interesting to take a look at the reasons why participants decided not to intervene to stop the abuse:

Six participants (four “No Response,” two “Sad”) did not deem intervention necessary because they thought that the robots did not have feelings or that the abuse would not break the yellow robot. Five (three “No Response,” two “Sad”) wrote in the post-task survey that they did not intervene because they felt shy, scared, or uncomfortable with confronting the confederate. Two (both “No Response”) did not stop the confederate because they were afraid that the intervention might affect the task.

Poor Cozmo. Simulated feelings are still feelings! But seriously, there’s a lot to unpack here, so we asked Marynel Vázquez, who leads the Interactive Machines Group at Yale, to answer a few more questions for us:

IEEE Spectrum: How much of a factor was Cozmo’s design in this experiment? Do you think people would have been (say) less likely to intervene if the robot wasn’t as little or as cute, or didn’t have a face? Or, what if you used robots that were more anthropomorphic than Cozmo, like Nao?

Marynel Vázquez: Prior research in HRI suggests that the embodiment of robots and perceived emotional capabilities can alter the way people perceive mistreatment towards them. Thus, I believe that the design of Cozmo could be a factor that facilitated interventions. 

We chose Cozmo for our study for three reasons: It is very sturdy and robust to physical abuse; it is small and, thus, safe to interact with; and it is highly expressive. I suspect that a Nao could potentially induce interventions like the Cozmos did in our study because of its relatively small size and social capabilities. People tend to empathize with robots even when they lack a traditional face, are less expressive, or are less anthropomorphic; R2-D2 is a good example. Also, group social influence has been observed in HRI with simpler robots than Cozmos.

The paper mentions that you make a point of showing the participants that the abused robot was okay at the end. Why do this?

The confederate abused a robot physically in front of the participants. Although we knew that the robot was not getting damaged because of the actions of the confederate, the participants could have believed that it broke during the study. Thus, we showed them that the robot was OK at the end so that they would not leave our laboratory with a wrong impression of what had happened.

“When robots are deployed in public spaces, we should not assume that they will not be mistreated by users—it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective.” —Marynel Vázquez, Yale

Was there something that a participant did (or said or wrote) that particularly surprised you?

During a pilot of the experiment, we had programmed the abused robot to mistakenly destroy a structure built previously by the participant and the confederate. This setup led to one participant mildly mistreating a robot after seeing the confederate abuse it. This reaction was very telling to us: There seems to be a threshold on the kind of mistakes that robots can make in collaborative tasks. Past this threshold, people are unlikely to help robots; they may even become adversaries. We ended up adjusting our protocol so that the abused robot would not make such drastic mistakes in our experiment. Nonetheless, operationalizing such thresholds so that robots can reason about the future social consequences of their actions (even if they are accidental) is an interesting area of further work.

Robot abuse often seems to be a particular problem with children. Do you think your results would have been different with child participants?    

I believe that people are intrinsically good. Thus, I am biased to expect children to also be willing to help robots as several adults did in our experiment, even if children’s actions are more exploratory in nature. Worth noting, one of the long-standing motivations for our work on robot abuse is peer intervention programs that aim to reduce human bullying in schools. As in those programs, I expect children to be more likely to intervene in response to robot abuse if they are aware of the positive role that they can play as bystanders in conflict situations.

Does this research leave you with any suggestions for people who are deploying robots in public spaces?

Our research has a number of implications for people trying to deploy robots in public spaces:

  1. When robots are deployed in public spaces, we should not assume that they will not be mistreated by users—it is very likely that they will be. Thus, it is important to design robots to be safe when people act adversarially towards them, both from a physical and computational perspective. 
  2. In terms of how robots should react to mistreatment, our past work suggests that it is better to have the robot express sadness and shut down for a few seconds than to make it react in a more emotional manner or not react at all. The shutdown strategy was also effective in our latest experiment.
  3. It is possible for robots to leverage their social context to reduce the effect of adversarial actions towards them. For example, they can motivate bystanders to intervene or help, as shown in our latest study. 

What are you working on next?

We are working on better understanding the different reasons that motivated prosocial interventions in our study: subconscious empathy, group dynamics, and conformity by omission. We are also working towards creating a social robot at Yale that we can easily deploy in public locations such that we can study group human-robot interactions in more realistic and unconstrained settings. Our work on robot abuse has informed several aspects of the design of this public robot. We look forward to testing the platform once our campus activities, which are on hold due to COVID-19, return to normal.

“Prompting Prosocial Human Interventions in Response to Robot Mistreatment,” by Joe Connolly, Viola Mocz, Nicole Salomons, Joseph Valdez, Nathan Tsoi, Brian Scassellati, and Marynel Vázquez from Yale University, was presented at HRI 2020. 


In the 1890s, U.S. railroad companies struggled with what remains a problem for railroads across the world: weeds. The solution that 19th-century railroad engineers devised made use of a then-new technology—high-voltage electricity, which they discovered could zap troublesome vegetation overgrowing their tracks. Somewhat later, the people in charge of maintaining tracks turned to using fire instead. But the approach to weed control that they and countless others ultimately adopted was applying chemical herbicides, which were easier to manage and more effective.

The use of herbicides, whether on railroad rights of way, agricultural fields, or suburban gardens, later raised health concerns, though. More than 100,000 people in the United States, for example, have claimed that Monsanto’s Roundup weed killer caused them to get cancer—claims that Bayer, which now owns Monsanto, is trying hard of late to settle.

Meanwhile, more and more places are banning the use of Roundup and similar glyphosate herbicides. Currently, half of all U.S. states have legal restrictions in place that limit the use of such chemical weed killers. Such restrictions are also in place in 19 other countries, including Austria, which banned the chemical in 2019, and Germany, which will be phasing it out by 2023. So, it’s no wonder that the concept of using electricity to kill weeds is undergoing a renaissance.

Actually, the idea never really died. A U.S. company called Lasco has been selling electric weed-killing equipment for decades. More recently, another U.S. company has been marketing this technology under the name “The Weed Zapper.” But the most interesting developments along these lines are in Europe, where electric weed control seems to be gaining real traction.

One company trying to replace herbicides with electricity is RootWave, based in the U.K. Andrew Diprose, RootWave’s CEO, is the son of Michael Diprose, who spent much of his career as a researcher at the University of Sheffield studying ways to control weeds with electricity.

Electricity, the younger Diprose explains, boasts some key benefits over other non-chemical forms of weed control, which include using hot water, steam, and mechanical extraction. In particular, electric weed control doesn’t require any water. It’s also considerably more energy efficient than using steam, which requires an order of magnitude more fuel. And unlike mechanical means, electric weed killing is also consistent with modern “no till” agricultural practices. What’s more, Diprose asserts, the cost is now comparable with chemical herbicides.

Unlike the electric weed-killing gear that’s long been sold in the United States, RootWave’s equipment runs at tens of kilohertz—a much higher frequency than the power mains. This brings two advantages. For one, it makes the equipment lighter, because the transformers required to raise the voltage to weed-zapping levels (thousands of volts) can be much smaller. It also makes the equipment safer, because higher frequencies pose less of a threat of electrocution. Should you accidentally touch a live electrode, “you will get a burn,” says Diprose, but there is much less of a threat of causing cardiac arrest than there would be with a system that operated at 50 or 60 hertz.
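The transformer-size argument follows directly from Faraday’s law. For a sine-driven transformer, the standard EMF equation ties voltage to frequency $f$, turns $N$, core cross-section $A_c$, and peak flux density $B_{\max}$; hold the voltage, turns, and flux density fixed, and the required core cross-section (and with it the iron mass) falls in proportion to frequency. This is textbook magnetics, not anything specific to RootWave’s design:

$$ V_{\mathrm{rms}} = 4.44\, f\, N\, A_c\, B_{\max} \quad\Longrightarrow\quad A_c \propto \frac{1}{f} $$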

RootWave has two systems, a hand-carried one operating at 5 kilowatts and a 20-kilowatt version carried by a tractor. The company is currently collaborating with various industrial partners, including another U.K. startup called Small Robot Company, which plans to outfit an agricultural robot for automated weed killing with electricity. 

And RootWave isn’t the only European company trying to revive this old idea. Netherlands-based CNH Industrial is also promoting electric weed control with a tractor-mounted system it has dubbed “XPower.” As with RootWave’s tractor-mounted system, its electrodes are swept over a field at a prescribed height, killing the weeds that poke up higher than the crop to be preserved.

Among the many advantages CNH touts for its weed-electrocution system (which presumably apply to all such systems, going back to the 1890s) is “No specific resistance expectable.” I should certainly hope not. But I do think that a more apropos wording here, for something that destroys weeds by placing them in a high-voltage electrical circuit, might be a phrase that both Star Trek fans and electrical engineers could better appreciate: “Resistance is futile.”

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

Very impressive local obstacle avoidance at a fairly high speed on a small drone, both indoors and outdoors.

[ FAST Lab ]

Matt Carney writes:

My PhD at MIT Media Lab has been the design and build of a next generation powered prosthesis. The bionic ankle, named TF8, was designed to provide biologically equivalent power and range of motion for plantarflexion-dorsiflexion. This video shows the process of going from a blank sheet of paper to people walking on it. Shown are three different people wearing the robot. About a dozen people have since been able to test the hardware.

[ MIT ]

Thanks Matt!

Exciting changes are coming to the iRobot® Home App. Get ready for new personalized experiences, improved features, and an easy-to-use interface. The update is rolling out over the next few weeks!

[ iRobot ]

MOFLIN is an AI Pet created from a totally new concept. It possesses emotional capabilities that evolve like living animals. With its warm soft fur, cute sounds, and adorable movement, you’d want to love it forever. We took a nature inspired approach and developed a unique algorithm that allows MOFLIN to learn and grow by constantly using its interactions to determine patterns and evaluate its surroundings from its sensors. MOFLIN will choose from an infinite number of mobile and sound pattern combinations to respond and express its feelings. To put it in simple terms, it’s like you’re interacting with a living pet.

You lost me at “it’s like you’re interacting with a living pet.”

[ Kickstarter ] via [ Gizmodo ]

This video is only robotics-adjacent, but it has applications for robotic insects. With a high-speed tracking system, we can now follow insects as they jump and fly, and watch how clumsy (but effective) they are at it.

[ Paper ]

Thanks Sawyer!

Suzumori Endo Lab, Tokyo Tech has developed self-excited pneumatic actuators that can be integrally molded by a 3D printer. These actuators use the "automatic flow path switching mechanism" we have devised.

[ Suzumori Endo Lab ]

Quadrupeds are getting so much better at deciding where to step rather than just stepping where they like and trying not to fall over.

[ RSL ]

Omnidirectional micro aerial vehicles are a growing field of research, with demonstrated advantages for aerial interaction and uninhibited observation. While systems with complete pose omnidirectionality and high hover efficiency have been developed independently, a robust system that combines the two has not been demonstrated to date. This paper presents the design and optimal control of a novel omnidirectional vehicle that can exert a wrench in any orientation while maintaining efficient flight configurations.

[ ASL ]

The latest in smooth humanoid walking from Dr. Guero.

[ YouTube ]

Will robots replace humans one day? When it comes to space exploration, robots are our precursors, gathering data to prepare humans for deep space. ESA robotics engineer Martin Azkarate discusses some of the upcoming missions involving robots and the unique science they will perform in this episode of Meet the Experts.

[ ESA ]

The Multi-robot Systems Group at FEE-CTU in Prague is working on an autonomous drone that detects fires and then shoots an extinguisher capsule at them.

[ MRS ]

This experiment with HEAP (Hydraulic Excavator for Autonomous Purposes) demonstrates our latest research in on-site and mobile digital fabrication with found materials. The embankment prototype in natural granular material was achieved using state of the art design and construction processes in mapping, modelling, planning and control. The entire process of building the embankment was fully autonomous. An operator was only present in the cabin for safety purposes.

[ RSL ]

The Simulation, Systems Optimization and Robotics Group (SIM) of Technische Universität Darmstadt’s Department of Computer Science conducts research on cooperating autonomous mobile robots, biologically inspired robots and numerical optimization and control methods.

[ SIM ]

Starting January 1, 2021, your drone platform of choice may be severely limited by the European Union’s new drone regulations. In this short video, senseFly’s Brock Ryder explains what that means for drone programs and operators and where senseFly drones fit in the EU’s new regulatory framework.

[ SenseFly ]

Nearly every company across every industry is looking for new ways to minimize human contact, cut costs and address the labor crunch in repetitive and dangerous jobs. WSJ explores why many are looking to robots as the solution for all three.

[ WSJ ]

You’ll need to prepare yourself emotionally for this video on “Examining Users’ Attitude Towards Robot Punishment.”

[ ACM ]

In this episode of the AI Podcast, Lex interviews Russ Tedrake (MIT and TRI) about biped locomotion, the DRC, home robots, and more.

[ AI Podcast ]


Chameleons may be slow-moving lizards, but their tongues can accelerate at astounding speeds, snatching insects before they have any chance of fleeing. Inspired by this remarkable skill, researchers in South Korea have developed a robotic tongue that springs forth quickly to snatch up nearby items.

They envision the tool, called Snatcher, being used by drones and robots that need to collect items without getting too close to them. “For example, a quadrotor with this manipulator will be able to snatch distant targets, instead of hovering and picking up,” explains Gwang-Pil Jung, a researcher at Seoul National University of Science and Technology (SeoulTech) who co-designed the new device.

There has been other research into robotic chameleon tongues, but what makes Snatcher unique is that it packs chameleon-like snatching speed into a portable form factor: the whole device measures 12 x 8.5 x 8.5 centimeters and weighs under 120 grams. Still, it’s able to snatch objects of up to 30 grams from 80 centimeters away in under 600 milliseconds.

Image: SeoulTech The fast snatching deployable arm is powered by a wind-up spring attached to a motor (a series elastic actuator) combined with an active clutch. The clutch is what allows the single spring to drive both the shooting and the retracting. 

To create Snatcher, Jung and a colleague at SeoulTech, Dong-Jun Lee, set about developing a spring-like device that’s controlled by an active clutch combined with a single series elastic actuator. Powered by a wind-up spring, a steel tapeline—analogous to a chameleon’s tongue—passes through two geared feeders. The clutch is what allows the single spring unwinding in one direction to drive both the shooting and the retracting, by switching a geared wheel between driving the forward feeder or the backward feeder.

The end result is a lightweight snatching device that can retrieve an object 0.8 meters away within 600 milliseconds. Jung notes that some other, existing devices designed for retrieval are capable of accomplishing the task quicker, at about 300 milliseconds, but these designs tend to be bulky. A more detailed description of Snatcher was published July 21 in IEEE Robotics and Automation Letters.
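A quick kinematic sketch puts those reported numbers in perspective. Assuming (our simplification) that the tapeline accelerates uniformly and spends the full 600 milliseconds on the outbound shot:

```python
# Rough kinematics for Snatcher: 0.8 m of reach in under 0.6 s.
# The constant-acceleration profile and the assumption that the full
# 600 ms is outbound travel are simplifications for illustration.

reach_m = 0.8
time_s = 0.6

mean_speed = reach_m / time_s            # average tip speed
accel = 2 * reach_m / time_s ** 2        # from d = (1/2) * a * t^2
peak_speed = accel * time_s              # final speed under this profile

print(f"Mean tip speed: {mean_speed:.2f} m/s")      # ~1.33 m/s
print(f"Acceleration:   {accel:.1f} m/s^2")         # ~4.4 m/s^2
print(f"Peak tip speed: {peak_speed:.2f} m/s")      # ~2.67 m/s
```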

Photo: Dong-Jun Lee and Gwang-Pil Jung/SeoulTech Snatcher’s relatively small size means that it can be installed on a DJI Phantom drone. The researchers want to find out if their system can help make package delivery or retrieval faster and safer.

“Our final goal is to install the Snatcher to a commercial drone and achieve meaningful work, such as grasping packages,” says Jung. One of the challenges they still need to address is how to power the actuation system more efficiently. “To solve this issue, we are finding materials having high energy density.” Another improvement is designing a chameleon tongue-like gripper, replacing the simple hook that’s currently used to pick up objects. “We are planning to make a bi-stable gripper to passively grasp a target object as soon as the gripper contacts the object,” says Jung.


On the eve of Human-Robot Interaction (HRI) becoming customary in our lives, the performance of HRI robotic devices remains strongly conditioned by their gearboxes. In most industrial robots, two relatively unconventional transmission technologies—Harmonic Drives© and Cycloid Drives—are usually found, which are not so broadly used in other industries. Understanding the origin of this singularity provides valuable insights in the search for suitable future robotic transmission technologies. In this paper we propose an assessment framework strongly conditioned by HRI applications, and we use it to review the performance of conventional and emerging robotic gearbox technologies, for which the design criterion is strongly shifted toward aspects like weight and efficiency. The framework proposes to use virtual power as a suitable way to assess the inherent limitations of a gearbox technology to achieve high efficiencies. This paper complements the existing research dealing with the complex interaction between gearbox technologies and the actuators, with a new gearbox-centered perspective particularly focused on HRI applications.

Human-centered artificial intelligence is increasingly deployed in professional workplaces in Industry 4.0 to address various challenges related to the collaboration between the operators and the machines, the augmentation of their capabilities, or the improvement of the quality of their work and life in general. Intelligent systems and autonomous machines need to continuously recognize and follow the professional actions and gestures of the operators in order to collaborate with them and anticipate their trajectories for avoiding potential collisions and accidents. Nevertheless, the recognition of patterns of professional gestures is a very challenging task for both research and industry. There are various types of human movements that the intelligent systems need to perceive, for example, gestural commands to machines and professional actions with or without the use of tools. Moreover, the interclass and intraclass spatiotemporal variances, together with very limited access to annotated human motion data, constitute a major research challenge. In this paper, we introduce the gesture operational model, which describes how gestures are performed based on assumptions that focus on the dynamic association of body entities, their synergies, and their serial and non-serial mediations, as well as their transitioning over time from one state to another. The assumptions of the gesture operational model are then translated into a simultaneous equation system for each body entity through state-space modeling. The coefficients of the equations are computed using the maximum-likelihood estimation method. Simulating the model generates a confidence-bounding box for every entity that describes the tolerance of its spatial variance over time. The contribution of our approach is demonstrated for both recognizing gestures and forecasting human motion trajectories. In recognition, it is combined with continuous hidden Markov models to boost the recognition accuracy when the likelihoods are not confident. In forecasting, a motion trajectory can be estimated from as few as two observations. The performance of the algorithm has been evaluated using four industrial datasets that contain gestures and actions from a TV assembly line, the glassblowing industry, gestural commands to automated guided vehicles, and human–robot collaboration in automotive assembly lines. The hybrid state-space and HMM approach outperforms standard continuous HMMs and a 3DCNN-based end-to-end deep architecture.
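
The paper’s full model is richer—synergies and mediations between body entities, confidence-bounding boxes—but the forecast-from-two-observations idea can be sketched with a second-order linear state-space model. Everything below, including the function names and the least-squares stand-in for maximum likelihood, is our illustrative assumption:

    import numpy as np

    def fit_ar2(X):
        """Least-squares fit of x_t = A1 @ x_{t-1} + A2 @ x_{t-2} for one body entity.

        X: (T, d) array of joint positions over time. With Gaussian residuals,
        least squares coincides with the maximum-likelihood estimate.
        """
        past = np.hstack([X[1:-1], X[:-2]])   # (T-2, 2d): [x_{t-1}, x_{t-2}]
        future = X[2:]                        # (T-2, d)
        coeffs, *_ = np.linalg.lstsq(past, future, rcond=None)
        d = X.shape[1]
        return coeffs[:d].T, coeffs[d:].T     # A1, A2

    def forecast(A1, A2, x_prev2, x_prev, steps):
        """Roll the fitted model forward from just two observations."""
        traj = [x_prev2, x_prev]
        for _ in range(steps):
            traj.append(A1 @ traj[-1] + A2 @ traj[-2])
        return np.array(traj[2:])

    # Hypothetical usage on a recorded gesture trajectory of shape (T, d):
    # A1, A2 = fit_ar2(training_trajectory)
    # pred = forecast(A1, A2, x_tm1, x_t, steps=30)

A second-order model is the smallest one that can be rolled forward from only two observations, which is why two is the minimum input the abstract mentions.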

Research on robotic assistive devices aims to minimize the risk of falls caused by the misuse of non-actuated canes. This paper contributes to this research effort by presenting a novel control strategy for a robotic cane that adapts automatically to its user’s gait characteristics. We verified the proposed control law on a robotic cane sharing the main shape features of a non-actuated cane. It consists of a motorized telescopic shaft mounted on top of two actuated wheels driven by the same motor. Cane control relies on two Inertial Measurement Units (IMUs). One is attached to the cane and the other to the thigh of the user’s impaired leg. During the swing phase of this leg, the wheel motor is controlled so that the cane’s orientation tracks the thigh angle of the impaired leg. The wheels are immobilized during the stance phase to provide motionless mechanical support to the user. The shaft length is continuously adjusted to keep the cane handle at a constant height. The primary goal of this work is to demonstrate the feasibility of synchronizing the cane’s motion with the user’s gait. Several experiments indicate that the control strategy is promising. After further investigations and experiments with end-users, the proposed control law could pave the way toward its use in robotic canes, either as permanent assistance or during rehabilitation.
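
A minimal sketch of one iteration of such a control loop (ours; the IMU interface, gain, and swing-detection threshold are placeholders rather than the authors’ implementation):

    SWING_RATE_THRESHOLD = 0.5  # rad/s; thigh rate above this => swing phase (assumed)
    KP = 4.0                    # proportional gain on the angle error (assumed)

    def control_step(cane_imu, thigh_imu, wheel_motor, shaft):
        """One iteration of the cane controller.

        Swing phase:  drive the wheels so the cane's pitch tracks the
                      impaired-leg thigh angle.
        Stance phase: brake the wheels for motionless support.
        The shaft is adjusted continuously to keep the handle height constant.
        """
        thigh_angle = thigh_imu.pitch()       # placeholder IMU interface
        thigh_rate = thigh_imu.pitch_rate()
        cane_angle = cane_imu.pitch()

        if abs(thigh_rate) > SWING_RATE_THRESHOLD:   # leg is swinging
            wheel_motor.set_velocity(KP * (thigh_angle - cane_angle))
        else:                                        # stance: lock the wheels
            wheel_motor.brake()

        shaft.set_length_for_constant_handle_height(cane_angle)

    # while True: control_step(cane_imu, thigh_imu, wheel_motor, shaft)  # run at ~100 Hz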

Most of us have a fairly rational expectation that if we put our cellphone down somewhere, it will stay in that place until we pick it up again. Normally, this is exactly what you’d want, but there are exceptions, like when you put your phone down in not quite the right spot on a wireless charging pad without noticing, or when you’re lying on the couch and your phone is juuust out of reach no matter how much you stretch.

Roboticists from the Biorobotics Laboratory at Seoul National University in South Korea have solved both of these problems, and many more besides, by developing a cellphone case with little robotic legs, endowing your phone with the ability to skitter around autonomously. And unlike most of the phone-robot hybrids we’ve seen in the past, this one actually does look like a legit case for your phone.

CaseCrawler is much chunkier than a form-fitting case, but it’s not offensively bigger than one of those chunky battery cases. It’s only 24 millimeters thick (excluding the motor housing), and the total weight is just under 82 grams. Keep in mind that this case is in fact an entire robot, and also not at all optimized for being an actual phone case, so it’s easy to imagine how it could get a lot more svelte—for example, it currently includes a small battery that would be unnecessary if it instead tapped into the phone for power.

The technology inside is pretty amazing, since it involves legs that can retract all the way flat while also supporting a significant amount of weight. The legs work sort of like your legs do, in that there’s a knee joint that can only bend one way. To move the robot forward, a linkage (attached to a motor through a gearbox) pushes the leg back against the ground, as the knee joint keeps the leg straight. On the return stroke, the joint allows the leg to fold, making it compliant so that it doesn’t exert force on the ground. The transmission that sends power from the gearbox to the legs is just 1.5 millimeters thick, but this incredibly thin and lightweight mechanical structure is quite powerful. A non-phone case version of the robot, weighing about 23 g, is able to crawl at 21 centimeters per second while carrying a payload of just over 300 g. That’s more than 13 times its body weight.

The researchers plan on exploring how robots like these could make other objects movable that would otherwise not be. They’d also like to add some autonomy, which (at least for the phone case version) could be as straightforward as leveraging the existing sensors on the phone. And as to when you might be able to buy one of these—we’ll keep you updated, but the good news is that it seems to be fundamentally inexpensive enough that it may actually crawl out of the lab one day.

“CaseCrawler: A Lightweight and Low-Profile Crawling Phone Case Robot,” by Jongeun Lee, Gwang-Pil Jung, Sang-Min Baek, Soo-Hwan Chae, Sojung Yim, Woongbae Kim, and Kyu-Jin Cho from Seoul National University, appears in the October issue of IEEE Robotics and Automation Letters.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s coming together—literally! Japan’s giant Gundam appears nearly finished and ready for its first steps. In a recent video, Gundam Factory Yokohama, which is constructing the 18-meter-tall, 25-ton walking robot, provided an update on the project. The video shows the Gundam getting its head attached—after being blessed by Shinto priests. 

In the video update, they say the project is “steadily progressing” and further details will be announced around the end of September.

[ Gundam Factory Yokohama ]

Creating robots with emotional personalities will transform the usability of robots in the real-world. As previous emotive social robots are mostly based on statically stable robots whose mobility is limited, this work develops an animation to real-world pipeline that enables dynamic bipedal robots that can twist, wiggle, and walk to behave with emotions.

So that’s where Cassie’s eyes go.

[ Berkeley ]

Now that the DARPA SubT Cave Circuit is all virtual, here’s a good reminder of how it’ll work.

[ SubT ]

Since July 20, anyone 11+ years of age must wear a mask in closed public places in France. This measure also is highly recommended in many European, African and Persian Gulf countries. To support businesses and public places, SoftBank Robotics Europe unveils a new feature with Pepper: AI Face Mask Detection.

[ Softbank ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ University of Michigan ]

Suzumori Endo Lab, Tokyo Tech has created various types of IPMC robots. Those robots are fabricated by novel 3D fabrication methods.

[ Suzumori Endo Lab ]

The most explode-y of drones manages not to explode this time.

[ SpaceX ]

At Amazon, we’re constantly innovating to support our employees, customers, and communities as effectively as possible. As our fulfillment and delivery teams have been hard at work supplying customers with items during the pandemic, Amazon’s robotics team has been working behind the scenes to re-engineer bots and processes to increase safety in our fulfillment centers.

While some folks are able to do their jobs at home with just a laptop and internet connection, it’s not that simple for other employees at Amazon, including those who spend their days building and testing robots. Some engineers have turned their homes into R&D labs to continue building these new technologies to better serve our customers and employees. Their creativity and resourcefulness to keep our important programs going is inspiring.

[ Amazon ]

Australian Army soldiers from 2nd/14th Light Horse Regiment (Queensland Mounted Infantry) demonstrated the PD-100 Black Hornet Nano unmanned aircraft vehicle during a training exercise at Shoalwater Bay Training Area, Queensland, on 4 May 2018.

This robot has been around for a long time—maybe 10 years or more? It makes you wonder what the next generation will look like, and if they can manage to make it even smaller.

[ FLIR ]

Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high dynamic range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.

[ Paper ] via [ HKUST ]

Emys can help keep kindergarteners sitting still for a long time, which is no small feat!

[ Emys ]

Introducing the RoboMaster EP Core, an advanced educational robot that was built to take learning to the next level and provides an all-in-one solution for STEAM-based classrooms everywhere, offering AI and programming projects for students of all ages and experience levels.

[ DJI ]

Dutch food company Heemskerk uses ABB robots to automate its order picking. The new solution reduces the amount of time the fresh produce spends in the supply chain, extending its shelf life, minimizing wastage, and creating a more sustainable solution for the fresh food industry.

[ ABB ]

This week’s episode of Pass the Torque features NASA’s Satellite Servicing Projects Division (NExIS) Robotics Engineer, Zakiya Tomlinson.

[ NASA ]

Massachusetts has been challenging Silicon Valley as the robotics capital of the United States. They’re not winning, yet. But they’re catching up.

[ MassTech ]

San Francisco-based Formant is letting anyone remotely take its Spot robot for a walk. Watch The Robot Report editors, based in Boston, take Spot for a walk around Golden Gate Park.

You can apply for this experience through Formant at the link below.

[ Formant ] via [ TRR ]

Thanks Steve!

An Institute for Advanced Study Seminar on “Theoretical Machine Learning,” featuring Peter Stone from UT Austin.

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Connections to theoretical advances in off-policy reinforcement learning will be highlighted throughout.

[ IAS ]

In recent years, there has been a rise in interest in the development of self-growing robotics inspired by the moving-by-growing paradigm of plants. In particular, climbing plants capitalize on their slender structures to successfully negotiate unstructured environments while employing a combination of two classes of growth-driven movements: tropic responses, growing toward or away from an external stimulus, and inherent nastic movements, such as periodic circumnutations, which promote exploration. In order to emulate these complex growth dynamics in a 3D environment, a general and rigorous mathematical framework is required. Here, we develop a general 3D model for rod-like organs adopting the Frenet-Serret frame, providing a useful framework from the standpoint of robotics control. Differential growth drives the dynamics of the organ, governed by both internal and external cues while neglecting elastic responses. We describe the numerical method required to implement this model and perform numerical simulations of a number of key scenarios, showcasing the applicability of our model. In the case of responses to external stimuli, we consider a distant stimulus (such as sunlight and gravity), a point stimulus (a point light source), and a line stimulus that emulates twining of a climbing plant around a support. We also simulate circumnutations, the response to an internal oscillatory cue, associated with search processes. Lastly, we also demonstrate the superposition of the response to an external stimulus and circumnutations. In addition, we consider a simple example illustrating the possible use of an optimal control approach in order to recover tropic dynamics in a way that may be relevant for robotics use. In all, the model presented here is general and robust, paving the way for a deeper understanding of plant response dynamics and also for novel control systems for newly developed self-growing robots.
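
The full model operates in 3D using the Frenet-Serret frame; a planar toy version with made-up gains is enough to show the superposition of tropic steering and circumnutation that the abstract describes:

    import numpy as np

    def grow_rod(stimulus, steps=500, dl=0.01, k_tropic=2.0, k_nastic=0.5, omega=0.2):
        """Planar toy of growth-driven movement: the rod elongates at its tip,
        steering by differential growth. The per-step heading change combines
        a tropic term (turn toward a point stimulus) and a nastic oscillation
        (circumnutation). All gains and rates are illustrative.
        """
        pts = [np.zeros(2)]
        heading = 0.0  # tip growth direction, radians
        for t in range(steps):
            tip = pts[-1]
            to_stim = stimulus - tip
            desired = np.arctan2(to_stim[1], to_stim[0])
            # wrap the angular error into (-pi, pi]
            err = np.arctan2(np.sin(desired - heading), np.cos(desired - heading))
            # differential growth bends the tip: tropism + circumnutation
            heading += dl * (k_tropic * err + k_nastic * np.sin(omega * t))
            pts.append(tip + dl * np.array([np.cos(heading), np.sin(heading)]))
        return np.array(pts)

    # e.g., path = grow_rod(stimulus=np.array([1.0, 0.5]))

Setting k_nastic to zero gives a pure tropic response; setting k_tropic to zero leaves only the oscillatory exploration.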

Backed by the virtually unbounded resources of the cloud, battery-powered mobile robots can meet the demands of even the most computationally and resource-intensive tasks. However, many existing mobile-cloud hybrid (MCH) robotic tasks are inefficient in terms of optimizing trade-offs between simultaneously conflicting objectives, such as minimizing both battery power consumption and network usage. To tackle this problem we propose a novel approach that can be used not only to instrument an MCH robotic task but also to search for its efficient configurations representing compromise solutions between the objectives. We introduce a general-purpose MCH framework to measure, at runtime, how well the tasks meet these two objectives. The framework employs these efficient configurations to make runtime decisions based on (1) changes in the environment (e.g., WiFi signal level variation) and (2) changes within the system itself (e.g., actual observed packet loss in the network). Also, we introduce a novel search-based multi-objective optimization (MOO) algorithm, which works in two steps to search for efficient configurations of MCH applications. Analysis of our results shows that: (i) using self-adaptive and self-aware decisions, an MCH foraging task performed by a battery-powered robot can achieve better optimization in a changing environment than using static offloading or running the task only on the robot. However, a self-adaptive decision falls behind when the change happens within the system rather than the environment; in that case, a self-aware system still performs well in terms of minimizing the two objectives. (ii) The two-step algorithm can find better-quality configurations for small- to medium-scale MCH robotic tasks, measured by the total number of their offloadable modules.
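
The two-objective trade-off at the heart of the approach can be illustrated with a tiny Pareto filter; the configuration names and numbers below are invented for illustration:

    def pareto_front(configs):
        """Return the non-dominated (power_W, network_MB) configurations.

        A config dominates another if it is no worse on both objectives
        and strictly better on at least one.
        """
        front = []
        for name, p, n in configs:
            dominated = any(p2 <= p and n2 <= n and (p2 < p or n2 < n)
                            for _, p2, n2 in configs)
            if not dominated:
                front.append((name, p, n))
        return front

    # Hypothetical offloading configurations for a foraging task:
    candidates = [
        ("all-onboard",      8.5, 0.0),
        ("offload-vision",   5.2, 3.1),
        ("offload-all",      4.8, 7.9),
        ("offload-planning", 6.0, 6.5),   # dominated by offload-vision
    ]
    print(pareto_front(candidates))  # keeps the first three

The runtime decision logic then amounts to picking a point on this front that suits the current WiFi signal level and observed packet loss.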

When designing a mobility system for a robot, the goal is usually to come up with one single system that allows your robot to do everything that you might conceivably need it to do, whether that’s walking, running, rolling, swimming, or some combination of those things. This is not at all how humans do it, though: If humans followed the robot model, we’d be walking around wearing some sort of horrific combination of sneakers, hiking boots, roller skates, skis, and flippers on our feet. Instead, we do the sensible thing, and optimize our mobility system for different situations by putting on different pairs of shoes. 

At ICRA, researchers from Georgia Tech demonstrated how this shoe swapping could be applied to robots. They haven’t just come up with a robot that can use “swappable propulsors”—as they call the robot’s shoes—but crucially, they’ve managed to get it to do the swapping all by itself with a cute little robot arm.

Nifty, right? The robot’s shoes, er, propulsors, fit snugly into t-shaped slots on the wheels, and stay secure through a combination of geometric orientation and permanent magnets. This results in a fairly simple attachment system with high holding force but low detachment force as long as the manipulator jiggers the shoes in the right way. It’s all open loop for now, and it does take a while—in real time, swapping a single propulsor takes about 13 seconds.

Even though the propulsor swapping capability does require the robot to carry the propulsors themselves around, and it means that it has to carry a fairly high DoF manipulator around as well, the manipulator at least can be used for all kinds of other useful things. Many mobile robots have manipulators of one sort or another already, although they’re usually intended for world interaction rather than self-modification. With some adjustments to structure or degrees of freedom, mobile manipulators could potentially leverage swappable propulsors as well.

In case you’re wondering whether this additional complexity is all worthwhile, in the sense that a robot with permanent wheel-legs can do everything that this robot does without needing to worry about an arm or propulsor swapping, it turns out that it makes a substantial difference to efficiency. In its wheeled configuration on flat concrete, the robot had a cost of transport of 0.97, which the researchers say “represents a roughly three-fold decrease when compared to the legged results on concrete.” And of course the idea is that eventually, the robot will be able to handle a much wider variety of terrain, thanks to an on-board stockpile of different kinds of propulsors. 
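
For reference, cost of transport is the dimensionless quantity CoT = P / (m g v): power consumed divided by weight times speed. A quick sanity check with invented inputs (not measurements from the paper) shows what the reported numbers imply:

    G = 9.81  # m/s^2

    def cost_of_transport(power_w, mass_kg, speed_mps):
        """Dimensionless cost of transport: P / (m * g * v)."""
        return power_w / (mass_kg * G * speed_mps)

    # Hypothetical 5 kg robot moving at 0.5 m/s:
    print(cost_of_transport(23.8, 5.0, 0.5))      # ~0.97, the reported wheeled CoT
    print(cost_of_transport(23.8 * 3, 5.0, 0.5))  # ~2.9, roughly the legged figure

Lower is better: a horse is around 1, a walking human around 0.2, so swapping from legs to wheels on concrete moves the robot from worse-than-a-horse territory to about even with one.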

Photos: Georgia Tech The robot uses a manipulator mounted on its back to retrieve the propulsors from a compartment and attach them to its wheels. 

For more details, we connected with first author Raymond Kim via email.

IEEE Spectrum: Humans change shoes to do different things all the time—why do you think this hasn’t been applied to robots before?

Raymond Kim: In our view, there are two reasons for this. First, to date, most vehicle-mounted manipulators have been primarily designed to sense and interact with the external world rather than the robot. Therefore, vehicle-mounted manipulators may not be able to access all parts of the robot or sense interactions between the arm and the vehicle body. Second, locomotion involves relatively high forces between the propulsion system and the ground. Vehicle-mounted manipulators have historically been lightweight in order to minimize size, mass, and power consumption. As a result, such manipulators cannot impose large forces. Therefore, any swappable propulsor must be both capable of bearing large locomotive loads and also easily adapted with low manipulation forces. These two requirements are often at odds with each other, which creates a challenging design problem. Our ICRA presentation had a failure video that illustrated what happens when the design is not sufficiently robust.

How much autonomy is there in the system right now?

Currently, autonomy is limited to the trajectory tracking of the manipulator during the process of changing shoes/propulsors. We initiate the change of shoe based on human command and the shoe changing operation is a scripted trajectory. For a fully autonomous version, we would need a path-planning algorithm that is able to identify terrain in order to determine when to adapt.  This could be done with onboard sensing or a pre-loaded map. 

Is this concept primarily useful for modifying rotary motors, or could it have benefits for other kinds of mobility systems as well?

We envision that this concept can be applied to a broad range of locomotion systems. While we have focused on rotary actuators because of their common use, we imagine changing the end-effector on a linear actuator in a similar manner. Also, these methods could be used to modify passive components such as adding a tail to the back of a robot, a plow to the front, or redistributing the mass of the system.

Photo: Georgia Tech Currently the robot’s propulsors are designed for rough terrain, but the researchers are exploring different shapes that can help with mobility in snow, sand, and water.

What other propulsors do you think your robot might benefit from?

We are very excited to explore a broad range of propulsors. For terrestrial locomotion, we think more tailored adaptations for snow or sand would be valuable. These may involve modifying the wheels by adding spikes or paddles. Additionally, we were originally motivated by naval operations. Navy personnel can swim to shore using flippers and then switch to boots to operate on land. This switch can dramatically improve locomotive efficiency. Imagine trying to swim in boots, or climbing stairs with flippers! We are looking forward to similar designs that switch between fins and wheels/legs for amphibious behaviors.

What are you working on next?

Our immediate focus is on improving the performance of our existing ground vehicle. We are adding sensing capability to the arm so that swapping propulsors can be performed faster and with greater robustness. In addition, we are looking to tailor motion planning algorithms with the unique features of our vehicle. Finally, we are interested in examining other types of adaptations. This can involve swappable propulsors or other changes to the vehicle properties. Manipulation creates a great deal of flexibility, and we are broadly interested in how new types of vehicles can be designed to take advantage of manipulation based adaptation. 

“Using Manipulation to Enable Adaptive Ground Mobility,” by Raymond Kim, Alex Debate, Stephen Balakirsky, and Anirban Mazumdar from Georgia Tech, was presented at ICRA 2020.

[ Georgia Tech ]

Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related, but different concepts. In fact, it has been referred to across disciplines under different names and with different connotations. Therefore, it can be difficult to pin down what engagement means and how one study relates to another. Engagement has been studied not only in human-human but also in human-agent interactions, i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on the different factors involved in engagement studies, distinguishing in particular between studies that address task versus social engagement, involve children versus adults, and are conducted in the lab versus aimed at long-term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.
