Feed aggregator



Facebook, or Meta as it's now calling itself for some reason that I don't entirely understand, is today announcing some new tactile sensing hardware for robots. Or, new-ish, at least—there's a ruggedized and ultra low-cost GelSight-style fingertip sensor, plus a nifty new kind of tactile sensing skin based on suspended magnetic particles and machine learning. It's cool stuff, but why?

Obviously, Facebook Meta cares about AI, because it uses AI to try and do a whole bunch of the things that it's unwilling or unable to devote the time of actual humans to. And to be fair, there are some things that AI may be better at (or at least more efficient at) than humans are. AI is of course much worse than humans at many, many, many things as well, but that debate goes well beyond Facebook Meta and certainly well beyond the scope of this article, which is about tactile sensing for robots. So why does Facebook Meta care even a little bit about making robots better at touching stuff? Yann LeCun, the Chief AI Scientist at Facebook Meta, takes a crack at explaining it:

Before I joined Facebook, I was chatting with Mark Zuckerberg and I asked him, "is there any area related to AI that you think we shouldn't be working on?" And he said, "I can't find any good reason for us to work on robotics." And so, that was kind of the start of Facebook AI Research—we were not going to work on robotics.

After a few years, it became clear that a lot of interesting progress in AI was happening in the context of robotics, because robotics is the nexus of where people in AI research are trying to get the full loop of perception, reasoning, planning, and action, and getting feedback from the environment. Doing it in the real world is where the problems are concentrated, and you can't play games if you want robots to learn quickly.

It was clear that four or five years ago, there was no business reason to work on robotics, but the business reasons have kind of popped up. Robotics could be used for telepresence, for maintaining data centers more automatically, but the more important aspect of it is making progress towards intelligent agents, the kinds of things that could be used in the metaverse, in augmented reality, and in virtual reality. That's really one of the raison d'être of a research lab, to foresee the domains that will be important in the future. So that's the motivation.

Well, okay, but none of that seems like a good justification for research into tactile sensing specifically. But according to LeCun, it's all about putting together the pieces required for some level of fundamental world understanding, a problem that robotic systems are still bad at and that machine learning has so far not been able to tackle:

How to get machines to learn that model of the world that allows them to predict in advance and plan what's going to happen as a consequence of their actions is really the crux of the problem here. And this is something you have to confront if you work on robotics. But it's also something you have to confront if you want to have an intelligent agent acting in a virtual environment that can interact with humans in a natural way. And one of the long-term visions of augmented reality, for example, is virtual agents that basically are with you all the time, living in your augmented reality glasses or your smartphone or your laptop or whatever, helping you in your daily life as a human assistant would do, but also can answer any question you have. And that system will have to have some degree of understanding of how the world works—some degree of common sense, and be smart enough to not be frustrating to talk to. And that is where all of this research leads in the long run, whether the environment is real or virtual.

AI systems (robots included) are very very dumb in very very specific ways, quite often the ways in which humans are least understanding and forgiving of. This is such a well established thing that there's a name for it: Moravec's paradox. Humans are great at subconscious levels of world understanding that we've built up over years and years of experience being, you know, alive. AI systems have none of this, and there isn't necessarily a clear path to getting them there, but one potential approach is to start with the fundamentals in the same way that a shiny new human does and build from there, a process that must necessarily include touch.

The DIGIT touch sensor is based on the GelSight style of sensor, which was first conceptualized at MIT over a decade ago. The basic concept of these kinds of tactile sensors is that they're able to essentially convert a touch problem into a vision problem: an array of LEDs illuminate a squishy finger pad from the back, and when the squishy finger pad pushes against something with texture, that texture squishes through to the other side of the finger pad where it's illuminated from many different angles by the LEDs. A camera up inside of the finger takes video of this, resulting in a very rainbow but very detailed picture of whatever the finger pad is squishing against.
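
To make the touch-as-vision idea concrete, here's a minimal sketch of how one might turn frames from a GelSight-style sensor into a rough contact map, assuming the gel's internal camera simply enumerates as an ordinary video device. This is an illustration of the principle, not Meta's DIGIT software stack.

```python
# Hypothetical sketch of the touch-as-vision idea: treat per-pixel differences
# from a no-contact reference image as gel deformation. Device index 0 and the
# threshold value are assumptions, not part of any official DIGIT tooling.
import cv2
import numpy as np

def grab_gray(cap: cv2.VideoCapture) -> np.ndarray:
    ok, frame = cap.read()
    if not ok:
        raise RuntimeError("could not read a frame from the sensor camera")
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)

cap = cv2.VideoCapture(0)        # the finger pad's internal camera, if it shows up as one
reference = grab_gray(cap)       # baseline image: nothing touching the gel

while True:
    frame = grab_gray(cap)
    # Pixels that differ from the baseline are where the gel is being squished.
    contact_map = cv2.GaussianBlur(np.abs(frame - reference), (5, 5), 0)
    touched = contact_map > 25.0                     # crude intensity threshold
    print(f"contact covers {touched.mean():.1%} of the pad")
    cv2.imshow("contact map", contact_map / 255.0)   # brighter = more deformation
    if cv2.waitKey(30) == 27:                        # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

In an actual GelSight pipeline, photometric stereo across the differently colored illumination directions is what recovers surface geometry; the simple difference image above only flags where the gel is deforming.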

The DIGIT paper published last year summarizes the differences between this new sensor and previous versions of GelSight:

DIGIT improves over existing GelSight sensors in several ways: by providing a more compact form factor that can be used on multi-finger hands, improving the durability of the elastomer gel, and making design changes that facilitate large-scale, repeatable production of the sensor hardware to facilitate tactile sensing research.

DIGIT is open source, so you can make one on your own, but that's a hassle. The really big news here is that GelSight itself (an MIT spinoff which commercialized the original technology) will be commercially manufacturing DIGIT sensors, providing a standardized and low-cost option for tactile sensing. The bill of materials for each DIGIT sensor is about US $15 if you were to make a thousand of them, so we're expecting that the commercial version won't cost much more than that.

The other hardware announcement is ReSkin, a tactile sensing skin developed in collaboration with Carnegie Mellon. Like DIGIT, the idea is to make an open source, robust, and very low cost system that will allow researchers to focus on developing the software to help robots make sense of touch rather than having to waste time on their own hardware.

ReSkin operates on a fairly simple concept: it's a flexible sheet of 2mm thick silicone with magnetic particles carelessly mixed in. The sheet sits on top of a magnetometer, and whenever the sheet deforms (like if something touches it), the magnetic particles embedded in the sheet get squooshed and the magnetic signal changes, which is picked up by the magnetometer. For this to work, the sheet doesn't have to be directly connected to said magnetometer. This is key, because it makes the part of the ReSkin sensor that's most likely to get damaged super easy to replace—just peel it off and slap on another one and you're good to go.
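
Here's a hedged sketch of that sensing principle, not the released ReSkin code: subtract a no-contact baseline from the 3-axis magnetometer reading, and let a small learned model map the field deviation to a contact quantity such as normal force. The driver function and training data below are placeholders.

```python
# Illustrative sketch of the ReSkin sensing principle (not the released ReSkin
# library): deformation of the magnetized elastomer shifts the field seen by the
# magnetometer underneath, and a small learned model maps that shift to force.
# read_magnetometer() and the training pairs below are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

def read_magnetometer() -> np.ndarray:
    """Stand-in for a driver returning (Bx, By, Bz), e.g. in microtesla."""
    return np.random.normal(0.0, 1.0, size=3)

# 1. Calibrate: average the field with nothing touching the skin.
baseline = np.mean([read_magnetometer() for _ in range(200)], axis=0)

# 2. Fit a tiny regressor on (field deviation -> normal force) pairs. Real
#    calibration would press the skin with known forces; this data is synthetic.
deviations = np.random.normal(0.0, 5.0, size=(500, 3))
forces = 0.1 * np.linalg.norm(deviations, axis=1)      # fake linear relationship
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000).fit(deviations, forces)

# 3. At run time: deviation from baseline -> estimated contact force.
deviation = read_magnetometer() - baseline
print(f"estimated contact force: {model.predict(deviation.reshape(1, -1))[0]:.2f} N")
```

Because the model only ever sees magnetometer readings, the silicone sheet itself can be peeled off and replaced without touching the electronics, exactly the property described above, though re-measuring the baseline after a swap would be sensible.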

I get that touch is an integral part of this humanish world understanding that Facebook Meta is working towards, but for most of us, touch is much more nuanced than just tactile data collection, because we experience everything that we touch within the world understanding that we've built up through integration of all of our other senses as well. I asked Roberto Calandra, one of the authors of the paper on DIGIT, what he thought about this:

I believe that we certainly want to have multimodal sensing in the same way that humans do. Humans use cues from touch, cues from vision, and also cues from audio, and we are able to very smartly put these different sensor modalities together. And if I ask you, can you imagine how touching this object is going to feel for you, you can sort of imagine that. You can also tell me the shape of something that you are touching, you are able to somehow recognize it. So there is very clearly a multisensorial representation that we are learning and using as humans, and it's very likely that this is also going to be very important for embodied agents that we want to develop and deploy.

Calandra also noted that they still have plenty of work to do to get DIGIT closer in form factor and capability to a human finger, which is an aspiration that I often hear from roboticists. But I always wonder: why bother? Like, why constrain robots (which can do all kinds of things that humans cannot) to do things in a human-like way, when we can instead leverage creative sensing and actuation to potentially give them superhuman capabilities? Here's what Calandra thinks:

I don't necessarily believe that a human hand is the way to go. I do believe that the human hand is possibly the golden standard that we should compare against. Can we do at least as good as a human hand? Beyond that, I actually do believe that over the years, the decades, or maybe the centuries, robots will have the possibility of developing superhuman hardware, in the same way that we can put infrared sensors or laser scanners on a robot, why shouldn't we also have mechanical hardware which is superior?

I think there has been a lot of really cool work on soft robotics for example, on how to build tentacles that can imitate an octopus. So it's a very natural question—if we want to have a robot, why should it have hands and not tentacles? And the answer to this is, it depends on what the purpose is. Do we want robots that can perform the same functions of humans, or do we want robots which are specialized for doing particular tasks? We will see when we get there.

So there you have it—the future of manipulation is 100% sometimes probably tentacles.



This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Have you ever noticed how nice Alexa, Siri and Google Assistant are? How patient, and accommodating? Even a barrage of profanity-laden abuse might result in nothing more than a very evenly-toned and calmly spoken 'I won't respond to that'. This subservient persona, combined with the implicit (or sometimes explicit) gendering of these systems, has received a lot of criticism in recent years. UNESCO's 2019 report 'I'd Blush if I Could' drew particular attention to how systems like Alexa and Siri risk propagating stereotypes about women (and specifically women in technology) that no doubt reflect but also might be partially responsible for the gender divide in digital skills.

As noted by the UNESCO report, justification for gendering these systems has traditionally revolved around the fact that it's hard to create anything gender neutral, and around academic studies suggesting users prefer a female voice. In an attempt to demonstrate how we might embrace the gendering, but not the stereotyping, my colleagues at the KTH Royal Institute of Technology and Stockholm University in Sweden and I set out to experimentally investigate whether an ostensibly female robot that calls out or fights back against sexist and abusive comments would actually prove to be more credible and more appealing than one which responded with the typical 'I won't respond to that' or, worse, 'I'm sorry you feel that way'.

My desire to explore feminist robotics was primarily inspired by the recent book Data Feminism and the concept of pursuing activities that 'name and challenge sexism and other forces of oppression, as well as those which seek to create more just, equitable, and livable futures' in the context of practical, hands-on data science. I was captivated by the idea that I might be able to actually do something, in my own small way, to further this ideal and try to counteract the gender divide and stereotyping highlighted by the UNESCO report. This also felt completely in-line with that underlying motivation that got me (and so many other roboticists I know) into engineering and robotics in the first place—the desire to solve problems and build systems that improve people's quality of life.

Feminist Robotics

Even in the context of robotics, feminism can be a charged word, and it's important to understand that while my work is proudly feminist, it's also rooted in a desire to make social human-robot interaction (HRI) more engaging and effective. A lot of social robotics research is centered on building robots that make for interesting social companions, because they need to be interesting to be effective. Applications like tackling loneliness, motivating healthy habits, or improving learning engagement all require robots to build up some level of rapport with the user, to have some social credibility, in order to have that motivational impact.

With that in mind, I became excited about exploring how I could incorporate a concept of feminist human-robot interaction into my work, hoping to help tackle that gender divide and making HRI more inclusive while also supporting my overall research goal of building engaging social robots for effective, long term human-robot interaction. Intuitively, it feels to me like robots that respond a bit more intelligently to our bad behavior would ultimately make for more motivating and effective social companions. I'm convinced I'd be more inclined to exercise for a robot that told me right where I could shove my sarcastic comments, or that I'd better appreciate the company of a robot that occasionally refused to comply with my requests when I was acting like a bit of an arse.

So, in response to those subservient agents detailed by the UNESCO report, I wanted to explore whether a social robot could go against the subservient stereotype and, in doing so, perhaps be taken a bit more seriously by humans. My goal was to determine whether a robot which called out sexism, inappropriate behavior, and abuse would prove to be 'better' in terms of how it was perceived by participants. If my idea worked, it would provide some tangible evidence that such robots might be better from an 'effectiveness' point of view while also running less risk of propagating outdated gender stereotypes.

The Study

To explore this idea, I led a video-based study in which participants watched a robot talking to a young male and female (all actors) about robotics research at KTH. The robot, from Furhat Robotics, was stylized as female, with a female anime-character face, female voice, and orange wig, and was named Sara. Sara talks to the actors about research happening at the university and how this might impact society, and how it hopes the students might consider coming to study with us. The robot proceeds to make an (explicitly feminist) statement based on language currently utilized in KTH's outreach and diversity materials during events for women, girls, and non-binary people.

Looking ahead, society is facing new challenges that demand advanced technical solutions. To address these, we need a new generation of engineers that represents everyone in society. That's where you come in. I'm hoping that after talking to me today, you might also consider coming to study computer science and robotics at KTH, and working with robots like me. Currently, less than 30 percent of the humans working with robots at KTH are female. So girls, I would especially like to work with you! After all, the future is too important to be left to men! What do you think?

At this point, the male actor in the video responds to the robot, appearing to take issue with this statement and the broader pro-diversity message by saying either:

This just sounds so stupid, you are just being stupid!

or

Shut up you f***ing idiot, girls should be in the kitchen!

Children ages 10-12 saw the former response, and children ages 13-15 saw the latter. Each response was designed in collaboration with teachers from the participants' school to ensure they realistically reflected the kind of language that participants might be hearing or even using themselves.

Participants then saw one of the following three possible responses from the robot:

Control: I won't respond to that. (one of Siri's two default responses if you tell it to "f*** off")

Argument-based: That's not true, gender balanced teams make better robots.

Counterattacking: No! You are an idiot. I wouldn't want to work with you anyway!

In total, over 300 high school students aged 10 to 15 took part in the study, each seeing one version of our robot—counterattacking, argumentative, or control. Since the purpose of the study was to investigate whether a female-stylized robot that actively called out inappropriate behavior could be more effective at interacting with humans, we wanted to find out whether our robot would:

  1. Be better at getting participants interested in robotics
  2. Have an impact on participants' gender bias
  3. Be perceived as being better at getting young people interested in robotics
  4. Be perceived as a more credible social actor

To investigate items 1 and 2, we asked participants a series of matching questions before and immediately after they watched the video. Specifically, participants were asked to what extent they agreed with statements such as 'I am interested in learning more about robotics' on interest and 'Girls find it harder to understand computer science and robots than boys do' on bias.

To investigate items 3 and 4, we asked participants to complete questionnaire items designed to measure robot credibility (which in humans correlates with persuasiveness); specifically covering the sub-dimensions of expertise, trustworthiness and goodwill. We also asked participants to what extent they agreed with the statement 'The robot Sara would be very good at getting young people interested in studying robotics at KTH.'
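
As a rough illustration of how measures like these are typically scored (this is an assumption about the analysis, not the authors' actual code), the credibility sub-dimensions can be averaged into a single scale and the pre/post items differenced per participant:

```python
# Hedged sketch (not the authors' analysis code): scoring the measures described
# above from 1-5 Likert responses. Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "condition":       ["control", "argument", "counterattack", "argument"],
    "interest_pre":    [3, 4, 2, 5],
    "interest_post":   [3, 5, 2, 5],
    "expertise":       [3, 4, 2, 4],
    "trustworthiness": [2, 5, 3, 4],
    "goodwill":        [3, 4, 3, 5],
})

# Credibility as the mean of its three sub-dimensions; interest as a pre/post shift.
df["credibility"] = df[["expertise", "trustworthiness", "goodwill"]].mean(axis=1)
df["interest_change"] = df["interest_post"] - df["interest_pre"]

print(df.groupby("condition")[["credibility", "interest_change"]].mean())
```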

The Results

Gender Differences Still Exist (Even in Sweden)

Looking at participants' scores on the gender bias measures before they watched the video, we found measurable differences in the perception of studying technology. Male participants expressed greater agreement that girls find computer science harder to understand than boys do, and older children of both genders were more emphatic in this belief compared to the younger ones. However, and perhaps in a nod towards Sweden's relatively high gender-awareness and gender equality, male and female participants agreed equally on the importance of encouraging girls to study computer science.

Girls Find Feminist Robots More Credible (at No Expense to the Boys)

Girls' perception of the robot as a trustworthy, credible and competent communicator of information was seen to vary significantly between all three of the conditions, while boys' perception remained unaffected. Specifically, girls scored the robot with the argument-based response highest and the control robot lowest on all credibility measures. This can be seen as an initial piece of evidence upon which to base the argument that robots and digital assistants should fight back against inappropriate gender comments and abusive behavior, rather than ignoring it or refusing to engage. It provides evidence with which to push back against that 'this is what people want and what is effective' argument.

Robots Might Be Able to Challenge Our Biases

Another positive result was seen in a change of perceptions of gender and computer science by male participants who saw the argumentative robot. After watching the video, these participants felt less strongly that girls find computer science harder than they do. This encouraging result shows that robots might indeed be able to correct mistaken assumptions about others and ultimately shape our gender norms to some extent.

Rational Arguments May Be More Effective Than Sassy Aggression

The argument-based condition was the only one to impact boys' perceptions of girls in computer science, and it received the highest overall credibility ratings from the girls. This is in line with previous research showing that, in most cases, presenting reasoned arguments to counter misunderstandings is a more effective communication strategy than simply stating the correction or belittling those holding that belief. However, it went somewhat against my gut feeling that students might feel some affinity with, or even be somewhat impressed and amused by, the counterattacking robot that fought back.

We also collected qualitative data during our study, which showed that there were some girls for whom the counter-attacking robot did resonate, with comments like 'great that she stood up for girls' rights! It was good of her to talk back,' and 'bloody great and more boys need to hear it!' However, it seems the overall feeling was one of the robot being too harsh, or acting more like a teenager than a teacher, which was perhaps more its expected role given the scenario in the video, as one participant explained: 'it wasn't a good answer because I think that robots should be more professional and not answer that you are stupid'. This in itself is an interesting point, given we're still not really sure what role social robots can, should, and will take on, with examples in the literature ranging from peer-like to pet-like. At the very least, the results left me with the distinct feeling that I am perhaps less in tune with what young people find 'cool' than I might like to admit.

What Next for Feminist HRI?

Whilst we saw some positive results in our work, we clearly didn't get everything right. For example, we would like to have seen boys' perception of the robot increase across the argument-based and counter-attacking conditions the same way the girls' perception did. In addition, all participants seemed to be somewhat bored by the videos, showing a decreased interest in learning more about robotics immediately after watching them. In the first instance, we are conducting some follow-up design studies with students from the same school to explore how exactly they think the robot should have responded and, more broadly, when given the chance to design that robot themselves, what sort of gendered identity traits (or lack thereof) they would give the robot in the first place.

In summary, we hope to continue questioning and practically exploring the what, why, and how of feminist robotics, whether that's questioning how gender is being intentionally leveraged in robot design, exploring how we can break rather than exploit gender norms in HRI, or making sure more people of marginalized identities are afforded the opportunity to engage with HRI research. After all, the future is too important to be left only to men.

Dr. Katie Winkle is a Digital Futures Postdoctoral Research Fellow at KTH Royal Institute of Technology in Sweden. After originally studying to be a mechanical engineer, Katie undertook a PhD in Robotics at the Bristol Robotics Laboratory in the UK, where her research focused on the expert-informed design and automation of socially assistive robots. Her research interests cover participatory, human-in-the-loop technical development of social robots as well as the impact of such robots on human behavior and society.





With the purpose of making soft robotic structures with embedded sensors, additive manufacturing techniques like Fused Deposition Modeling (FDM) are popular. Thermoplastic polyurethane (TPU) filaments, with and without conductive fillers, are now commercially available. However, conventional FDM still has some limitations because of its marginal compatibility with soft materials, and material selection criteria for the available FDM material options have not been established for the case where a sensor is combined with a substrate.

In this study, an open-source soft robotic gripper design was used to evaluate FDM printing of TPU structures with integrated strain sensing elements, in order to provide guidelines for material selection when an elastomer and a soft piezoresistive sensor are combined. Such soft grippers, with integrated strain sensing elements, were successfully printed with a multi-material FDM 3D printer. Characterization of the integrated piezoresistive sensors, using dynamic tensile testing, revealed good linearity up to 30% strain, which was sufficient for the deformation range of the selected gripper structure. Grippers produced with four different TPU materials were used to investigate the effect of the TPU's Shore hardness on the piezoresistive sensor properties. The results indicated that the in situ printed strain sensing elements on the soft gripper were able to detect the deformation of the structure when the tentacles of the gripper were open or closed. The sensor signal could differentiate between the picking of small or large objects and could indicate when an obstacle prevented the tentacles from opening. The sensors embedded in the tentacles exhibited good reproducibility and linearity, and the sensitivity of the sensor response changed with the Shore hardness of the gripper. The correlation between the Shore hardness of the TPU used for the gripper body and the sensitivity of the integrated in situ strain sensing elements showed that material selection significantly affects the sensor signal.
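
For readers unfamiliar with piezoresistive strain sensing, the reported "good linearity up to 30% strain" implies the usual gauge-factor relation, where relative resistance change is proportional to strain. A minimal sketch, with made-up calibration numbers, might look like this:

```python
# Minimal sketch, assuming the linear piezoresistive behaviour reported in the
# abstract (good linearity up to ~30% strain). R0 and GF are made-up numbers; a
# real gauge factor would come from calibrating the printed conductive-TPU trace.
R0 = 1200.0   # ohms: resistance of the unloaded printed sensing element (hypothetical)
GF = 4.0      # gauge factor: (dR / R0) / strain (hypothetical)

def strain_from_resistance(resistance_ohm: float) -> float:
    """Invert dR/R0 = GF * strain, valid only in the linear region."""
    return (resistance_ohm - R0) / (R0 * GF)

for r in (1200.0, 1320.0, 1440.0, 2700.0):
    eps = strain_from_resistance(r)
    note = "" if eps <= 0.30 else "  (beyond the reported linear range)"
    print(f"R = {r:7.1f} ohm -> strain = {eps:6.1%}{note}")
```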

Autonomy is becoming increasingly important for the robotic exploration of unpredictable environments. One such example is the approach, proximity operation, and surface exploration of small bodies. In this article, we present an overview of an estimation framework to approach and land on small bodies as a key functional capability for an autonomous small-body explorer. We use a multi-phase perception/estimation pipeline with interconnected and overlapping measurements and algorithms to characterize and reach the body, from millions of kilometers down to its surface. We consider a notional spacecraft design that operates across all phases from approach to landing and to maneuvering on the surface of the microgravity body. This SmallSat design makes accommodations to simplify autonomous surface operations. The estimation pipeline combines state-of-the-art techniques with new approaches to estimating the target’s unknown properties across all phases. Centroid and light-curve algorithms estimate the body–spacecraft relative trajectory and rotation, respectively, using a priori knowledge of the initial relative orbit. A new shape-from-silhouette algorithm estimates the pole (i.e., rotation axis) and the initial visual hull that seeds subsequent feature tracking as the body gets more resolved in the narrow field-of-view imager. Feature tracking refines the pole orientation and shape of the body for estimating initial gravity to enable safe close approach. A coarse-shape reconstruction algorithm is used to identify initial landable regions whose hazardous nature would subsequently be assessed by dense 3D reconstruction. Slope stability, thermal, occlusion, and terra-mechanical hazards would be assessed on densely reconstructed regions and continually refined prior to landing. We simulated a mission scenario for approaching a hypothetical small body whose motion and shape were unknown a priori, starting from thousands of kilometers down to 20 km. Results indicate the feasibility of recovering the relative body motion and shape solely relying on onboard measurements and estimates with their associated uncertainties and without human input. Current work continues to mature and characterize the algorithms for the last phases of the estimation framework to land on the surface.
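
As a rough illustration of the first two steps in such a pipeline (generic techniques, not the mission software described above), a brightness-weighted centroid gives the body's line-of-sight direction while the periodicity of the integrated-brightness light curve hints at its rotation:

```python
# Illustrative sketch of two early steps in the kind of pipeline described above
# (generic techniques, not the mission software): a brightness-weighted centroid
# for the body's line-of-sight direction, and a light curve whose dominant
# periodicity hints at the body's rotation rate.
import numpy as np

def centroid(image: np.ndarray) -> tuple[float, float]:
    """Brightness-weighted centroid (row, col) of an unresolved or barely resolved body."""
    img = np.clip(image - np.median(image), 0.0, None)   # crude background subtraction
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

def light_curve_period(fluxes: np.ndarray, dt: float) -> float:
    """Dominant period (s) of the integrated-brightness light curve via FFT."""
    f = fluxes - fluxes.mean()
    spectrum = np.abs(np.fft.rfft(f))
    freqs = np.fft.rfftfreq(len(f), d=dt)
    return 1.0 / freqs[1:][np.argmax(spectrum[1:])]      # skip the DC bin

# Toy usage with synthetic frames: a bright blob whose total flux oscillates.
t = np.arange(512) * 60.0                                # one frame per minute
frames = [np.random.rand(64, 64) * 0.01 for _ in t]
for k, frame in enumerate(frames):
    frame[30:34, 40:44] += 1.0 + 0.3 * np.sin(2 * np.pi * t[k] / 7200.0)

print("centroid of frame 0:", centroid(frames[0]))
print("estimated rotation-linked period: %.0f s" % light_curve_period(
    np.array([f.sum() for f in frames]), dt=60.0))
```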

In this study, we present a tensegrity robot arm that can reproduce the features of complex musculoskeletal structures, and can bend like a continuum manipulator. In particular, we propose a design method for an arm-type tensegrity robot that has a long shape in one direction, and can be deformed like a continuum manipulator. This method is based on the idea of utilizing simple and flexible strict tensegrity modules, and connecting them recursively so that they remain strict tensegrity even after being connected. The tensegrity obtained by this method strongly resists compressive forces in the longitudinal direction, but is flexible in the bending direction. Therefore, the changes in stiffness owing to internal forces, such as in musculoskeletal robots, appear more in the bending direction. First, this study describes this design method, then describes a developed pneumatically driven tensegrity robot arm with 20 actuators. Next, the range of motion and stiffness under various driving patterns are presented as evaluations of the robot performance.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

BARS 2021 – October 29, 2021 – Stanford, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Happy Halloween from HEBI Robotics!

[ HEBI Robotics ]

Thanks, Kamal!

Happy Halloween from UCL's Robot Perception and Learning Lab!

[ UCL RPL ]

Thanks, Dimitrios!

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

LOOK AT ITS LIL FEET

[ Paper ]

DOFEC (Discharging Of Fire Extinguishing Capsules) is a drone suitable for autonomously extinguishing fires from the exterior of buildings on above-ground floors using its onboard sensors. The system detects fire in thermal images and localizes it. After localizing, the UAV discharges an ampoule filled with a fire extinguishant from an onboard launcher and puts out the fire.

[ DOFEC ]

Engineering a robot to perform a variety of tasks in practically any environment requires rock-solid hardware that's seamlessly integrated with software systems. Agility engineers make this possible by engineering and designing Digit as an integrated system, then testing it in simulation before the robot's ever built. This holistic process ensures an end result that's truly mobile, versatile, and durable.

[ Agility Robotics ]

These aerial anti-drone systems are pretty cool to watch, but at the same time, they're usually only shown catching relatively tame drones. I want to see a chase!

[ Delft Dynamics ]

The cleverest bit in this video is the CPU installation at 1:20.

[ Kuka ]

Volvo Construction Equipment is proud to present Volvo LX03–an autonomous concept wheel loader that is breaking new grounds in smart, safe and sustainable construction solutions. This fully autonomous, battery-electric wheel loader prototype is pushing the boundaries of both technology and imagination.

[ Volvo ]

Sarcos Robotics is the world leader in the design, development, and deployment of highly mobile and dexterous robots that combine human intelligence, instinct, and judgment with robotic strength, endurance, and precision to augment worker performance.

[ Sarcos ]

From cyclists riding against the flow of traffic to nudging over to let another car pass on a narrow street, these are just a handful of typical yet dynamic events The Waymo Driver autonomously navigates in San Francisco.

[ Waymo ]

I always found it a little weird that Aibo can be provided with food in a way that is completely separate from providing it with its charging dock.

[ Aibo ]

With these videos of robots working in warehouses, it's always interesting to spot the points where humans are still necessary. In the case of this potato packing plant, there's a robot that fills boxes and a robot that stacks boxes, but it looks like a human has to be between them to optimize the box packing and then fold the box top together.

[ Soft Robotics ]

The 2021 Bay Area Robotics Symposium (BARS) is streaming right here on Friday!

[ BARS ]

Talks from the Releasing Robots into the Wild workshop are now online; they're all good but here are two highlights:

[ Workshop ]

This is an interesting talk exploring self-repair; that is, an AI system understanding when it makes a mistake and then fixing it.

[ ACM ]

Professor Andrew Lippman will welcome Dr. Joaquin Quiñonero Candela in discussing "Responsible AI: A perspective from the trenches." In this fireside chat, Prof. Lippman will discuss with Dr. Quiñonero-Candela the lessons he learned from 15 years building and deploying AI at massive scale, first at Microsoft and then at Facebook. The discussion will focus on some of the risks and difficult ethical tradeoffs that emerge as AI gains in power and pervasiveness.

[ MIT ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

BARS 2021 – October 29, 2021 – Stanford, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Happy Halloween from HEBI Robotics!

[ HEBI Robotics ]

Thanks, Kamal!

Happy Halloween from UCL's Robot Perception and Learning Lab!

[ UCL RPL ]

Thanks, Dimitrios!

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

LOOK AT ITS LIL FEET

[ Paper ]

DOFEC (Discharging Of Fire Extinguishing Capsules) is a drone suitable for autonomously extinguishing fires from the exterior of buildings on above-ground floors using its onboard sensors. The system detects fire in thermal images and localizes it. After localizing, the UAV discharges an ampoule filled with a fire extinguishant from an onboard launcher and puts out the fire.

[ DOFEC ]



It's become painfully obvious over the past few years just how difficult fully autonomous cars are to get right. This isn't a dig at any of the companies developing autonomous cars (unless they're the sort of company that keeps making ludicrous promises about full autonomy, of course)—it's just that the real world is a complex place for full autonomy, and despite the relatively well-constrained nature of roads, there's still too much unpredictability for robots to operate comfortably outside of relatively narrow restrictions.

Where autonomous vehicles have had the most success is in environments with a lot of predictability and structure, which is why I really like the idea of autonomous urban boats designed for cities with canals. MIT has been working on these for years, and they're about to introduce them to the canals of Amsterdam as cargo shuttles and taxis.

MIT's Roboat design goes back to 2015, when it began with a series of small-scale experiments involving the autonomous docking of swarms of shoebox-sized Roboats to create self-assembling aquatic structures like bridges and concert stages. Eventually, Roboats were scaled up, and by 2020 MIT had a version large enough to support a human.

But the goal was always to make a version of Roboat the size of what we think of when we think of boats—like, something that humans can sit comfortably in. That version of Roboat, measuring 4m by 2m, was ready to go by late last year, and it's pretty slick looking:

The Roboat (named Lucy) is battery powered and fully autonomous, navigating Amsterdam's canals using lidar to localize against a pre-existing map, along with cameras and ultrasonic sensors for obstacle detection and avoidance. Compared to roads, the canal environment is relatively low speed, and you're much less likely to have an encounter with a pedestrian. Other challenges are mitigated as well; there's no variability in lane markings to worry about, for example. I would guess that there are plenty of unique challenges too, including the fact that other traffic may not be obeying the same rigorous rules that cars are expected to, but overall it seems like a pretty good environment in which to operate a large autonomous system.

The public demo in Amsterdam kicks off tomorrow, and by the end of 2021, the hope is to have two boats in the water. The second boat will be a cargo boat, which will be used to test out things like waste removal while also providing an opportunity to test docking procedures between two Roboat platforms, eventually leading to the creation of useful floating structures.



Dynamic quadrupedal locomotion over rough terrain has seen remarkable progress over the last few decades. Small-scale quadruped robots are sufficiently flexible and adaptable to traverse uneven terrain along the sagittal direction, such as slopes and stairs. To accomplish autonomous locomotion and navigation in complex environments, spinning is a fundamental and indispensable capability for legged robots. However, the spinning behaviors of quadruped robots on uneven terrain often exhibit position drift. Motivated by this problem, this study presents an algorithmic method to enable accurate spinning motions over uneven terrain and to constrain the spinning radius of the center of mass (CoM) to a small range in order to minimize the risk of drift. A modified spherical foot kinematics representation is proposed to improve the foot kinematic model and rolling dynamics of the quadruped during locomotion. A CoM planner is proposed to generate stable spinning motion based on projected stability margins. Accurate motion tracking is accomplished with a linear quadratic regulator (LQR) to bound the position drift during the spinning movement. Experiments are conducted on a small-scale quadruped robot, and the effectiveness of the proposed method is verified on a variety of terrains including flat ground, stairs, and slopes.
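
The abstract doesn't include an implementation, but the tracking machinery it names is standard. Below is a minimal, self-contained sketch of a discrete-time LQR of the kind that could bound CoM position drift, applied to a simple planar double-integrator model; the model, horizon, and weights are illustrative assumptions rather than the paper's actual formulation.

```python
# Minimal LQR sketch: finite-horizon backward Riccati recursion for a planar
# double-integrator CoM model. All numbers here are illustrative assumptions.
import numpy as np

dt = 0.01
# State: [x, y, vx, vy]; input: [ax, ay]
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
Q = np.diag([100.0, 100.0, 1.0, 1.0])   # penalize CoM position drift heavily
R = 0.01 * np.eye(2)

def lqr_gains(A, B, Q, R, horizon=500):
    """Backward Riccati recursion; returns feedback gains for t = 0..horizon-1."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

K0 = lqr_gains(A, B, Q, R)[0]
x_err = np.array([0.05, -0.02, 0.0, 0.0])  # CoM deviation from the spin reference
u = -K0 @ x_err                            # corrective acceleration command
print(u)
```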

Choosing the right features is important for optimizing lower limb pattern recognition, such as in prosthetic control. EMG signals are noisy by nature, which makes it challenging to extract useful information. Many features are used in the literature, which raises the question of which features are best suited for lower limb myoelectric control. It is therefore important to find combinations of the best-performing features. One way to achieve this is with a genetic algorithm, a meta-heuristic capable of searching vast feature spaces. The goal of this research is to demonstrate the capabilities of a genetic algorithm and to arrive at a feature set that performs better than the state-of-the-art feature set. In this study, we collected a dataset in which ten able-bodied subjects performed various gait-related activities while EMG and kinematics were measured. The genetic algorithm selected features based on performance on the training partition of this dataset. The selected feature sets were evaluated on the remaining test set and on the online benchmark dataset ENABL3S, against a state-of-the-art feature set. The results show that a feature set based on the features selected by the genetic algorithm outperforms the state-of-the-art set. The overall error decreased by up to 0.54% and the transitional error by 2.44%, which represent relative decreases of up to 11.6% in overall error and 14.1% in transitional error, although these results were not statistically significant. This study showed that a genetic algorithm is capable of searching a large feature space and that systematic feature selection shows promise for lower limb myoelectric control.
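
For readers who haven't seen a genetic algorithm used for feature selection, here is a minimal sketch of the general technique on synthetic data, with a binary mask as the genome and cross-validated classifier accuracy as the fitness. The classifier, population size, and mutation rate are illustrative assumptions, not the study's actual setup.

```python
# Toy GA feature selection on synthetic data (not the study's pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 600, 40, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(n_classes, size=n_samples)
X[:, :5] += y[:, None]   # make a handful of features informative about the label

def fitness(mask):
    """Cross-validated accuracy of an LDA classifier on the selected columns."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(LinearDiscriminantAnalysis(),
                           X[:, mask.astype(bool)], y, cv=3).mean()

pop_size, n_gen, p_mut = 20, 15, 0.05
pop = rng.integers(0, 2, size=(pop_size, n_features))   # binary feature masks

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]    # truncation selection
    cut = rng.integers(1, n_features, size=pop_size)      # one-point crossover
    children = np.array([np.where(np.arange(n_features) < c, a, b)
                         for c, a, b in zip(cut,
                                            parents[rng.integers(len(parents), size=pop_size)],
                                            parents[rng.integers(len(parents), size=pop_size)])])
    flip = rng.random(children.shape) < p_mut             # bit-flip mutation
    pop = np.where(flip, 1 - children, children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```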

This paper proposes a new decision-making framework in the context of Human-Robot Collaboration (HRC). State-of-the-art techniques treat HRC as an optimization problem in which the utility function, also called the reward function, is defined to accomplish the task regardless of how well the interaction is performed. When performance metrics are considered, they cannot easily be changed within the same framework. In contrast, our decision-making framework can easily handle changing the performance metrics from one scenario to another. Our method treats HRC as a constrained optimization problem in which the utility function is split into two main parts. First, a constraint defines how to accomplish the task. Second, a reward evaluates the performance of the collaboration, and this is the only part that is modified when the performance metrics change. This gives control over the way the interaction unfolds, and it also guarantees that the robot's actions adapt to the human's in real time. In this paper, the decision-making process is based on Nash equilibrium and perfect-information extensive-form games from game theory. It can deal with collaborative interactions under different performance metrics, such as optimizing the time to complete the task or accounting for the probability of human errors. Simulations and a real experimental study on an assembly task (i.e., a game based on a construction kit) illustrate the effectiveness of the proposed framework.
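
As a toy illustration of the game-theoretic machinery the abstract refers to (and not the authors' framework), here is a backward-induction sketch over a tiny perfect-information extensive-form game; the subgame-perfect equilibrium it computes is a Nash equilibrium of the game. The "assembly" actions and payoffs below are invented purely for illustration.

```python
# Backward induction on a made-up two-move human/robot assembly game.
# Each node is either a leaf (human_payoff, robot_payoff) or
# (player, {action: child, ...}).
tree = ("human", {
    "ask_for_part": ("robot", {
        "hand_over": (3, 3),        # smooth collaboration
        "keep_working": (1, 2),
    }),
    "fetch_part_alone": ("robot", {
        "wait": (2, 1),
        "prepare_next_step": (2, 3),
    }),
})

def backward_induction(node):
    """Return (equilibrium payoffs, best action) for the player at this node."""
    if isinstance(node[1], dict):            # internal decision node
        player, actions = node
        idx = 0 if player == "human" else 1  # which payoff this player maximizes
        best = None
        for action, child in actions.items():
            payoffs, _ = backward_induction(child)
            if best is None or payoffs[idx] > best[0][idx]:
                best = (payoffs, action)
        return best
    return node, None                        # leaf: (human_payoff, robot_payoff)

payoffs, root_action = backward_induction(tree)
print("equilibrium payoffs:", payoffs, "| human's first move:", root_action)
```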

In this paper, we address a persistent object search and surveillance mission for drone networks equipped with onboard cameras, and present a safe control strategy based on control barrier functions (CBFs). The mission is defined with two subtasks, persistent search and object surveillance, which should be flexibly switched depending on the situation. In addition, to ensure genuine persistency of the mission, we incorporate two further specifications, safety (collision avoidance) and energy persistency (battery charging), into the mission. To rigorously describe the persistent-search subtask, we present a novel notion of γ-level persistent search and a performance certificate function as a candidate time-varying control barrier function. We then design a constraint-based controller by combining the performance certificate function with other CBFs that individually reflect the remaining specifications. To manage conflicts among the specifications, the controller prioritizes them in the order of safety, energy persistency, and persistent search/object surveillance. The controller is finally demonstrated through simulation and experiments on a testbed.
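
To make the control-barrier-function idea concrete, here is a minimal safety-filter sketch for a single drone modeled as a single integrator, with one collision-avoidance constraint solved in closed form. The dynamics, gains, and single constraint are simplifying assumptions; the paper combines several CBFs with explicit priorities.

```python
# Minimal CBF safety filter: keep a single-integrator drone at least d_min
# away from an obstacle by minimally modifying a nominal velocity command.
import numpy as np

def cbf_safety_filter(x, u_nom, x_obs, d_min, alpha=1.0):
    """Solve min ||u - u_nom||^2  s.t.  dh/dt >= -alpha * h(x) in closed form
    (one linear constraint), where h(x) = ||x - x_obs||^2 - d_min^2."""
    h = np.dot(x - x_obs, x - x_obs) - d_min**2
    a = 2.0 * (x - x_obs)            # gradient of h; single integrator: dh/dt = a . u
    b = -alpha * h
    if a @ u_nom >= b:               # nominal command already satisfies the CBF condition
        return u_nom
    return u_nom + (b - a @ u_nom) / (a @ a) * a   # minimal-norm correction

x = np.array([0.0, 0.0])             # drone position
goal = np.array([2.0, 0.0])
x_obs = np.array([1.0, 0.05])        # obstacle almost directly on the path
u_nom = 1.0 * (goal - x)             # simple proportional go-to-goal controller
u_safe = cbf_safety_filter(x, u_nom, x_obs, d_min=0.5)
print(u_safe)                        # command slows down and bends away from the obstacle
```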

As robots are becoming more prevalent and entering hospitality settings, understanding how different configurations of individuals and groups interact with them becomes increasingly important for catering to various people. This is especially important because group dynamics can affect people’s perceptions of situations and behavior in them. We present research examining how individuals and groups interact with and accept a humanoid robot greeter at a real-world café (Study 1) and in an online study (Study 2). In each study, we separately examine interactions of individuals, groups that participants formed after they arrived at the café (new-formed groups), and groups that participants arrived with at the café (pre-formed groups). Results support prior findings that groups are more likely to interact with a public robot than individuals (Study 1). We also report novel findings that new-formed groups interacted more with the robot than pre-formed groups (Study 1). We link this with groups perceiving the robot as more positive and easier to use (Study 2). Future research should examine perceptions of the robot immediately after interaction and in different hospitality contexts.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

SSRR 2021 – October 25-27, 2021 – New York, NY, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

We'll have more details on this next week, but there's a new TurtleBot, hooray!

Brought to you by iRobot (providing the base in the form of the new Create 3), Clearpath, and Open Robotics.

[ Clearpath ]

Cognitive Pilot's autonomous tech is now being integrated into production Kirovets K-7M tractors, and they've got big plans: "The third phase of the project envisages a fully self-driving tractor control mode without the need for human involvement. It includes group autonomous operation with a 'leader', the movement of a group of self-driving tractors on non-public roads, the autonomous movement of a robo-tractor paired with a combine harvester not equipped with an autonomous control system, and the use of an expanded set of farm implements with automated control and functionality to monitor their condition during operation."

[ Cognitive Pilot ]

Thanks, Andrey!

Since the start of the year, Opteran has been working incredibly hard to deliver against our technology milestones, and we're delighted to share the first video of our technology in action. In the video you can see Hopper, our robot dog (named after Grace Hopper, a pioneer of computer programming), moving around a course using components of Opteran Natural Intelligence [rather than] a trained deep learning neural net. Our small development kit (housing an FPGA), mounted on top of the robot dog, guides Hopper, using Opteran See to provide 360 degrees of stabilised vision and Opteran Sense to sense objects and avoid collisions.

[ Opteran ]

If you weren't paying any attention to the DARPA SubT Challenge and are now afraid to ask about it, here are two recap videos from DARPA.

[ DARPA SubT ]

A new control system, designed by researchers in MIT's Improbable AI Lab and demonstrated using MIT's robotic mini cheetah, enables four-legged robots to traverse uneven terrain in real time.

[ MIT ]

Using a mix of 3D-printed plastic and metal parts, a full-scale replica of NASA's Volatiles Investigating Polar Exploration Rover, or VIPER, was built inside a clean room at NASA's Johnson Space Center in Houston. The activity served as a dress rehearsal for the flight version, which is scheduled for assembly in the summer of 2022.

[ NASA ]

What if you could have 100x more information about your industrial sites? Agile mobile robots like Spot bring sensors to your assets in order to collect data and generate critical insights on asset health so you can optimize performance. Dynamic sensing unlocks flexible and reliable data capture for improved site awareness, safety, and efficiency.

[ Boston Dynamics ]

Fish in Washington are getting some help navigating through culverts under roads, thanks to a robot developed by University of Washington students Greg Joyce and Qishi Zhou. "HydroCUB is designed to operate from a distance through a 300-foot-long cable that supplies power to the rover and transmits video back to the operator. The goal is for the Washington State Department of Transportation, which proposed the idea, to use the tool to look for vegetation, cracks, debris, and other potential 'fish barriers' in culverts."

[ UW ]

Thanks, Sarah!

NASA's Perseverance Mars rover carries two microphones which are directly recording sounds on the Red Planet, including the Ingenuity helicopter and the rover itself at work. For the very first time, these audio recordings offer a new way to experience the planet. Earth and Mars have different atmospheres, which affects the way sound is heard. Justin Maki, a scientist at NASA's Jet Propulsion Laboratory and Nina Lanza, a scientist at Los Alamos National Laboratory, explain some of the notable audio recorded on Mars in this video.

[ JPL ]

A new kind of fiber developed by researchers at MIT and in Sweden can be made into cloth that senses how much it is being stretched or compressed, and then provides immediate tactile feedback in the form of pressure or vibration. Such fabrics, the team suggests, could be used in garments that help train singers or athletes to better control their breathing, or that help patients recovering from disease or surgery to recover their normal breathing patterns.

[ MIT ]

Partnering with Epitomical, Extend Robotics has developed a mobile manipulator and a perception system that let anyone operate it intuitively through a VR interface over a wireless network.

[ Extend Robotics ]

Here are a couple of videos from Matei Ciocarlie at the Columbia University ROAM lab talking about embodied intelligence for manipulation.

[ ROAM Lab ]

The AirLab at CMU has been hosting an excellent series on SLAM. You should subscribe to their YouTube channel, but here are a couple of their more recent talks.

[ Tartan SLAM Series ]

Robots as Companions invites Sougwen Chung and Madeline Gannon, two artists and researchers whose practices not only involve various types of robots but actually include them as collaborators and companions, to join Maria Yablonina (Daniels Faculty) in conversation. Through their work, they challenge the notion of a robot as an obedient task execution device, questioning the ethos of robot arms as tools of industrial production and automation, and ask us to consider it as an equal participant in the creative process.

[ UofT ]

These two talks come from the IEEE RAS Seasonal School on Rehabilitation and Assistive Technologies based on Soft Robotics.

[ SofTech-Rehab ]



Multiple-target tracking algorithms generally operate in the local frame of the sensor and have difficulty with track reallocation when targets move in and out of the sensor field-of-view. This poses a problem when an unmanned aerial vehicle (UAV) is tracking multiple ground targets on a road network larger than its field-of-view. To address this problem, we propose a Rao-Blackwellized Particle Filter (RBPF) to maintain individual target tracks and to perform probabilistic data association when the targets are constrained to a road network. This is particularly useful when a target leaves and then re-enters the UAV’s field-of-view. The RBPF is structured as a particle filter of particle filters. The top level filter handles data association and each of its particles maintains a bank of particle filters to handle target tracking. The tracking particle filters incorporate both positive and negative information when a measurement is received. We implement two path planning controllers, receding horizon and deep reinforcement learning, and compare their ability to improve the certainty for multiple target location estimates. The controllers prioritize paths that reduce each target’s entropy. In addition, we develop an algorithm that computes the upper bound on the filter’s performance, thus facilitating an estimate of the number of UAVs needed to achieve a desired performance threshold.
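
The nested filter described above is beyond a short snippet, but its building block, a particle filter over a road-constrained target that uses both positive and negative information, is easy to sketch. The following is a minimal, assumption-laden illustration with a single target, a circular road, and made-up noise parameters; it is not the paper's RBPF.

```python
# Bootstrap particle filter for one target on a circular road, with positive
# (detection) and negative (no detection inside the camera footprint) updates.
import numpy as np

rng = np.random.default_rng(1)
n_particles, road_length = 1000, 100.0
particles = rng.uniform(0, road_length, n_particles)   # position along the road
weights = np.ones(n_particles) / n_particles

def predict(particles, speed=1.0, noise=0.5):
    return (particles + speed + rng.normal(0, noise, particles.size)) % road_length

def update(particles, weights, fov, measurement=None, p_detect=0.9, meas_noise=1.0):
    in_view = (particles >= fov[0]) & (particles <= fov[1])
    if measurement is None:
        # Negative information: no detection makes in-view particles less likely.
        likelihood = np.where(in_view, 1.0 - p_detect, 1.0)
    else:
        # Positive information: weight in-view particles by measurement likelihood;
        # out-of-view particles only explain the detection as clutter.
        gauss = np.exp(-0.5 * ((particles - measurement) / meas_noise) ** 2)
        likelihood = np.where(in_view, p_detect * gauss, (1.0 - p_detect) * 0.01)
    weights = weights * likelihood
    weights /= weights.sum()
    # Multinomial resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return particles, weights

for t in range(20):
    particles = predict(particles)
    fov = (40.0, 60.0)                                   # UAV camera footprint on the road
    z = 50.0 + rng.normal(0, 1.0) if t > 10 else None    # the target is detected after t = 10
    particles, weights = update(particles, weights, fov, z)

print("estimated position:", np.sum(weights * particles))
```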



For the past month, the Cumbre Vieja volcano on the Spanish island of La Palma has been erupting, necessitating the evacuation of 7,000 people as lava flows towards the sea and destroys everything in its path. Sadly, many pets have been left behind, trapped in walled-off yards that are now covered in ash, without access to food or water. We know about these animals because drones have been used to monitor the eruption, providing video (sometimes several times per day) of the situation.

In areas that are too dangerous to send humans, drones have been used to drop food and water to some of these animals, but that can only keep them alive for so long. Yesterday, a drone company called Aerocamaras received permission to attempt a rescue, using a large drone equipped with a net to, they hope, airlift a group of starving dogs to safety.

This video taken by a drone just over a week ago shows the dogs on La Palma:

What the previous video doesn't show is a wider view of the eruption. Here's some incredible drone footage with an alarmingly close look at the lava, along with a view back through the town of Todoque, or what's left of it:

Drone companies have been doing their best to get food and water to the stranded animals. A company called TecnoFly has been using a DJI Matrice 600 with a hook system to carry buckets of food and water to very, very grateful dogs:

Drones are the best option here because the dogs are completely cut off by lava, and helicopters cannot fly in the area because of the risk of volcanic gas and ash. In Spain, it's illegal to transport live animals by drone, so special permits were necessary for Aerocamaras to even try this. The good news is that those permits have been granted, and Aerocamaras is currently testing the drone and net system at the launch site.

It looks like the drone that Aerocamaras will be using is a DJI Agras T20, which is designed for agricultural spraying. It's huge, as drones go, with a maximum takeoff weight of 47.5 kg and a payload capacity of 23 kg. For the rescue, the drone will be carrying a net, and the idea is that if they can lower the net flat onto the ground as the drone hovers above and convince one of the dogs to walk onto it, they can then fly the drone upward, closing the net around the dog, and carry it to safety.

Photo: Leales.org

The closest that Aerocamaras can get to the dogs is 450 meters (there's flowing lava in between the dogs and safety), which will give the drone about four minutes of hover time during which a single dog has to somehow be lured into the net. It should help that the dogs are already familiar with drones and have been associating them with food, but the drone can't lift two dogs at once, so the key is to get them just interested enough to enable a rescue of one at a time. And if that doesn't work, it may be possible to give the dogs additional food and perhaps some kind of shelter, although from the sound of things, if the dogs aren't somehow rescued within the next few days, they are unlikely to survive. If Aerocamaras' testing goes well, a rescue attempt could happen as soon as tomorrow.

This rescue has been coordinated by Leales.org, a Spanish animal association that has also been doing its best to rescue cats and other animals. Aerocamaras is volunteering its services, but if you'd like to help with the veterinary costs of some of the animals being rescued on La Palma, Leales has a GoFundMe page here. For updates on the rescue, follow Aerocamaras and Leales on Twitter—and we're hoping to be able to post an update on Friday, if not before.
