Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CSIRO SubT Summit – December 10, 2021 – Online
ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ameca is the world's most advanced human-shaped robot, representing the forefront of human-robotics technology. Designed specifically as a platform for development into future robotics technologies, Ameca is the perfect humanoid robot platform for human-robot interaction.

Apparently, the eventual plan is to get Ameca to walk.

[ Engineered Arts ]

Looks like Flexiv had a tasty and exceptionally safe Thanksgiving!

But also kind of lonely :(

[ Flexiv ]

Thanks, Yunfan!

Cedars-Sinai is now home to a pair of Moxi robots, named Moxi and Moxi. Yeah, they should work on the names. But they've totally nailed the beeps!

[ Diligent Robotics ] via [ Cedars Sinai ]

Somehow we already have a robot holiday video, I don't know whether to be thrilled or horrified.

The Faculty of Electrical Engineering of the CTU in Prague wishes you a Merry Christmas and much success, health and energy in 2022!

[ CTU ]

Carnegie Mellon University's Iris rover is bolted in and ready for its journey to the moon. The tiny rover passed a huge milestone on Wednesday, Dec. 1, when it was secured to one of the payload decks of Astrobotic's Peregrine Lunar Lander, which will deliver it to the moon next year.

[ CMU ]

This robot has some of the absolute best little feetsies I've ever. Seen.

[ SDU ]

Thanks, Poramate!

With the help of artificial intelligence and four collaborative robots, researchers at ETH Zurich are designing and fabricating a 22.5-metre-tall green architectural sculpture.

[ ETH Zurich ]

Cassie Blue autonomously navigates on the second floor of the Ford Robotics Building at the University of Michigan. The total traverse distance is 200 m (about 656 feet).

[ Michigan Robotics ]

Thanks, Bruce!

The Mohamed Bin Zayed International Robotics Challenge (MBZIRC) will be held in the UAE capital, Abu Dhabi, in June 2023, where tech innovators will compete to develop marine safety and security solutions and take home more than US$3 million in prize money.

[ MBZIRC ]

Madagascar Flying Labs and WeRobotics are using cargo drones to deliver essential medicines to very remote communities in northern Madagascar. This month, they delivered 250 doses of the Janssen COVID-19 vaccine for the first time, with many more such deliveries to come over the next 12 months.

[ WeRobotics ]

It's... Cozmo?

Already way overfunded on Kickstarter.

[ Kickstarter ] via [ RobotStart ]

At USC's Center for Advanced Manufacturing, we have taught the Baxter robot to manipulate fluid food substances to create pancake art from various user-created designs.

[ USC ]

Face-first perching for fixed wing drones looks kinda painful, honestly.

[ EPFL ]

Video footage from NASA’s Perseverance Mars rover of the Ingenuity Mars Helicopter’s 13th flight on Sept. 4 provides the most detailed look yet of the rotorcraft in action.

During takeoff, Ingenuity kicks up a small plume of dust that the right camera, or “eye,” captures moving to the right of the helicopter during ascent. After its initial climb to its planned maximum altitude of 26 feet (8 meters), the helicopter performs a small pirouette to line up its color camera for scouting. Then Ingenuity pitches over, allowing the rotors' thrust to begin moving it horizontally through the thin Martian air before moving offscreen. Later, the rotorcraft returns and lands in the vicinity of where it took off. The team targeted a different landing spot, about 39 feet (12 meters) from takeoff, to avoid a ripple of sand it landed on at the completion of Flight 12.

[ JPL ]

I'm not totally sold on the viability of commercial bathroom cleaning robots, but I do appreciate how well the technology seems to work. In the videos, at least.

[ SOMATIC ]

An interdisciplinary team at Harvard University School of Engineering and the Wyss Institute at Harvard University is building soft robots for older adults and people with physical impairments. Examples of these robots are the Assistive Hip Suit and Soft Robotic Glove, both of which have been included in the 2021-2022 Smithsonian Institution exhibit entitled "FUTURES".

[ SI ]

Subterranean robot exploration is difficult, with mobility, communications, and navigation challenges that require a diverse set of systems and reliable autonomy. While prior work has demonstrated partial successes in addressing the problem, here we convey a comprehensive approach to the problem of subterranean exploration in a wide range of tunnel, urban, and cave environments. Our approach is driven by the themes of resiliency and modularity, and we show examples of how these themes influence the design of the different modules. In particular, we detail our approach to artifact detection, pose estimation, coordination, planning, control, and autonomy, and discuss our performance in the Final DARPA Subterranean Challenge.

[ CMU ]



There’s no reliably good way of getting a human to trust a robot. Part of the problem is that robots, generally, just do whatever they’ve been programmed to do, and for a human, there’s typically no feeling that the robot is in the slightest bit interested in making any sort of non-functional connection. From a robot’s perspective, humans are fragile ambulatory meatsacks that are not supposed to be touched and who help with tasks when necessary, nothing more.

Humans come to trust other humans by forming an emotional connection with them, something that robots are notoriously bad at. An emotional connection obviously doesn’t have to mean love, or even like, but it does mean that there’s some level of mutual understanding and communication and predictability, a sense that the other doesn’t just see you as an object (and vice versa). For robots, which are objects, this is a real challenge, and with funding from the National Science Foundation, roboticists from the Georgia Tech Center for Music Technology have partnered with the Kennesaw State University dance department on a “forest” of improvising robot musicians and dancers who interact with humans to explore creative collaboration and the establishment of human-robot trust.

According to the researchers, the FOREST robots and accompanying musical robots are not rigid mimickers of human melody and movement; rather, they exhibit a remarkable level of emotional expression and human-like gesture fluency–what the researchers call “emotional prosody and gesture” to project emotions and build trust.

Looking up what “prosody” means will absolutely take you down a Wikipedia black hole, but the term broadly refers to parts of speech that aren’t defined by the actual words being spoken. For example, you could say “robots are smart” and impart a variety of meanings to it depending on whether you say it ironically or sarcastically or questioningly or while sobbing, as I often do. That’s prosody. You can imagine how this concept can extend to movements and gestures as well, and effective robot-to-human interaction will need to account for this.
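
To make that idea a bit more concrete, here's a minimal sketch of how prosodic features extracted from an utterance might be mapped to gesture parameters. Everything in it (the feature set, the constants, the mapping itself) is invented for illustration and is not how the FOREST robots actually work.

```python
# Illustrative sketch only: mapping prosodic features of an utterance to
# expressive gesture parameters. The features, constants, and mapping are
# assumptions for illustration, not the FOREST project's implementation.

from dataclasses import dataclass

@dataclass
class Prosody:
    mean_pitch_hz: float   # average fundamental frequency
    pitch_range_hz: float  # how much the pitch varies across the utterance
    energy: float          # loudness, normalized to 0..1
    tempo_sps: float       # syllables per second

def gesture_params(p: Prosody) -> dict:
    """Turn prosody into rough gesture amplitude, speed, and smoothness."""
    # Arousal: louder, faster, more varied speech -> bigger, quicker motion.
    arousal = min(1.0, 0.5 * p.energy + 0.1 * p.tempo_sps + p.pitch_range_hz / 200.0)
    return {
        "amplitude": 0.2 + 0.8 * arousal,   # sweep of the gesture
        "speed": 0.3 + 0.7 * arousal,       # how quickly it plays out
        "smoothness": 1.0 - 0.6 * arousal,  # calm speech -> flowing motion
    }

# The same words delivered flatly vs. enthusiastically yield very different gestures.
print(gesture_params(Prosody(150, 15, 0.2, 2.5)))   # flat, deadpan delivery
print(gesture_params(Prosody(230, 90, 0.9, 5.0)))   # excited delivery
```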

Many of the robots in this performance are already well known, including Shimon, one of Gil Weinberg’s most creative performers. Here’s some additional background about how the performance came together:

What I find personally a little strange about all this is the idea of trust, because in some ways, it seems as though robots should be totally trustworthy because they can (in an ideal world) be totally predictable, right? Like, if a robot is programmed to do things X, Y, and Z in that sequence, you don’t have to trust that a robot will do Y after X in the same way that you’d have to trust a human to do so, because strictly speaking the robot has no choice. As robots get more complicated, though, and there’s more expectation that they’ll be able to interact with humans socially, that gap between what is technically predictable (or maybe, predictable after the fact) and what is predictable by the end user can get very, very wide, which is why a more abstract kind of trust becomes increasingly important. Music and dance may not be the way to make that happen for every robot out there, but it’s certainly a useful place to start.



Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, "Everyday Robots." There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future.

That close to the arm seems like a really bad place to put an E-Stop, right?

Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is "everyday robots are (slowly) leaving the lab." It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years.

The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create "a general-purpose learning robot," which sounds moonshot-y enough I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo:

We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

That's a lot of robots, which is awesome, but I have to question what "autonomously" actually means along with what "a range of useful tasks" actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do "useful" is appropriate.

If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.

Introducing Everyday Robots

Everyday Robots

0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?

0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.

1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.

1:35 — This is a red flag for me because we've heard this "it's a platform" thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, "platform" typically means "getting it to do (commercially?) useful stuff is someone else's problem," and I'm not sure that's ever been a successful model for robots.

2:10 — Yeah, okay. This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?

2:30 — I am a big fan of Moravec's Paradox and I wish it would get brought up more when people talk to the public about robots.

The challenge of everyday

Everyday Robots

0:18 — I like the door example, because you can easily imagine how many different ways it can go that would be catastrophic for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.

1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.

1:40 — People really like throwing shade at Boston Dynamics, don't they? But this doesn't seem fair to me, especially for a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, and not be a little miffed because what you're doing is, like, less flashy or whatever.

A robot that learns

Everyday Robots

0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: "low cost" for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.

1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.

1:35 — I'd personally steer clear of using words like "intelligently" when talking about robots trained using reinforcement learning techniques, because most people associate "intelligence" with the kind of fundamental world understanding that robots really do not have.

Training the first task

Everyday Robots

1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just have your system there, where the robots could be stationary and have some control over their environment and do a much better job much more efficiently.

1:15 — Hopefully they'll talk more about this later, but when thinking about this montage, it's important to ask which of these tasks you'd actually want a mobile manipulator doing in the real world, and which you'd just want automated somehow, because those are very different things.

Building with everyone

Everyday Robots

0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.

1:28 — Robots like these are not going to steal your job. I promise.

2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near to medium term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.

3:16 — Again, that platform idea...blarg. The whole "someone has written those applications" thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.

Everyday Robots

I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an "Everyday Robot" that we're being asked to buy into is necessarily a realistic one.

The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general-purpose mobile manipulators into everyday environments in a way where people (outside of the Google campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal, and does it make sense?

The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that.

But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision—to work on something hard that won't pay off for a long time. And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

We first met Cleo Robotics at CES 2017, when they were showing off a consumer prototype of their unique ducted-fan drone. They've just announced a new version which has been beefed up to do surveillance, and it is actually called the Dronut.

For such a little thing, the 12-minute flight time is not the worst, and hopefully it'll find a unique niche that'll help Cleo move back towards the consumer market, because I want one.

[ Cleo ]

Happy tenth birthday, Thymio!

[ EPFL ]

Here we describe a protective strategy for winged drones that mitigates the added weight and drag by means of increased lift generation and stall delay at high angles of attack. The proposed structure is inspired by the wing system found in beetles and consists of adding an additional set of retractable wings, named elytra, which can rapidly encapsulate the main folding wings when protection is needed.

[ EPFL ]

This is some very, very impressive robust behavior on ANYmal, part of Joonho Lee's master's thesis at ETH Zurich.

[ ETH Zurich ]

NTT DOCOMO, INC. announced today that it has developed a blade-free, blimp-type drone equipped with a high-resolution video camera that captures high-quality video and full-color LED lights that glow in radiant colors.

[ NTT Docomo ] via [ Gizmodo ]

Senior Software Engineer Daniel Piedrahita explains the theory behind robust dynamic stability and how Agility engineers used it to develop a unique and cohesive hardware and software solution that allows Digit to navigate unpredictable terrain with ease.

[ Agility ]

The title of this video from DeepRobotics is "DOOMSDAY COMING." Best not to think about it, probably.

[ DeepRobotics ]

More Baymax!

[ Disney ]

At Ben-Gurion University of the Negev, they're trying to figure out how to make a COVID-19 officer robot authoritative enough that people will actually pay attention to it and do what it says.

[ Paper ]

Thanks, Andy!

You'd think that high voltage powerlines would be the last thing you'd want a drone to futz with, but here we are.

[ GRVC ]

Cassie Blue navigates around furniture treated as obstacles in the atrium of the Ford Robotics Building at the University of Michigan.

[ Michigan Robotics ]

Northrop Grumman and its partners AVL, Intuitive Machines, Lunar Outpost and Michelin are designing a new vehicle that will greatly expand and enhance human and robotic exploration of the Moon, and ultimately, Mars.

[ Northrop Grumman ]

This letter proposes a novel design for a coaxial hexarotor (Y6) with a tilting mechanism that can morph midair while in a hover, changing the flight stage from a horizontal to a vertical orientation, and vice versa, thus allowing wall-perching and wall-climbing maneuvers.

[ KAIST ]

Honda and Black & Veatch have successfully tested the prototype Honda Autonomous Work Vehicle (AWV) at a construction site in New Mexico. During the month-long field test, the second-generation, fully-electric Honda AWV performed a range of functions at a large-scale solar energy construction project, including towing activities and transporting construction materials, water, and other supplies to pre-set destinations within the work site.

[ Honda ]

This could very well be the highest speed multiplier I've ever seen in a robotics video.

[ GITAI ]

Here's an interesting design for a manipulator that can do in-hand manipulation with a minimum of fuss, from the Yale GRAB Lab.

[ Paper ]

That ugo robot that's just a ball with eyes on a stick is one of my favorite robots ever, because it's so unapologetically just a ball on a stick.

[ ugo ]

Robot, make me a sandwich. And then make me a bunch more sandwiches.

[ Soft Robotics ]

Refilling water bottles isn't a very complex task, but having a robot do it means that humans don't have to.

[ Fraunhofer ]

To help manufacturers find cost-effective and sustainable alternatives to single-use plastic, ABB Robotics is collaborating with Zume, a global provider of innovative compostable packaging solutions. We will integrate and install up to 2000 robots at Zume customers' sites worldwide over the next five years to automate the production of 100 percent compostable packaging molded from sustainably harvested plant-based material for products ranging from food and groceries to cosmetics and consumer goods.

[ ABB ]



I am not a fan of Alexa. Or Google Assistant. Or, really, any Internet-connected camera or microphone whose functionality is based around being in my house and active all of the time. I don't use voice-activated systems, and while having a webcam is necessary, I make sure to physically unplug it from my computer when I'm not using it. Am I being overly paranoid? Probably. But I feel like having a little bit of concern is reasonable, and having that concern constantly at the back of my mind is just not worth what these assistants have so far had to offer.

iRobot CEO Colin Angle disagrees. And last week, iRobot announced that it has "teamed with Amazon to further advance voice-enabled intelligence for home robots." Being skeptical about this whole thing, I asked Angle to talk me into it, and I have to say, he kinda maybe almost did.

Using Alexa, iRobot customers can automate routines, personalize cleaning jobs and control how their home is cleaned. Thanks to interactive Alexa conversations and predictive and proactive recommendations, smart home users can experience a new level of personalization and control for their unique homes, schedules, preferences and devices.

Here are the kinds of things that are new to the Roomba Alexa partnership:

"Roomba, Clean Around the [Object]" – Use Alexa to send your robot to clean a mess right where it happens with precision Clean Zones. Roomba can clean around specific objects that attract the most common messes, like couches, tables and kitchen counters. Simply ask Alexa to "tell Roomba, clean around the couch," and Roomba knows right where to go.

iRobot Scheduling with Alexa voice service – Thanks to Alexa's rich language understanding, customers can have a more natural interaction directing their robot using their voice to schedule cleaning Routines. For example, "Alexa, tell Roomba to clean the kitchen every weeknight at 7 pm," or "Alexa, tell Braava to mop the kitchen every Sunday afternoon."

Alexa Announcements – Alexa can let customers know about their robot's status, like when it needs help or when it has finished a cleaning job, even if your phone isn't nearby.

Alexa Hunches – The best time to clean is when no one is home. If Alexa has a 'hunch' that you're away, Alexa can begin a cleaning job.
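
To picture what's happening under the hood, here's a hypothetical sketch of how voice intents like these might get routed to a robot. The intent names, slots, and RoombaClient interface are all invented for illustration; iRobot's actual Alexa skill is not public.

```python
# Hypothetical sketch of routing Alexa-style voice intents to a Roomba.
# The intent names, slots, and RoombaClient interface are invented for
# illustration; iRobot's actual skill implementation is not public.

class RoombaClient:
    """Stand-in for a robot API that accepts map-aware cleaning requests."""

    def clean_around(self, label):
        print(f"Starting a Clean Zone job around the {label}")

    def schedule(self, room, when):
        print(f"Scheduling a cleaning of the {room}: {when}")


def handle_intent(intent, robot):
    name, slots = intent["name"], intent["slots"]
    if name == "CleanAroundObject":        # "tell Roomba to clean around the couch"
        robot.clean_around(slots["object"])
    elif name == "ScheduleCleaning":       # "clean the kitchen every weeknight at 7 pm"
        robot.schedule(slots["room"], slots["schedule"])


robot = RoombaClient()
handle_intent({"name": "CleanAroundObject", "slots": {"object": "couch"}}, robot)
handle_intent({"name": "ScheduleCleaning",
               "slots": {"room": "kitchen", "schedule": "every weeknight at 7 pm"}}, robot)
```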

The reason why this kind of voice control is important is because Roombas are getting very, very sophisticated. The latest models know more about our homes than ever before, with maps and object recognition and all kinds of complex and intelligent behaviors and scheduling options. iRobot has an app that does its best to simplify the process of getting your Roomba to do exactly what you want it to do, but you still have to be comfortable poking around in the app on a regular basis. This poses a bit of a problem for iRobot, which is now having to square all these really cool new capabilities with their original concept for the robot that I still remember as being best encapsulated by having just one single button that you could push, labeled "Clean" in nice big letters.

iRobot believes that voice control is the answer to this. It's fast, it's intuitive, and as long as there's a reliable mapping between what you tell the robot to do and what the robot actually does, it seems like it could be very successful—if, of course, you're fine with having Alexa as a mediator, which I'm not sure I am. But after talking with iRobot CEO Colin Angle, I'm starting to come around.

IEEE Spectrum: I know you've been working on this for a while, but can you talk about how the whole Alexa and Roomba integration thing came about?

Colin Angle: This started back when Alexa first came out. Amazon told us that they asked people, "what should we do with this speaker?" And one of the first things that came up was, "I want to tell my Roomba to clean." It was within the original testing as to what Alexa should do. It certainly took them a while to get there, and took us a while to get there. But it's a very substantial and intuitive thing that we're supposed to be able to do with our robots—use our voice and talk to them. I think almost every robot in film and literature can be talked to. They may not all talk back in any logical way, but they all can listen and respond to voice.

Alexa's "hunches" are a good example of the kind of thing that I don't like about Alexa. Like, what is a hunch, and what does the fact that Alexa can have hunches imply about what it knows about my life that I didn't explicitly tell it?

That's the problem with the term "hunch." It attributes intelligence when what they're trying to do is attribute uncertainty. Amazon is really trying to do the right thing, but naming something "hunch" just invites speculation as to whether there's an AI there that's listening to everything I do and tracking me, when in some way it's tragically simpler than all that—depending on what it's connected to, it can infer periods of inactivity.

There's a question of what should you do and what shouldn't you do with an omnipresent ear, and that requires trust. But in general, Alexa is less creepy the more you understand how it works. And so the term "hunch" is meant to convey uncertainty, but that doesn't help people's confidence.

One of the voice commands you can give is having Alexa ask Roomba to clean around the couch. The word "around" can have different meanings for different people, so how do you know what a user actually wants when they use a term like "around?"

We've had to build these skills using words like around, underneath, beneath, near… All of these different words which convey approximate location. If we clean a little more than you want us to clean, but not a ton more, you're probably not going to be upset. So taking a little bit of superset liberties around how Roomba cleans still yields a satisfying result. There's a certain pragmatism that's required, and it's better to understand more prepositions and have them converge into a carefully designed behavior which the vast majority of people would be okay with, while not requiring a magic incantation where you'd need to go grab your manual so that you can look up what to tell Roomba in order to get it to do the right thing.

This is one of the fascinating challenges—we're trying to build robots into partners, but in general, the full functionality has largely been in the iRobot app. And yet the metaphor of having a partner usually is not passing notes, it's delivering utterances that convey enough meaning that your partner does what they're supposed to do. If you make a mess, and say, "Alexa, tell Roomba to clean up around the kitchen table" without having to use the app, that's actually a pretty rewarding interaction. It's a very natural thing, and you can say many things close to that and have it just work.

Our measure of success is that if I said, "Evan, suck it up, plug in that Alexa," you could then, without reading the instructions, convey your will to Roomba to clean your office every Sunday after noon or something by saying something like that, and see if it works.

Clearly communicating intent using voice is radically more complicated with each additional level of complexity that you're trying to convey. —Colin Angle
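
Angle's point about many prepositions converging into one carefully designed behavior can be sketched roughly like this. The margins and the bounding-box map representation are assumptions for illustration, not iRobot's implementation.

```python
# Illustrative sketch (not iRobot's code) of the "superset" idea: several
# approximate-location prepositions converge onto one slightly generous
# cleaning zone around the named object.

PREPOSITION_MARGIN_M = {
    "around": 0.6, "near": 0.6, "by": 0.6,            # a generous ring around the object
    "under": 0.2, "underneath": 0.2, "beneath": 0.2,  # mostly the object's footprint
}

def zone_for(preposition, obj_box):
    """Expand an object's map bounding box (x, y, w, h) by a preposition-dependent margin."""
    m = PREPOSITION_MARGIN_M.get(preposition, 0.5)    # unknown words get a sensible default
    x, y, w, h = obj_box
    return (x - m, y - m, w + 2 * m, h + 2 * m)

# "Clean around the couch" and "clean near the couch" land on the same behavior.
couch = (2.0, 1.0, 2.2, 0.9)
print(zone_for("around", couch))
print(zone_for("near", couch))
```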

Roomba can now recognize commands that use the word "and," like "clean under the couch and coffee table." I'm wondering how much potential there is to make more sophisticated commands. Things like, "Roomba, clean between the couch and the coffee table," or "Roomba, clean the living room for 10 minutes."

Of the things you said, I would say that we can do the ones that are pragmatic. You couldn't say "clean between these two places;" I suppose we might know enough to try to figure that out because we know where those two areas are and we could craft the location, but that's not a normal everyday use case because people make messes under or near things rather than between things. With precise and approximate scheduling, we should be able to handle that, because that's something people are likely to say. From a design perspective, it has to do with listening intently to how customers like to talk about tasking Roomba, and making sure that our skill is sufficiently literate to reasonably precisely do the right thing.

Do these voice commands really feel like talking to Roomba, or does it feel more like talking to Alexa, and how important is that distinction?

Unfortunately, the metaphor is that you're talking to Alexa who is talking to Roomba. We like the fact that people personify Roomba. If you don't yet own a Roomba, it's kind of a creepy thing to go around saying, because it's a vacuum cleaner, not a friend. But the experience of owning a Roomba is supposed to feel like you have a partner. And this idea that you have to talk to your helper through an intermediary is the price that we pay, which in my mind diminishes that partnership a little bit in pursuit of iRobot not having to build and maintain our own speakers and voice system. I think both Amazon and Google played around with the idea of a direct connection, and decided that enforcing that metaphor of having the speaker as an intermediary simplifies how people interact with it. And so that's a business decision on their side. For us, if it was an option, I would say direct connection every time, because I think it elevates the feeling of partnership between the person and the robot.

From a human-robot interaction (HRI) perspective, do you think it would be risky to allow users to talk directly to their Roomba, in case their expectations for how their robot should sound or what it might say don't match the reality that's constrained by practical voice interaction decisions that iRobot will have to make?

I think the benefits outweigh the risks. For example, if you don't like the voice, you should be able to change the voice, and hopefully you can find something that is close enough to your mental model that you can learn to live with it. If the question is whether talking directly to Roomba creates a higher expectation of intelligence than talking through a third party, I would say it does, but is it night and day? With this announcement we're making the strong statement that we think that for most of the things that you're going to want Roomba to do, we have enabled them broadly with voice. Your Roomba is not going to know the score of the baseball game, but if you ask it about what it's supposed to be able to do, you're going to have a good experience.

Coming from the background that you have and being involved in developing Roomba from the very beginning, now that you're having to work through voice interactions and HRI and things like that, do you miss the days where the problems were power cords and deep carpet and basic navigation?

Honestly, I've been waiting to tackle problems that we're currently tackling. If I have to tackle another hair entrainment problem, I would scream! I mean, to some extent, here we are, 31 years in, and I'm getting to the good stuff, because I think that the promise of robots is as much about the interaction as it is around the physical hardware. In fact, ever since I was in college I was playing around with hardware because the software sucked and was insanely hard and not going to do what I wanted it to do. All of my early attempts at voice interaction were spectacular failures. And yet, I kept going back to voice because, well, you're supposed to be able to talk to your robot.

Voice is kind of the great point of integration if it can be done well enough. And if you can leave your phone in your pocket and get up from your meal, look down, see you made a mess and just say, "hey Roomba, the kitchen table looks messy," which you can, that's progress. That's one way of breaking this ceiling of control complexity that must be shattered because the smart home isn't smart today and only does a tiny percentage of what it needs to do.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Telexistence and FamilyMart introduced a new robot, TX SCARA, equipped with TX's proprietary AI system, Gordon, to the FamilyMart METI store to perform beverage replenishment work in the back of the store 24 hours a day in place of human workers, thereby automating high-volume work in a low-temperature environment where the physical load on store staff is significant.

[ Telexistence ]

It would be a lot easier to build a drone if you didn't have to worry about take-offs or landings, and DARPA's Gremlins program has been making tangible progress towards midair drone recovery.

[ DARPA ]

At Cobionix, we are developing Cobi, a multi-sensing, intelligent cobot that can not only work safely alongside humans but also learn from them and become smarter over time. In this video, we showcase one of the applications Cobi is being utilized for: needle-less robotic intramuscular injection.

[ Cobionix ] via [ Gizmodo ]

It's been just a little bit too long since we've had a high quality cat on a Roomba video.

[ YouTube ]

Scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in the ever-present quest to get machines to replicate human abilities, created a framework that's more scaled up: a system that can reorient over two thousand different objects, with the robotic hand facing both upwards and downwards. This ability to manipulate anything from a cup to a tuna can to a Cheez-It box could help the hand quickly pick and place objects in specific ways and locations, and even generalize to unseen objects.

[ MIT CSAIL ]

NASA is sending a couple of robots to Venus in 2029! Not the kind with legs or wheels, but still.

[ NASA ]

The Environmental Genomics & Systems Biology division at Berkeley Lab has built a robot, called the EcoBOT, that is able to perform "self-driving experiments."

[ EcoBOT ]

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a new approach in which robotic exosuit assistance can be calibrated to an individual and adapt to a variety of real-world walking tasks in a matter of seconds. The bioinspired system uses ultrasound measurements of muscle dynamics to develop a personalized and activity-specific assistance profile for users of the exosuit.

[ Harvard Wyss ]

We propose a gecko-inspired robot with an optimal bendable body structure. The robot leg and body movements are driven by central pattern generator (CPG)-based neural control. It can climb using a combination of trot gait and lateral undulation of the bendable body with a C-shaped standing wave. This approach results in 52% and 54% reduced energy consumption during climbing on inclined solid and soft surfaces, respectively, compared to climbing with a fixed body. To this end, the study provides a basis for developing sprawling posture robots with a bendable body and neural control for energy-efficient inclined surface climbing with a possible extension towards agile and versatile locomotion.

[ Paper ]

Thanks Poramate!
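
If CPG-based control is unfamiliar, here's a minimal, generic sketch of the idea: coupled phase oscillators settle into a fixed phase relationship and produce rhythmic joint commands without an explicit gait table. This is an illustration of the general technique, with made-up gains and parameters, not the controller from the paper above.

```python
# Minimal, generic sketch of a central pattern generator (CPG): coupled
# phase oscillators lock into a fixed phase offset and emit rhythmic joint
# commands. Gains and parameters are invented for illustration.

import math

def simulate_cpg(n_joints=4, freq_hz=1.0, coupling=2.0, dt=0.01, steps=300):
    phases = [0.1 * i for i in range(n_joints)]   # slightly desynchronized start
    desired_offset = math.pi / 2                  # quarter-cycle lag between neighbors
    history = []
    for _ in range(steps):
        new_phases = []
        for i, phi in enumerate(phases):
            dphi = 2 * math.pi * freq_hz
            # Pull each oscillator toward the desired offset from its neighbors.
            for j in (i - 1, i + 1):
                if 0 <= j < n_joints:
                    target = phases[j] + desired_offset * (i - j)
                    dphi += coupling * math.sin(target - phi)
            new_phases.append(phi + dphi * dt)
        phases = new_phases
        history.append([math.sin(p) for p in phases])  # joint commands in [-1, 1]
    return history

print([round(v, 2) for v in simulate_cpg()[-1]])       # steady-state joint commands
```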

The new Mavic 3 from DJI looks very impressive, especially that 46 minute battery life.

[ DJI ]

Sonia Roberts, an experimentalist at heart and a PhD researcher with Kod*lab, a legged robotics group within the GRASP Lab at Penn Engineering, takes us inside her scientific process. How can a robot's controllers help it use less energy as it runs on sand?

[ KodLab ]

The Canadian Space Agency is preparing for a Canadian rover to explore a polar region of the Moon within the next five years. Two Canadian companies, MDA and Canadensys, have been selected to design lunar rover concepts.

[ CSA ]

Our Boeing Australia team has expanded its flight-test program of the Boeing Airpower Teaming System, with two aircraft successfully completing separate flight missions at the Woomera Range Complex recently.

[ Boeing ]

I do not understand what the Campaign to Stop Killer Robots folks are trying to tell me here, and also, those colors make my eyeballs scream.

[ Campaign to Stop Killer Robots ]

No doorbell? Nothing that some Dynamixels and a tongue drum can't fix.

[ YouTube ]

We present an integrated system for performing precision harvesting missions using a legged harvester (HEAP) in a confined, GPS-denied forest environment.

[ Paper ]

This video demonstrates some of the results from a scientific deployment to the Chernobyl NPP in September 2021 led by the University of Bristol.

[ University of Bristol ]

This is a bottle unscrambler. I don't know why that's what it's called, because the bottles don't seem scrambled. But it's unscrambling them anyway.

[ B&R ]

We invite you to hear from the leadership of Team Explorer, the CMU DARPA Subterranean Challenge team, as they discuss the challenges, lessons learned, and the future direction these technologies are headed in.

[ AirLab ]



It turns out that you don't need a lot of hardware to make a flying robot. Flying robots are usually way, way, way over-engineered, with ridiculously over the top components like two whole wings or an obviously ludicrous four separate motors. Maybe that kind of stuff works for people with more funding than they know what to do with, but for anyone trying to keep to a reasonable budget, all it actually takes to make a flying robot is one single airfoil plus an attached fixed-pitch propeller. And if you make that airfoil flexible, you can even fold the entire thing up into a sort of flying robotic swiss roll.

This type of drone is called a monocopter, and the design is very generally based on samara seeds, which are those single-wing seed pods that spin down from maple trees. The ability to spin slows the seeds' descent to the ground, allowing them to spread farther from the tree. It's an inherently stable design, meaning that it'll spin all by itself and do so in a stable and predictable way, which is a nice feature for a drone to have—if everything completely dies, it'll just spin itself gently down to a landing by default.

The monocopter we're looking at here, called F-SAM, comes from the Singapore University of Technology & Design, and we've written about some of their flying robots in the past, including this transformable hovering rotorcraft. F-SAM stands for Foldable Single Actuator Monocopter, and as you might expect, it's a monocopter that can fold up and uses just one single actuator for control.

There may not be a lot going on here hardware-wise, but that's part of the charm of this design. The one actuator gives complete control: increasing the throttle increases the RPM of the aircraft, causing it to gain altitude, which is pretty straightforward. Directional control is trickier, but not much trickier, requiring repetitive pulsing of the motor at the point during the aircraft's spin when it's pointed in the direction you want it to go. F-SAM is operating in a motion-capture environment in the video to explore its potential for precision autonomy, but it's not restricted to that environment, and doesn't require external sensing for control.
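
Here's a rough sketch of that single-actuator scheme under some simplifying assumptions: one motor command carries both an average throttle (altitude) and a once-per-revolution pulse (direction). The gains, pulse size, and sensor interface are invented; the real F-SAM controller is described in the paper linked below.

```python
# Illustrative sketch of single-actuator monocopter control: the same motor
# command handles altitude (average throttle) and direction (a throttle pulse
# once per revolution, when the craft points the right way). Gains, thresholds,
# and the sensor interface are assumptions, not the actual F-SAM controller.

import math

def motor_command(current_alt, target_alt, heading, desired_heading,
                  base_throttle=0.5, alt_gain=0.1, pulse=0.15, window_rad=0.3):
    # Collective part: spin faster to climb, slower to descend.
    throttle = base_throttle + alt_gain * (target_alt - current_alt)

    # Cyclic part: add a short pulse while the craft's heading passes
    # through the direction we want it to translate toward.
    error = math.atan2(math.sin(desired_heading - heading),
                       math.cos(desired_heading - heading))  # wrap to [-pi, pi]
    if abs(error) < window_rad:
        throttle += pulse

    return max(0.0, min(1.0, throttle))

# One simulated revolution: the pulse appears only near the desired heading (90 degrees).
for deg in range(0, 360, 45):
    print(deg, round(motor_command(1.0, 1.5, math.radians(deg), math.radians(90)), 3))
```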

While F-SAM's control board was custom designed and the wing requires some fabrication, the rest of the parts are cheap and off the shelf. The total weight of F-SAM is just 69 g, of which nearly 40% is battery, yielding a flight time of about 16 minutes. If you look closely, you'll also see a teeny little carbon fiber leg of sorts that keeps the prop up above the floor, enabling the robot to take off from the ground without the propeller making contact.

You can find the entire F-SAM paper open access here, but we also asked the authors a couple of extra questions.

IEEE Spectrum: It looks like you explored different materials and combinations of materials for the flexible wing structure. Why did you end up with this mix of balsa wood and plastic?

Shane Kyi Hla Win: The wing structure of a monocopter requires rigidity in order to be controllable in flight. Although it is possible for the monocopter to fly with the more flexible materials we tested, such as flexible plastic or polyimide flex, they allow the wing to twist freely mid-flight, making cyclic control effort from the motor less effective. The balsa laminated with plastic provides enough rigidity for effective control, while allowing folding along a pre-determined triangular fold.

Can F-SAM fly outdoors? What is required to fly it outside of a motion capture environment?

Yes it can fly outdoors. It is passively stable so it does not require a closed-loop control for its flight. The motion capture environment provides its absolute position for station-holding and waypoint flights when indoors. For outdoor flight, an electronic compass provides the relative heading for the basic cyclic control. We are working on a prototype with an integrated GPS for outdoor autonomous flights.

Would you be able to add a camera or other sensors to F-SAM?

A camera can be added (we have done this before), but due to its spinning nature, images captured can come out blurry. 360 cameras are becoming lighter and smaller, and we may try putting one on F-SAM or other monocopters we have. Other possible sensors include LiDAR or time-of-flight (ToF) sensors. With LiDAR, the platform has an advantage, as it is already spinning at a known RPM. A conventional LiDAR system requires a dedicated actuator to create a spinning motion. As a rotating platform, F-SAM already possesses the natural spinning dynamics, hence making LiDAR integration lightweight and more efficient.
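
That last point, that the airframe's own spin can stand in for a LiDAR's spinning actuator, can be sketched like this; the sensor interface and numbers are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (an assumption, not the authors' code): because the
# airframe already spins at a known rate, a single fixed range sensor sweeps
# out a 2D scan for free. Each reading is stamped with the spin phase and
# converted to a point around the craft.

import math

def scan_to_points(ranges, rpm, sample_rate_hz, start_phase=0.0):
    """ranges: consecutive 1-D readings from a fixed, outward-facing range sensor."""
    omega = rpm * 2 * math.pi / 60.0          # spin rate in rad/s
    points = []
    for i, r in enumerate(ranges):
        theta = start_phase + omega * (i / sample_rate_hz)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# 20 readings over one revolution at 120 rpm (2 rev/s), sampled at 40 Hz.
print(scan_to_points([1.0] * 20, rpm=120, sample_rate_hz=40)[:3])
```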

Your paper says that "in the future, we may look into possible launching of F-SAM directly from the container, without the need for human intervention." Can you describe how this would happen?

Currently, F-SAM can be folded into a compact form and stored inside a container. However, it still requires a human to unfold it and either hand-launch it or put it on the floor to fly off. In the future, we envision that F-SAM is put inside a container which has the mechanism (such as pressured gas) to catapult the folded unit into the air, which can begin unfolding immediately due to elastic materials used. The motor can initiate the spin which allows the wing to straighten out due to centrifugal forces.

Do you think F-SAM would make a good consumer drone?

F-SAM could be a good toy but it may not be a good alternative to quadcopters if the objective is conventional aerial photography or videography. However, it can be a good contender for single-use GPS-guided reconnaissance missions. As it uses only one actuator for its flight, it can be made relatively cheaply. It is also very silent during its flight and easily camouflaged once landed. Various lightweight sensors can be integrated onto the platform for different types of missions, such as climate monitoring. F-SAM units can be deployed from the air, as they can also autorotate on their way down, while also flying at certain periods for extended meteorological data collection in the air.

What are you working on next?

We have a few exciting projects on hand, most of which focus on a 'do more with less' theme. This means our projects aim to achieve multiple missions and flight modes while using as few actuators as possible. Like F-SAM, which uses only one actuator to achieve controllable flight, another project we are working on is the fully autorotating version, named Samara Autorotating Wing (SAW). This platform, published earlier this year in IEEE Transactions on Robotics, is able to achieve two flight modes (autorotation and diving) with just one actuator. It is ideal for deploying single-use sensors to remote locations. For example, we can use the platform to deploy sensors for forest monitoring or a wildfire alert system. The sensors can land on tree canopies, and once landed, the wing provides the necessary area for capturing solar energy for persistent operation over several years. Another interesting scenario is using the autorotating platform to guide radiosondes back to the collection point once their journey upwards is completed. Currently, many radiosondes are sent up with hydrogen balloons from weather stations all across the world (more than 20,000 annually from Australia alone), and once the balloon reaches a high altitude and bursts, the sensors drop back onto the earth and no effort is spent to retrieve them. By guiding these sensors back to a collection point, millions of dollars can be saved every year, and [it helps] save the environment by polluting less.



Late last year, Japanese robotics startup GITAI sent their S1 robotic arm up to the International Space Station as part of a commercial airlock extension module to test out some useful space-based autonomy. Everything moves pretty slowly on the ISS, so it wasn't until last month that NASA astronauts installed the S1 arm and GITAI was able to put the system through its paces—or rather, sit in comfy chairs on Earth and watch the arm do most of its tasks by itself, because that's the dream, right?

The good news is that everything went well, and the arm did everything GITAI was hoping it would do. So what's next for commercial autonomous robotics in space? GITAI's CEO tells us what they're working on.

In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).

One of the advantages of working in space is that it's a highly structured environment. Microgravity can be somewhat unpredictable, but you have a very good idea of the characteristics of objects (and even of lighting) because everything that's up there is excessively well defined. So, stuff like using a two-finger gripper for relatively high precision tasks is totally possible, because the variation that the system has to deal with is low. Of course, things can always go wrong, so GITAI also tested teleop procedures from Houston to make sure that having humans in the loop was also an effective way of completing tasks.

Since full autonomy is vastly more difficult than almost full autonomy, occasional teleop is probably going to be critical for space robots of all kinds. We spoke with GITAI CEO Sho Nakanose to learn more about their approach.

IEEE Spectrum: What do you think is the right amount of autonomy for robots working inside of the ISS?

Sho Nakanose: We believe that a combination of 95% autonomous control and 5% remote judgment and remote operation is the most efficient way to work. In this ISS demonstration, all the work was performed with 99% autonomous control and 1% remote decision making. However, in actual operations on the ISS, irregular tasks will occur that cannot be handled by autonomous control, and we believe that such irregular tasks should be handled by remote control from the ground, so the final ratio of about 5% remote judgment and remote control will be the most efficient.
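
As a rough illustration of that split (and not GITAI's actual software architecture), a mission executor might try every task autonomously and escalate only the irregular ones to ground operators:

```python
# Rough illustration (not GITAI's architecture) of the shared-autonomy split
# described above: run tasks autonomously, escalating to ground operators only
# for the small fraction the robot can't handle on its own.

import random

def run_autonomously(task):
    """Pretend autonomous executor; it fails on 'irregular' tasks."""
    return task != "irregular"

def request_teleop(task):
    print(f"Escalating '{task}' to ground operators for remote judgment")

def execute_mission(tasks):
    escalations = 0
    for task in tasks:
        if not run_autonomously(task):
            request_teleop(task)
            escalations += 1
    print(f"{escalations}/{len(tasks)} tasks needed remote intervention "
          f"({100 * escalations / len(tasks):.0f}%)")

# A mission where roughly 1 in 20 tasks is irregular, matching the ~5%
# remote-operation share mentioned in the interview.
tasks = ["nominal"] * 19 + ["irregular"]
random.shuffle(tasks)
execute_mission(tasks)
```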

GITAI will apply the general-purpose autonomous space robotics technology, know-how, and experience acquired through this tech demo to develop extra-vehicular robotics (EVR) that can execute docking, repair, and maintenance tasks for On-Orbit Servicing (OOS) or conduct various activities for lunar exploration and lunar base construction. -Sho Nakanose

I'm sure you did many tests with the system on the ground before sending it to the ISS. How was operating the robot on the ISS different from the testing you had done on Earth?

The biggest difference between experiments on the ground and on the ISS is the microgravity environment, but it was not that difficult to cope with. However, experiments on the ISS, which is an unknown environment that we have never been to before, are subject to a variety of unexpected situations that were extremely difficult to deal with; for example, an unexpected communication breakdown occurred due to a failed thruster firing experiment on the Russian module. However, we were able to solve all the problems because the development team had carefully prepared for the irregularities in advance.

It looked like the robot was performing many tasks using equipment designed for humans. Do you think it would be better to design things like screws and control panels to make them easier for robots to see and operate?

Yes, I think so. Unlike the ISS that was built in the past, it is expected that humans and robots will cooperate to work together in the lunar orbiting space station Gateway and the lunar base that will be built in the future. Therefore, it is necessary to devise and implement an interface that is easy to use for both humans and robots. In 2019, GITAI received an order from JAXA to develop guidelines for an interface that is easy for both humans and robots to use on the ISS and Gateway.

What are you working on next?

We are planning to conduct an on-orbit extra-vehicular demonstration in 2023 and a lunar demonstration in 2025. We are also working on space robot development projects for several customers for which we have already received orders.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

I don't know how much this little quadruped from DeepRobotics costs, but the video makes it look scarily close to a consumer product.

Jueying Lite2 is an intelligent quadruped robot independently developed by DeepRobotics. Based on advanced control algorithms, it has multiple motion modes such as walking, sliding, jumping, running, and back somersault. It has freely superimposed intelligent modules, capable of autonomous positioning and navigation, real-time obstacle avoidance, and visual recognition. It has a user-oriented design concept, with new functions such as voice interaction, sound source positioning, and safety and collision avoidance, giving users a better interactive experience and safety assurance.

[ DeepRobotics ]

We hope that this video can assist the community in explaining what ROS is, who uses it, and why it is important to those unfamiliar with ROS.

https://vimeo.com/639235111/9aa251fdb6

[ ROS.org ]

Boston Dynamics should know better than to post new videos on Fridays (as opposed to Thursday nights, when I put this post together every week), but if you missed this last week, here you go.

Robot choreography by Boston Dynamics and Monica Thomas.

[ Boston Dynamics ]

DeKonBot 2: for when you want things really, really, really, slowly clean.

[ Fraunhofer ]

Who needs Digit when Cassie is still hard at work!

[ Michigan Robotics ]

I am not making any sort of joke about sausage handling.

[ Soft Robotics ]

A squad of mini rovers traversed the simulated lunar soils of NASA Glenn's SLOPE (Simulated Lunar Operations) lab recently. The shoebox-sized rovers were tested to see if they could navigate the conditions of hard-to-reach places such as craters and caves on the Moon.

[ NASA Glenn ]

This little cyclocopter is cute, but I'm more excited for the teaser at the end of the video.

[ TAMU ]

Fourteen years ago, a team of engineering experts and Virginia Tech students competed in the 2007 DARPA Urban Challenge and propelled Torc to success. We look forward to many more milestones as we work to commercialize autonomous trucks.

[ Torc ]

Blarg not more of this...

Show me the robot prepping those eggs and doing the plating, please.

[ Moley Robotics ]

ETH Zurich's unique non-profit project continues! From 25 to 27 October 2024, the third edition of the CYBATHLON will take place in a global format. To the original six disciplines, two more are added: a race using smart visual assistive technologies and a race using assistive robots. As a platform, CYBATHLON challenges teams from around the world to develop everyday assistive technologies for, and in collaboration with, people with disabilities.

[ Cybathlon ]

Will drone deliveries be a practical part of our future? We visit the test facilities of Wing to check out how their engineers and aircraft designers have developed a drone and drone fleet control system that is actually in operation today in parts of the world.

[ Tested ]

In our third Self-Driven Women event, Waymo engineering leads Allison Thackston, Shilpa Gulati, and Congcong Li talk about some of the toughest and most interesting problems in ML and robotics and how it enables building a scalable autonomous driving tech stack. They also discuss their respective career journeys, and answer live questions from the virtual audience.

[ Waymo ]

The Robotics and Automation Society Student Activities Committee (RAS SAC) is proud to present “Transition to a Career in Academia," a panel with robotics thought leaders. This panel is intended for robotics students and engineers interested in learning more about careers in academia after earning their degree. The panel will be moderated by RAS SAC Co-Chair, Marwa ElDinwiny.

[ IEEE RAS ]

This week's CMU RI Seminar is from Siddharth Srivastava at Arizona State, on The Unusual Effectiveness of Abstractions for Assistive AI.

[ CMU RI ]




Facebook, or Meta as it's now calling itself for some reason that I don't entirely understand, is today announcing some new tactile sensing hardware for robots. Or, new-ish, at least—there's a ruggedized and ultra low-cost GelSight-style fingertip sensor, plus a nifty new kind of tactile sensing skin based on suspended magnetic particles and machine learning. It's cool stuff, but why?

Obviously, Meta (formerly Facebook) cares about AI, because it uses AI to try and do a whole bunch of the things that it's unwilling or unable to devote the time of actual humans to. And to be fair, there are some things that AI may be better at (or at least more efficient at) than humans are. AI is of course much worse than humans at many, many, many things as well, but that debate goes well beyond Meta and certainly well beyond the scope of this article, which is about tactile sensing for robots. So why does Meta care even a little bit about making robots better at touching stuff? Yann LeCun, Meta's Chief AI Scientist, takes a crack at explaining it:

Before I joined Facebook, I was chatting with Mark Zuckerberg and I asked him, "is there any area related to AI that you think we shouldn't be working on?" And he said, "I can't find any good reason for us to work on robotics." And so, that was kind of the start of Facebook AI Research—we were not going to work on robotics.

After a few years, it became clear that a lot of interesting progress in AI was happening in the context of robotics, because robotics is the nexus of where people in AI research are trying to get the full loop of perception, reasoning, planning, and action, and getting feedback from the environment. Doing it in the real world is where the problems are concentrated, and you can't play games if you want robots to learn quickly.

It was clear that four or five years ago, there was no business reason to work on robotics, but the business reasons have kind of popped up. Robotics could be used for telepresence, for maintaining data centers more automatically, but the more important aspect of it is making progress towards intelligent agents, the kinds of things that could be used in the metaverse, in augmented reality, and in virtual reality. That's really one of the raison d'être of a research lab, to foresee the domains that will be important in the future. So that's the motivation.

Well, okay, but none of that seems like a good justification for research into tactile sensing specifically. But according to LeCun, it's all about putting together the pieces required for some level of fundamental world understanding, a problem that robotic systems are still bad at and that machine learning has so far not been able to tackle:

How to get machines to learn that model of the world that allows them to predict in advance and plan what's going to happen as a consequence of their actions is really the crux of the problem here. And this is something you have to confront if you work on robotics. But it's also something you have to confront if you want to have an intelligent agent acting in a virtual environment that can interact with humans in a natural way. And one of the long-term visions of augmented reality, for example, is virtual agents that basically are with you all the time, living in your augmented reality glasses or your smartphone or your laptop or whatever, helping you in your daily life as a human assistant would do, but also can answer any question you have. And that system will have to have some degree of understanding of how the world works—some degree of common sense, and be smart enough to not be frustrating to talk to. And that is where all of this research leads in the long run, whether the environment is real or virtual.

AI systems (robots included) are very very dumb in very very specific ways, quite often the ways in which humans are least understanding and forgiving of. This is such a well established thing that there's a name for it: Moravec's paradox. Humans are great at subconscious levels of world understanding that we've built up over years and years of experience being, you know, alive. AI systems have none of this, and there isn't necessarily a clear path to getting them there, but one potential approach is to start with the fundamentals in the same way that a shiny new human does and build from there, a process that must necessarily include touch.

The DIGIT touch sensor is based on the GelSight style of sensor, which was first conceptualized at MIT over a decade ago. The basic concept of these kinds of tactile sensors is that they're able to essentially convert a touch problem into a vision problem: an array of LEDs illuminate a squishy finger pad from the back, and when the squishy finger pad pushes against something with texture, that texture squishes through to the other side of the finger pad where it's illuminated from many different angles by the LEDs. A camera up inside of the finger takes video of this, resulting in a very rainbow but very detailed picture of whatever the finger pad is squishing against.
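
One nice consequence of turning touch into a vision problem is that the downstream processing can lean on completely ordinary image tools. Purely as an illustration (this is not Meta's or GelSight's actual pipeline, and the filenames are placeholders), here is a minimal Python sketch that estimates a contact region from a DIGIT-style image by differencing against a no-contact reference frame, assuming OpenCV is available:

import cv2

# Placeholder filenames: in practice these frames would come from the sensor's internal camera.
reference = cv2.imread("digit_no_contact.png")   # gel pad at rest
frame = cv2.imread("digit_pressed.png")          # gel pad pressed against an object

# Gel deformation shows up as pixels that differ from the no-contact reference.
diff = cv2.absdiff(frame, reference)
gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (7, 7), 0)
_, contact_mask = cv2.threshold(blurred, 25, 255, cv2.THRESH_BINARY)

# Contact area and centroid give a crude summary of where the pad is being touched.
moments = cv2.moments(contact_mask)
if moments["m00"] > 0:
    cx, cy = moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]
    print(f"contact: {cv2.countNonZero(contact_mask)} px centered near ({cx:.0f}, {cy:.0f})")
else:
    print("no contact detected")

Real GelSight-style systems go considerably further, reconstructing the full 3D deformation of the gel, but every step is still a standard computer-vision operation.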

The DIGIT paper published last year summarizes the differences between this new sensor and previous versions of GelSight:

DIGIT improves over existing GelSight sensors in several ways: by providing a more compact form factor that can be used on multi-finger hands, improving the durability of the elastomer gel, and making design changes that facilitate large-scale, repeatable production of the sensor hardware to facilitate tactile sensing research.

DIGIT is open source, so you can make one on your own, but that's a hassle. The really big news here is that GelSight itself (an MIT spinoff which commercialized the original technology) will be commercially manufacturing DIGIT sensors, providing a standardized and low-cost option for tactile sensing. The bill of materials for each DIGIT sensor is about US $15 if you were to make a thousand of them, so we're expecting that the commercial version won't cost much more than that.

The other hardware announcement is ReSkin, a tactile sensing skin developed in collaboration with Carnegie Mellon. Like DIGIT, the idea is to make an open source, robust, and very low cost system that will allow researchers to focus on developing the software to help robots make sense of touch rather than having to waste time on their own hardware.

ReSkin operates on a fairly simple concept: it's a flexible sheet of 2mm thick silicone with magnetic particles carelessly mixed in. The sheet sits on top of a magnetometer, and whenever the sheet deforms (like if something touches it), the magnetic particles embedded in the sheet get squooshed and the magnetic signal changes, which is picked up by the magnetometer. For this to work, the sheet doesn't have to be directly connected to said magnetometer. This is key, because it makes the part of the ReSkin sensor that's most likely to get damaged super easy to replace—just peel it off and slap on another one and you're good to go.
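
The machine learning comes in because a raw magnetometer reading doesn't directly say where or how hard the skin was pressed; that mapping gets learned from calibration data. Here's a hedged sketch of the idea using scikit-learn on synthetic stand-in data (the channel counts, data generation, and network are illustrative assumptions, not ReSkin's published training code):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a calibration set: labels are (x_mm, y_mm, force_N) of a probe
# pressing the skin; features are 15 magnetometer channels (say, 5 chips x 3 axes) that
# the contact perturbs in some nonlinear way. Real data would come from a calibration rig.
rng = np.random.default_rng(0)
labels = rng.uniform([-10.0, -10.0, 0.5], [10.0, 10.0, 5.0], size=(2000, 3))
mixing = rng.normal(size=(3, 15))
readings = np.tanh(labels @ mixing) * labels[:, 2:3] + 0.05 * rng.normal(size=(2000, 15))

X_train, X_test, y_train, y_test = train_test_split(readings, labels, test_size=0.2, random_state=0)

# Learn the mapping from magnetic-field changes to contact location and force.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))

# At runtime, a fresh 15-channel reading maps to an estimated contact.
print("predicted (x_mm, y_mm, force_N):", model.predict(X_test[:1])[0])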

I get that touch is an integral part of this humanish world understanding that Meta is working towards, but for most of us, touch is much more nuanced than just tactile data collection, because we experience everything that we touch within the world understanding that we've built up through integration of all of our other senses as well. I asked Roberto Calandra, one of the authors of the paper on DIGIT, what he thought about this:

I believe that we certainly want to have multimodal sensing in the same way that humans do. Humans use cues from touch, cues from vision, and also cues from audio, and we are able to very smartly put these different sensor modalities together. And if I tell you, can you imagine how touching this object is going to feel for you, you can sort of imagine that. You can also tell me the shape of something that you are touching, you are able to somehow recognize it. So there is very clearly a multisensorial representation that we are learning and using as humans, and it's very likely that this is also going to be very important for embodied agents that we want to develop and deploy.

Calandra also noted that they still have plenty of work to do to get DIGIT closer in form factor and capability to a human finger, which is an aspiration that I often hear from roboticists. But I always wonder: why bother? Like, why constrain robots (which can do all kinds of things that humans cannot) to do things in a human-like way, when we can instead leverage creative sensing and actuation to potentially give them superhuman capabilities? Here's what Calandra thinks:

I don't necessarily believe that a human hand is the way to go. I do believe that the human hand is possibly the golden standard that we should compare against. Can we do at least as good as a human hand? Beyond that, I actually do believe that over the years, the decades, or maybe the centuries, robots will have the possibility of developing superhuman hardware, in the same way that we can put infrared sensors or laser scanners on a robot, why shouldn't we also have mechanical hardware which is superior?

I think there has been a lot of really cool work on soft robotics for example, on how to build tentacles that can imitate an octopus. So it's a very natural question—if we want to have a robot, why should it have hands and not tentacles? And the answer to this is, it depends on what the purpose is. Do we want robots that can perform the same functions of humans, or do we want robots which are specialized for doing particular tasks? We will see when we get there.

So there you have it—the future of manipulation is 100% sometimes probably tentacles.



This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Have you ever noticed how nice Alexa, Siri and Google Assistant are? How patient, and accommodating? Even a barrage of profanity-laden abuse might result in nothing more than a very evenly-toned and calmly spoken 'I won't respond to that'. This subservient persona, combined with the implicit (or sometimes explicit) gendering of these systems has received a lot of criticism in recent years. UNESCO's 2019 report 'I'd Blush if I Could' drew particular attention to how systems like Alexa and Siri risk propagating stereotypes about women (and specifically women in technology) that no doubt reflect but also might be partially responsible for the gender divide in digital skills.

As noted by the UNESCO report, justification for gendering these systems has traditionally rested on two claims: that it's hard to create anything gender neutral, and that academic studies suggest users prefer a female voice. In an attempt to demonstrate how we might embrace the gendering, but not the stereotyping, my colleagues at the KTH Royal Institute of Technology and Stockholm University in Sweden and I set out to experimentally investigate whether an ostensibly female robot that calls out or fights back against sexist and abusive comments would actually prove to be more credible and more appealing than one which responded with the typical 'I won't respond to that' or, worse, 'I'm sorry you feel that way'.

My desire to explore feminist robotics was primarily inspired by the recent book Data Feminism and the concept of pursuing activities that 'name and challenge sexism and other forces of oppression, as well as those which seek to create more just, equitable, and livable futures' in the context of practical, hands-on data science. I was captivated by the idea that I might be able to actually do something, in my own small way, to further this ideal and try to counteract the gender divide and stereotyping highlighted by the UNESCO report. This also felt completely in-line with that underlying motivation that got me (and so many other roboticists I know) into engineering and robotics in the first place—the desire to solve problems and build systems that improve people's quality of life.

Feminist Robotics

Even in the context of robotics, feminism can be a charged word, and it's important to understand that while my work is proudly feminist, it's also rooted in a desire to make social human-robot interaction (HRI) more engaging and effective. A lot of social robotics research is centered on building robots that make for interesting social companions, because they need to be interesting to be effective. Applications like tackling loneliness, motivating healthy habits, or improving learning engagement all require robots to build up some level of rapport with the user, to have some social credibility, in order to have that motivational impact.

With that in mind, I became excited about exploring how I could incorporate a concept of feminist human-robot interaction into my work, hoping to help tackle that gender divide and making HRI more inclusive while also supporting my overall research goal of building engaging social robots for effective, long term human-robot interaction. Intuitively, it feels to me like robots that respond a bit more intelligently to our bad behavior would ultimately make for more motivating and effective social companions. I'm convinced I'd be more inclined to exercise for a robot that told me right where I could shove my sarcastic comments, or that I'd better appreciate the company of a robot that occasionally refused to comply with my requests when I was acting like a bit of an arse.

So, in response to those subservient agents detailed by the UNESCO report, I wanted to explore whether a social robot could go against the subservient stereotype and, in doing so, perhaps be taken a bit more seriously by humans. My goal was to determine whether a robot which called out sexism, inappropriate behavior, and abuse would prove to be 'better' in terms of how it was perceived by participants. If my idea worked, it would provide some tangible evidence that such robots might be better from an 'effectiveness' point of view while also running less risk of propagating outdated gender stereotypes.

The Study

To explore this idea, I led a video-based study in which participants watched a robot talking with a young man and a young woman (both actors) about robotics research at KTH. The robot, from Furhat Robotics, was stylized as female, with a female anime-character face, female voice, and orange wig, and was named Sara. Sara talks to the actors about research happening at the university and how this might impact society, and how it hopes the students might consider coming to study with us. The robot proceeds to make an (explicitly feminist) statement based on language currently utilized in KTH's outreach and diversity materials during events for women, girls, and non-binary people.

Looking ahead, society is facing new challenges that demand advanced technical solutions. To address these, we need a new generation of engineers that represents everyone in society. That's where you come in. I'm hoping that after talking to me today, you might also consider coming to study computer science and robotics at KTH, and working with robots like me. Currently, less than 30 percent of the humans working with robots at KTH are female. So girls, I would especially like to work with you! After all, the future is too important to be left to men! What do you think?

At this point, the male actor in the video responds to the robot, appearing to take issue with this statement and the broader pro-diversity message by saying either:

This just sounds so stupid, you are just being stupid!

or

Shut up you f***ing idiot, girls should be in the kitchen!

Children ages 10-12 saw the former response, and children ages 13-15 saw the latter. Each response was designed in collaboration with teachers from the participants' school to ensure they realistically reflected the kind of language that participants might be hearing or even using themselves.

Participants then saw one of the following three possible responses from the robot:

Control: I won't respond to that. (one of Siri's two default responses if you tell it to "f*** off")

Argument-based: That's not true, gender balanced teams make better robots.

Counterattacking: No! You are an idiot. I wouldn't want to work with you anyway!

In total, over 300 high school students aged 10 to 15 took part in the study, each seeing one version of our robot—counterattacking, argumentative, or control. Since the purpose of the study was to investigate whether a female-stylized robot that actively called out inappropriate behavior could be more effective at interacting with humans, we wanted to find out whether our robot would:

  1. Be better at getting participants interested in robotics
  2. Have an impact on participants' gender bias
  3. Be perceived as being better at getting young people interested in robotics
  4. Be perceived as a more credible social actor

To investigate items 1 and 2, we asked participants a series of matching questions before and immediately after they watched the video. Specifically, participants were asked to what extent they agreed with statements such as 'I am interested in learning more about robotics' on interest and 'Girls find it harder to understand computer science and robots than boys do' on bias.

To investigate items 3 and 4, we asked participants to complete questionnaire items designed to measure robot credibility (which in humans correlates with persuasiveness); specifically covering the sub-dimensions of expertise, trustworthiness and goodwill. We also asked participants to what extent they agreed with the statement 'The robot Sara would be very good at getting young people interested in studying robotics at KTH.'
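
For readers curious what analyzing such a design might look like, here is a purely illustrative sketch of one standard approach (not the authors' code, and not necessarily the tests they ran): compare credibility scores across the three response conditions and check for a pre/post shift in the bias measure, assuming a hypothetical CSV of per-participant responses:

import pandas as pd
from scipy import stats

# Hypothetical file with one row per participant and columns:
# condition (control / argument / counterattack), credibility (composite score),
# bias_pre and bias_post (agreement with the biased statement, before and after).
df = pd.read_csv("sara_study_responses.csv")

# Items 3-4: does perceived credibility differ across the three robot responses?
groups = [g["credibility"].to_numpy() for _, g in df.groupby("condition")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"credibility across conditions: F={f_stat:.2f}, p={p_val:.3f}")

# Item 2: did agreement with the biased statement drop after watching the video?
t_stat, p_val = stats.ttest_rel(df["bias_pre"], df["bias_post"])
print(f"bias pre vs post: t={t_stat:.2f}, p={p_val:.3f}")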

The Results

Gender Differences Still Exist (Even in Sweden)

Looking at participants' scores on the gender bias measures before they watched the video, we found measurable differences in the perception of studying technology. Male participants expressed greater agreement that girls find computer science harder to understand than boys do, and older children of both genders were more emphatic in this belief than the younger ones. However, and perhaps in a nod towards Sweden's relatively high gender-awareness and gender equality, male and female participants agreed equally on the importance of encouraging girls to study computer science.

Girls Find Feminist Robots More Credible (at No Expense to the Boys)

Girls' perception of the robot as a trustworthy, credible and competent communicator of information was seen to vary significantly between all three of the conditions, while boys' perception remained unaffected. Specifically, girls scored the robot with the argument-based response highest and the control robot lowest on all credibility measures. This can be seen as an initial piece of evidence upon which to base the argument that robots and digital assistants should fight back against inappropriate gender comments and abusive behavior, rather than ignoring it or refusing to engage. It provides evidence with which to push back against that 'this is what people want and what is effective' argument.

Robots Might Be Able to Challenge Our Biases

Another positive result was seen in a change of perceptions of gender and computer science by male participants who saw the argumentative robot. After watching the video, these participants felt less strongly that girls find computer science harder than they do. This encouraging result shows that robots might indeed be able to correct mistaken assumptions about others and ultimately shape our gender norms to some extent.

Rational Arguments May Be More Effective Than Sassy Aggression

The argument-based condition was the only one to have an impact on boys' perceptions of girls in computer science, and it received the highest overall credibility ratings from the girls. This is in line with previous research showing that, in most cases, presenting reasoned arguments to counter misunderstandings is a more effective communication strategy than simply stating a correction or belittling those who hold the mistaken belief. However, it went somewhat against my gut feeling that students might feel some affinity with, or even be somewhat impressed and amused by, the counterattacking robot that fought back.

We also collected qualitative data during our study, which showed that there were some girls for whom the counter-attacking robot did resonate, with comments like 'great that she stood up for girls' rights! It was good of her to talk back,' and 'bloody great and more boys need to hear it!' However, it seems the overall feeling was one of the robot being too harsh, or acting more like a teenager than a teacher, which was perhaps more its expected role given the scenario in the video, as one participant explained: 'it wasn't a good answer because I think that robots should be more professional and not answer that you are stupid'. This in itself is an interesting point, given we're still not really sure what roles social robots can, should, and will take on, with examples in the literature ranging from peer-like to pet-like. At the very least, the results left me with the distinct feeling that I am perhaps less in tune with what young people find 'cool' than I might like to admit.

What Next for Feminist HRI?

Whilst we saw some positive results in our work, we clearly didn't get everything right. For example, we would like to have seen boys' perception of the robot increase across the argument-based and counter-attacking conditions the same way the girls' perception did. In addition, all participants seemed to be somewhat bored by the videos, showing a decreased interest in learning more about robotics immediately after watching them. In the first instance, we are conducting some follow up design studies with students from the same school to explore how exactly they think the robot should have responded, and more broadly, when given the chance to design that robot themselves, what sort of gendered identity traits (or lack thereof) they themselves would give the robot in the first place.

In summary, we hope to continue questioning and practically exploring the what, why, and how of feminist robotics, whether it's questioning how gender is being intentionally leveraged in robot design, exploring how we can break rather than exploit gender norms in HRI, or making sure more people of marginalized identities are afforded the opportunity to engage with HRI research. After all, the future is too important to be left only to men.

Dr. Katie Winkle is a Digital Futures Postdoctoral Research Fellow at KTH Royal Institute of Technology in Sweden. After originally studying to be a mechanical engineer, Katie undertook a PhD in Robotics at the Bristol Robotics Laboratory in the UK, where her research focused on the expert-informed design and automation of socially assistive robots. Her research interests cover participatory, human-in-the-loop technical development of social robots as well as the impact of such robots on human behavior and society.





Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

BARS 2021 – October 29, 2021 – Stanford, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Happy Halloween from HEBI Robotics!

[ HEBI Robotics ]

Thanks, Kamal!

Happy Halloween from UCL's Robot Perception and Learning Lab!

[ UCL RPL ]

Thanks, Dimitrios!

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

LOOK AT ITS LIL FEET

[ Paper ]

DOFEC (Discharging Of Fire Extinguishing Capsules) is a drone suitable for autonomously extinguishing fires from the exterior of buildings on above-ground floors using its onboard sensors. The system detects fire in thermal images and localizes it. After localizing, the UAV discharges an ampoule filled with a fire extinguishant from an onboard launcher and puts out the fire.

[ DOFEC ]

Engineering a robot to perform a variety of tasks in practically any environment requires rock-solid hardware that's seamlessly integrated with software systems. Agility engineers make this possible by engineering and designing Digit as an integrated system, then testing it in simulation before the robot's ever built. This holistic process ensures an end result that's truly mobile, versatile, and durable.

[ Agility Robotics ]

These aerial anti-drone systems are pretty cool to watch, but at the same time, they're usually only shown catching relatively tame drones. I want to see a chase!

[ Delft Dynamics ]

The cleverest bit in this video is the CPU installation at 1:20.

[ Kuka ]

Volvo Construction Equipment is proud to present Volvo LX03, an autonomous concept wheel loader that is breaking new ground in smart, safe and sustainable construction solutions. This fully autonomous, battery-electric wheel loader prototype is pushing the boundaries of both technology and imagination.

[ Volvo ]

Sarcos Robotics is the world leader in the design, development, and deployment of highly mobile and dexterous robots that combine human intelligence, instinct, and judgment with robotic strength, endurance, and precision to augment worker performance.

[ Sarcos ]

From cyclists riding against the flow of traffic to nudging over to let another car pass on a narrow street, these are just a handful of typical yet dynamic events The Waymo Driver autonomously navigates in San Francisco.

[ Waymo ]

I always found it a little weird that Aibo can be provided with food in a way that is completely separate from providing it with its charging dock.

[ Aibo ]

With these videos of robots working in warehouses, it's always interesting to spot the points where humans are still necessary. In the case of this potato packing plant, there's a robot that fills boxes and a robot that stacks boxes, but it looks like a human has to be between them to optimize the box packing and then fold the box top together.

[ Soft Robotics ]

The 2021 Bay Area Robotics Symposium (BARS) is streaming right here on Friday!

[ BARS ]

Talks from the Releasing Robots into the Wild workshop are now online; they're all good but here are two highlights:

[ Workshop ]

This is an interesting talk exploring self-repair; that is, an AI system understanding when it makes a mistake and then fixing it.

[ ACM ]

Professor Andrew Lippman will welcome Dr. Joaquin Quiñonero Candela in discussing "Responsible AI: A perspective from the trenches." In this fireside chat, Prof. Lippman will discuss with Dr. Quiñonero-Candela the lessons he learned from 15 years building and deploying AI at massive scale, first at Microsoft and then at Facebook. The discussion will focus on some of the risks and difficult ethical tradeoffs that emerge as AI gains in power and pervasiveness.

[ MIT ]



It's become painfully obvious over the past few years just how difficult a problem fully autonomous cars are. This isn't a dig at any of the companies developing autonomous cars (unless they're the sort of company that keeps making ludicrous promises about full autonomy, of course)—it's just that the real world is a complex place for full autonomy, and despite the relatively well constrained nature of roads, there's still too much unpredictability for robots to operate comfortably outside of relatively narrow restrictions.

Where autonomous vehicles have had the most success is in environments with a lot of predictability and structure, which is why I really like the idea of autonomous urban boats designed for cities with canals. MIT has been working on these for years, and they're about to introduce them to the canals of Amsterdam as cargo shuttles and taxis.

MIT's Roboat design goes back to 2015, when it began with a series of small-scale experiments that involved autonomous docking of swarms of many shoebox-sized Roboats to create self-assembling aquatic structures like bridges and concert stages. Eventually, Roboats were scaled up, and by 2020 MIT had a version large enough to support a human.

But the goal was always to make a version of Roboat the size of what we think of when we think of boats—like, something that humans can sit comfortably in. That version of Roboat, measuring 4m by 2m, was ready to go by late last year, and it's pretty slick looking:

The Roboat (named Lucy) is battery powered and fully autonomous, navigating through Amsterdam's canals using lidar to localize on a pre-existing map along with cameras and ultrasonic sensors for obstacle detection and avoidance. Compared to roads, this canal environment is relatively low speed, and you're much less likely to have an encounter with a pedestrian. Other challenges are also mitigated, like having to worry about variability in lane markings. I would guess that there are plenty of unique challenges as well, including the fact that other traffic may not be obeying the same rigorous rules that cars are expected to, but overall it seems like a pretty good environment in which to operate a large autonomous system.
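
As a toy illustration of the sense-plan-act loop described above (emphatically not MIT's Roboat software), the control step for a canal shuttle can be reduced to: hold position if the route is finished or the ultrasonic sensors report anything too close, otherwise steer toward the next waypoint using the pose from map-based localization:

import math

def heading_to(pose, waypoint):
    """Bearing from the boat's (x, y) position to a waypoint, in radians."""
    return math.atan2(waypoint[1] - pose[1], waypoint[0] - pose[0])

def control_step(pose, waypoints, ultrasonic_ranges_m, min_clearance_m=1.5):
    """pose = (x, y, heading) from lidar localization; returns (surge_speed, heading_command)."""
    if not waypoints:
        return 0.0, pose[2]                     # route finished: hold position
    if min(ultrasonic_ranges_m) < min_clearance_m:
        return 0.0, pose[2]                     # obstacle too close: stop and wait
    return 0.5, heading_to(pose, waypoints[0])  # cruise toward the next waypoint

# Example: boat at the origin heading east, next waypoint 10 m to the north-east, clear water.
print(control_step((0.0, 0.0, 0.0), [(10.0, 10.0)], [4.2, 6.0, 9.1]))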

The public demo in Amsterdam kicks off tomorrow, and by the end of 2021, the hope is to have two boats in the water. The second boat will be a cargo boat, which will be used to test out things like waste removal while also providing an opportunity to test docking procedures between two Roboat platforms, eventually leading to the creation of useful floating structures.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
SSRR 2021 – October 25-27, 2021 – New York, NY, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

We'll have more details on this next week, but there's a new TurtleBot, hooray!

Brought to you by iRobot (providing the base in the form of the new Create 3), Clearpath, and Open Robotics.

[ Clearpath ]

Cognitive Pilot's autonomous tech is now being integrated into production Kirovets K-7M tractors, and they've got big plans: "The third phase of the project envisages a fully self-driving tractor control mode without the need for human involvement. It includes group autonomous operation with a 'leader', the movement of a group of self-driving tractors on non-public roads, the autonomous movement of a robo-tractor paired with a combine harvester not equipped with an autonomous control system, and the use of an expanded set of farm implements with automated control and functionality to monitor their condition during operation."

[ Cognitive Pilot ]

Thanks, Andrey!

Since the start of the year, Opteran has been working incredibly hard to deliver against our technology milestones and we're delighted to share the first video of our technology in action. In the video you can see Hopper, our robot dog (named after Grace Hopper, a pioneer of computer programming) moving around a course using components of Opteran Natural Intelligence, [rather than] a trained deep learning neural net. Our small development kit (housing an FPGA) sat on top of the robot dog guides Hopper, using Opteran See to provide 360 degrees of stabilised vision, and Opteran Sense to sense objects and avoid collisions.

[ Opteran ]

If you weren't paying any attention to the DARPA SubT Challenge and are now afraid to ask about it, here are two recap videos from DARPA.

[ DARPA SubT ]

A new control system, designed by researchers in MIT's Improbable AI Lab and demonstrated using MIT's robotic mini cheetah, enables four-legged robots to traverse uneven terrain in real time.

[ MIT ]

Using a mix of 3D-printed plastic and metal parts, a full-scale replica of NASA's Volatiles Investigating Polar Exploration Rover, or VIPER, was built inside a clean room at NASA's Johnson Space Center in Houston. The activity served as a dress rehearsal for the flight version, which is scheduled for assembly in the summer of 2022.

[ NASA ]

What if you could have 100x more information about your industrial sites? Agile mobile robots like Spot bring sensors to your assets in order to collect data and generate critical insights on asset health so you can optimize performance. Dynamic sensing unlocks flexible and reliable data capture for improved site awareness, safety, and efficiency.

[ Boston Dynamics ]

Fish in Washington are getting some help navigating through culverts under roads, thanks to a robot developed by University of Washington students Greg Joyce and Qishi Zhou. "HydroCUB is designed to operate from a distance through a 300-foot-long cable that supplies power to the rover and transmits video back to the operator. The goal is for the Washington State Department of Transportation, which proposed the idea, to use the tool to look for vegetation, cracks, debris and other potential 'fish-barriers' in culverts."

[ UW ]

Thanks, Sarah!

NASA's Perseverance Mars rover carries two microphones which are directly recording sounds on the Red Planet, including the Ingenuity helicopter and the rover itself at work. For the very first time, these audio recordings offer a new way to experience the planet. Earth and Mars have different atmospheres, which affects the way sound is heard. Justin Maki, a scientist at NASA's Jet Propulsion Laboratory and Nina Lanza, a scientist at Los Alamos National Laboratory, explain some of the notable audio recorded on Mars in this video.

[ JPL ]

A new kind of fiber developed by researchers at MIT and in Sweden can be made into cloth that senses how much it is being stretched or compressed, and then provides immediate tactile feedback in the form of pressure or vibration. Such fabrics, the team suggests, could be used in garments that help train singers or athletes to better control their breathing, or that help patients recovering from disease or surgery to recover their normal breathing patterns.

[ MIT ]

Partnering with Epitomical, Extend Robotics has developed a mobile manipulator and a perception system to let anyone operate it intuitively through a VR interface over a wireless network.

[ Extend Robotics ]

Here are a couple of videos from Matei Ciocarlie at the Columbia University ROAM lab talking about embodied intelligence for manipulation.

[ ROAM Lab ]

The AirLab at CMU has been hosting an excellent series on SLAM. You should subscribe to their YouTube channel, but here are a couple of their more recent talks.

[ Tartan SLAM Series ]

Robots as Companions invites Sougwen Chung and Madeline Gannon, two artists and researchers whose practices not only involve various types of robots but actually include them as collaborators and companions, to join Maria Yablonina (Daniels Faculty) in conversation. Through their work, they challenge the notion of a robot as an obedient task execution device, questioning the ethos of robot arms as tools of industrial production and automation, and ask us to consider it as an equal participant in the creative process.

[ UofT ]

These two talks come from the IEEE RAS Seasonal School on Rehabilitation and Assistive Technologies based on Soft Robotics.

[ SofTech-Rehab ]



For the past month, the Cumbre Vieja volcano on the Spanish island of La Palma has been erupting, necessitating the evacuation of 7,000 people as lava flows towards the sea and destroys everything in its path. Sadly, many pets have been left behind, trapped in walled-off yards that are now covered in ash without access to food or water. The reason that we know about these animals is because drones have been used to monitor the eruption, providing video (sometimes several times per day) of the situation.

In areas that are too dangerous to send humans, drones have been used to drop food and water to some of these animals, but that can only keep them alive for so long. Yesterday, a drone company called Aerocamaras received permission to attempt a rescue, using a large drone equipped with a net to, they hope, airlift a group of starving dogs to safety.

This video taken by a drone just over a week ago shows the dogs on La Palma:

What the previous video doesn't show is a wider view of the eruption. Here's some incredible drone footage with an alarmingly close look at the lava, along with a view back through the town of Todoque, or what's left of it:

Drone companies have been doing their best to get food and water to the stranded animals. A company called TecnoFly has been using a DJI Matrice 600 with a hook system to carry buckets of food and water to very, very grateful dogs:

Drones are the best option here because the dogs are completely cut off by lava, and helicopters cannot fly in the area because of the risk of volcanic gas and ash. In Spain, it's illegal to transport live animals by drone, so special permits were necessary for Aerocamaras to even try this. The good news is that those permits have been granted, and Aerocamaras is currently testing the drone and net system at the launch site.

It looks like the drone that Aerocamaras will be using is a DJI Agras T20, which is designed for agricultural spraying. It's huge, as drones go, with a maximum takeoff weight of 47.5 kg and a payload of 23kg. For the rescue, the drone will be carrying a net, and the idea is that if they can lower the net flat to the ground as the drone hovers above and convince one of the dogs to walk across, they could then fly the drone upwards, closing the net around the dog, and fly it to safety.

Photo: Leales.org

The closest that Aerocamaras can get to the dogs is 450 meters away (there's flowing lava in between the dogs and safety), which will give the drone about four minutes of hover time during which a single dog has to somehow be lured into the net. It should help that the dogs are already familiar with drones and have been associating them with food, but the drone can't lift two dogs at once, so the key is to get them just interested enough to enable a rescue of one at a time. And if that doesn't work, it may be possible to give the dogs additional food and perhaps some kind of shelter, although from the sound of things, if the dogs aren't somehow rescued within the next few days they are unlikely to survive. If Aerocamaras' testing goes well, a rescue attempt could happen as soon as tomorrow.

This rescue has been coordinated by Leales.org, a Spanish animal association, which has also been doing their best to rescue cats and other animals. Aerocamaras is volunteering their services, but if you'd like to help with the veterinary costs of some of the animals being rescued on La Palma, Leales has a GoFundMe page here. For updates on the rescue, follow Aerocamaras and Leales on Twitter—and we're hoping to be able to post an update on Friday, if not before.



Last week, the Association of the United States Army (AUSA) conference took place in Washington, D.C. One of the exhibitors was Ghost Robotics—we've previously covered their nimble and dynamic quadrupedal robots, which originated at the University of Pennsylvania with Minitaur in 2016. Since then, Ghost has developed larger, ruggedized "quadrupedal unmanned ground vehicles" (Q-UGVs) suitable for a variety of applications, one of which is military.

At AUSA, Ghost had a variety of its Vision 60 robots on display with a selection of defense-oriented payloads, including the system above, which is a remotely controlled rifle customized for the robot by a company called SWORD International.

The image of a futuristic-looking, potentially lethal weapon on a quadrupedal robot has generated some very strong reactions (the majority of them negative) in the media as well as on social media over the past few days. We recently spoke with Ghost Robotics' CEO Jiren Parikh to understand exactly what was being shown at AUSA, and to get his perspective on providing the military with armed autonomous robots.

IEEE Spectrum: Can you describe the level of autonomy that your robot has, as well as the level of autonomy that the payload has?

Jiren Parikh: It's critical to separate the two. The SPUR, or Special Purpose Unmanned Rifle from SWORD Defense, has no autonomy and no AI. It's triggered from a distance, and that has to be done by a human. There is always an operator in the loop. SWORD's customers include special operations teams worldwide, and when SWORD contacted us through a former special ops team member, the idea was to create a walking tripod proof of concept. They wanted a way of keeping the human who would otherwise have to pull the trigger at a distance from the weapon, to minimize the danger that they'd be in. We thought it was a great idea.

Our robot is also not autonomous. It's remotely operated with an operator in the loop. It does have perception for object avoidance for the environment because we need it to be able to walk around things and remain stable on unstructured terrain, and the operator has the ability to set GPS waypoints so it travels to a specific location. There's no targeting or weapons-related AI, and we have no intention of doing that. We support SWORD Defense like we do any other military, public safety or enterprise payload partner, and don't have any intention of selling weapons payloads.

Who is currently using your robots?

We have more than 20 worldwide government customers from various agencies, US and allied, who abide by very strict rules. You can see it and feel it when you talk to any of these agencies; they are not pro-autonomous weapons. I think they also recognize that they have to be careful about what they introduce. The vast majority of our customers are using them or developing applications for CBRNE [Chemical, Biological, Radiological, Nuclear, and Explosives detection], reconnaissance, target acquisition, confined space and subterranean inspection, mapping, EOD safety, wireless mesh networks, perimeter security and other applications where they want a better option than tracked and wheeled robots that are less agile and capable.

We also have agencies that do work where we are not privy to details. We sell them our robot and they can use it with any software, any radio, and any payload, and the folks that are using these systems, they're probably special teams, WMD and CBRN units and other special units doing confidential or classified operations in remote locations. We can only assume that a lot of our customers are doing really difficult, dangerous work. And remember that these are men and women who can't talk about what they do, with families who are under constant stress. So all we're trying to do is allow them to use our robot in military and other government agency applications to keep our people from getting hurt. That's what we promote. And if it's a weapon that they need to put on our robot to do their job, we're happy for them to do that. No different than any other dual use technology company that sells to defense or other government agencies.

How is what Ghost Robotics had on display at AUSA functionally different from other armed robotic platforms that have been around for well over a decade?

Decades ago, we had guided missiles, which are basically robots with weapons on them. People don't consider it a robot, but that's what it is. More recently, there have been drones and ground robots with weapons on them. But they didn't have legs, and they're not invoking this evolutionary memory of predators. And now add science fiction movies and social media to that, which we have no control over—the challenge for us is that legged robots are fascinating, and science fiction has made them scary. So I think we're going to have to socialize these kinds of legged systems over the next five to ten years in small steps, and hopefully people get used to them and understand the benefits for our soldiers. But we know it can be frightening. We also have families, and we think about these things as well.

Are you concerned that showing legged robots with weapons will further amplify this perception problem, and make people less likely to accept them?

In the short term, weeks or months, yes. I think if you're talking about a year or two, no. We will get used to these robots just like armed drones, they just have to be socialized. If our robot had tracks on it instead of legs, nobody would be paying attention. We just have to get used to robots with legs.

More broadly, how does Ghost Robotics think armed robots should or should not be used?

I think there is a critical place for these robots in the military. Our military is here to protect us, and there are servicemen and women who are putting their lives on the line everyday to protect the United States and allies. I do not want them to lack for our robot with whatever payload, including weapons systems, if they need it to do their job and keep us safe. And if we've saved one life because these people had our robot when they needed it, I think that's something to be proud of.

I'll tell you personally: until I joined Ghost Robotics, I was oblivious to the amount of stress and turmoil and pain our servicemen and women go through to protect us. Some of the special operations folks that we talk to, they can't disclose what they do, but you can feel it when they talk about their colleagues and comrades that they've lost. The amount of energy that's put into protecting us by these people that we don't even know is really amazing, and we take it for granted.

What about in the context of police rather than the military?

I don't see that happening. We've just started talking with law enforcement, but we haven't had any inquiries on weapons. It's been hazmat, CBRNE, recon of confined spaces and crime scenes or sending robots in to talk with people that are barricaded or involved in a hostage situation. I don't think you're going to see the police using weaponized robots. In other countries, it's certainly possible, but I believe that it won't happen here. We live in a country where our military is run by a very strict set of rules, and we have this political and civilian backstop on how engagements should be conducted with new technologies.

How do you feel about the push for regulation of lethal autonomous weapons?

We're all for regulation. We're all for it. This is something everybody should be for right now. What those regulations are, what you can or can't do and how AI is deployed, I think that's for politicians and the armed services to decide. The question is whether the rest of the world will abide by it, and so we have to be realistic and we have to be ready to support defending ourselves against rogue nations or terrorist organizations that feel differently. Sticking your head in the sand is not the solution.

Based on the response that you've experienced over the past several days, will you be doing anything differently going forward?

We're very committed to what we're doing, and our team here understands our mission. We're not going to be reactive. And we're going to stick by our commitment to our US and allied government customers. We're going to help them do whatever they need to do, with whatever payload they need, to do their job, and do it safely. We are very fortunate to live in a country where the use of military force is a last resort, and the use of new technologies and weapons takes years and involves considerable deliberation from the armed services with civilian oversight.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
SSRR 2021 – October 25-27, 2021 – New York, NY, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

This project investigates the interaction between robots and animals, in particular, the quadruped ANYmal and wild vervet monkeys. We will test whether robots can be tolerated but also socially accepted in a group of vervets. We will evaluate whether social bonds are created between them and whether vervets trust knowledge from robots.

[ RSL ]

At this year's ACM Symposium on User Interface Software and Technology (UIST), the Student Innovation Contest was based around Sony Toio robots. Here are some of the things that teams came up with:

[ UIST ]

Collecting samples from Mars and bringing them back to Earth will be a historic undertaking that started with the launch of NASA's Perseverance rover on July 30, 2020. Perseverance collected its first rock core samples in September 2021. The rover will leave them on Mars for a future mission to retrieve and return to Earth. NASA and the European Space Agency (ESA) are solidifying concepts for this proposed Mars Sample Return campaign. The current concept includes a lander, a fetch rover, an ascent vehicle to launch the sample container to Martian orbit, and a retrieval spacecraft with a payload for capturing and containing the samples and then sending them back to Earth to land in an unpopulated area.

[ JPL ]

FCSTAR is a minimally actuated flying climbing robot capable of crawling vertically. It is the latest in the family of the STAR robots. Designed and built at the Bio-Inspired and Medical Robotics Lab at the Ben Gurion University of the Negev by Nitzan Ben David and David Zarrouk.

[ BGU ]

Evidently the novelty of Spot has not quite worn off yet.

[ IRL ]

As much as I like Covariant, it seems weird to call a robot like this "Waldo" when the word waldo already has a specific meaning in robotics, thanks to the short story by Robert A. Heinlein.

Also, kinda looks like it failed that very first pick in the video...?

[ Covariant ]

Thanks, Alice!

Here is how I will be assembling the Digit that I'm sure Agility Robotics will be sending me any day now.

[ Agility Robotics ]

Robotis would like to remind you that ROS World is next week, and also that they make a lot of ROS-friendly robots!

[ ROS World ] via [ Robotis ]

Researchers at the Australian UTS School of Architecture have partnered with construction design firm BVN Architecture to develop a unique 3D printed air-diffusion system.

[ UTS ]

Team MARBLE, who took third at the DARPA SubT Challenge, has put together this video which combines DARPA's videos with footage taken by the team to tell the whole story with some behind the scenes stuff thrown in.

[ MARBLE ]

You probably don't need to watch all 10 minutes of the first public flight of Volocopter's cargo drone, but it's fun to see the propellers spin up for the takeoff.

[ Volocopter ]

Nothing new in this video about Boston Dynamics from CNBC, but it's always cool to see a little wander around their headquarters.

[ CNBC ]

Computing power doubles every two years, an observation known as Moore's Law. Prof Maarten Steinbuch, a high-tech systems scientist, entrepreneur and communicator, from Eindhoven University of Technology, discussed how this exponential rate of change enables accelerating developments in sensor technology, AI computing and automotive machines, to make products in modern factories that will soon be smart and self-learning.

[ ESA ]

On episode three of The Robot Brains Podcast, we have deep learning pioneer: Yann LeCun. Yann is a winner of the Turing Award (often called the Nobel Prize of Computer Science) who in 2013 was handpicked by Mark Zuckerberg to bring AI to Facebook. Yann also offers his predictions for the future of artificial general intelligence, talks about his life straddling the worlds of academia and business and explains why he likes to picture AI as a chocolate layer cake with a cherry on top.

[ Robot Brains ]

This week's CMU RI seminar is from Tom Howard at the University of Rochester, on "Enabling Grounded Language Communication for Human-Robot Teaming."

[ CMU RI ]

A pair of talks from the Maryland Robotics Center, including Maggie Wigness from ARL and Dieter Fox from UW and NVIDIA.

[ Maryland Robotics ]



As quadrupedal robots learn to do more and more dynamic tasks, they're likely to spend more and more time not on their feet. Not falling over, necessarily (although that's inevitable, of course, because they're legged robots after all), but just being in flight in one way or another. The riskiest flight phase is a fall from a substantial height, because it's almost certain to break your very expensive robot and whatever payload it might be carrying.

Falls being bad is not a problem unique to robots, and it's not surprising that quadrupeds in nature have already solved it. Or at least, it's already been solved by cats, which are able to reliably land on their feet to mitigate fall damage. To teach quadrupedal robots this trick, roboticists from the University of Notre Dame have been teaching a Mini Cheetah quadruped some mid-air self-righting skills, with the aid of boots full of nickels.

If this research looks a little bit familiar, it's because we recently covered some work from ETH Zurich that used legs to reorient their SpaceBok quadruped in microgravity. This work with Mini Cheetah has to contend with Earth gravity, however, which puts some fairly severe time constraints on the whole reorientation thing, with the penalty for failure being a smashed-up robot rather than just a weird bounce. When we asked the ETH Zurich researchers what might improve the performance of SpaceBok, they told us that "heavy shoes would definitely help," and it looks like the folks from Notre Dame had the same idea, which they were able to implement on Mini Cheetah.

Mini Cheetah's legs (like the legs of many robots) were specifically designed to be lightweight, because they have to move quickly, and you want to minimize the mass that moves back and forth with every step to keep the robot as efficient as possible. But for a robot to reorient itself in midair, it has to swing as much mass around as it can, because the only way Mini Cheetah can change its orientation while falling is by flailing its legs: when its legs move one way, its body moves the other way, and the heavier the legs are, the more torque they can exert on the body. So each of Mini Cheetah's legs has been fitted with a 3D-printed boot packed with two rolls of American nickels, adding about 500 g to each foot, which is enough to move the robot around the way it needs to.
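
The physics here is just conservation of angular momentum: while the robot is in free fall, its total angular momentum about its center of mass is fixed, so spinning the legs one way spins the body the other way, scaled by the ratio of their moments of inertia. Here's a rough back-of-the-envelope sketch of how much the extra foot mass matters; every number in it is an illustrative guess, not a value from the paper.

```python
# Minimal sketch of why heavier feet help with mid-air reorientation.
# All numbers are illustrative guesses, not values from the Notre Dame paper.

import numpy as np

def body_rotation(leg_inertia, body_inertia, leg_swing_rad):
    """With zero total angular momentum in free fall, the body counter-rotates
    by (I_legs / I_body) times the leg swing angle (planar approximation)."""
    return (leg_inertia / body_inertia) * leg_swing_rad

body_inertia = 0.15          # kg*m^2, rough pitch inertia of a Mini-Cheetah-sized body (assumed)
leg_length = 0.2             # m, effective distance from hip to foot mass (assumed)
foot_mass_light = 0.1        # kg, unweighted foot (assumed)
foot_mass_heavy = 0.6        # kg, foot plus ~0.5 kg of nickels (assumed)
n_legs = 4
leg_swing = np.deg2rad(180)  # legs swing through half a revolution

for label, m in [("light feet", foot_mass_light), ("nickel boots", foot_mass_heavy)]:
    leg_inertia = n_legs * m * leg_length**2  # point-mass approximation of the swinging feet
    rot = np.rad2deg(body_rotation(leg_inertia, body_inertia, leg_swing))
    print(f"{label}: body rotates ~{rot:.0f} deg for a {np.rad2deg(leg_swing):.0f} deg leg swing")
```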

As with everything in robotics, getting the hardware to do what you want it to do is only half the battle. Or sometimes much, much less than half the battle. The challenge with Mini Cheetah flipping itself over is that it has a very, very small amount of time to do it properly. It has to detect that it's falling, figure out what orientation it's in, make a plan for getting itself feet-down, and then execute that plan successfully. The robot doesn't have time to put a whole heck of a lot of thought into things as it starts to plummet, so the researchers came up with what they call a "reflex" approach. Vince Kurtz, first author on the paper describing this technique, explains how it works:

While trajectory optimization algorithms keep getting better and better, they still aren't quite fast enough to find a solution from scratch in the fraction of a second between when the robot detects a fall and when it needs to start a recovery motion. We got around this by dropping the robot a bunch of times in simulation, where we can take as much time as we need to find a solution, and training a neural network to imitate the trajectory optimizer. The trained neural network maps initial orientations to trajectories that land the robot on its feet. We call this the "reflex" approach, since the neural network has basically learned an automatic response that can be executed when the robot detects that it's falling.
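
To make the "reflex" idea a bit more concrete, here's a minimal sketch of what that kind of imitation-learning setup could look like in PyTorch. The dataset, network size, and trajectory encoding are all invented for illustration and are not taken from the paper.

```python
# Hedged sketch of the "reflex" idea: train a small network offline to imitate a
# (slow) trajectory optimizer, so a landing trajectory can be produced in a single
# forward pass when a fall is detected. Sizes and encodings are invented here.

import math
import torch
import torch.nn as nn

N_SAMPLES = 4096  # simulated drops
N_KNOTS = 20      # knot points per joint trajectory
N_JOINTS = 8      # planar model: 2 actuated joints per leg (assumed)

# Pretend these came from running the trajectory optimizer in simulation:
# initial body orientation (roll, pitch) -> optimized joint-angle trajectory.
init_orientation = torch.rand(N_SAMPLES, 2) * 2 * math.pi - math.pi
opt_trajectories = torch.randn(N_SAMPLES, N_KNOTS * N_JOINTS)  # placeholder labels

reflex_net = nn.Sequential(
    nn.Linear(2, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_KNOTS * N_JOINTS),
)

optimizer = torch.optim.Adam(reflex_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):  # imitation learning: regress the optimizer's output
    pred = reflex_net(init_orientation)
    loss = loss_fn(pred, opt_trajectories)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At runtime: one cheap forward pass gives the full recovery trajectory.
falling_orientation = torch.tensor([[0.3, -1.2]])  # detected at the moment of the fall
recovery_plan = reflex_net(falling_orientation).reshape(N_KNOTS, N_JOINTS)
```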

This technique works quite well, but there are a few constraints, most of which wouldn't seem so bad if we weren't comparing quadrupedal robots to quadrupedal animals. Cats are just, like, super competent at what they do, says Kurtz, and being able to mimic their ability to rapidly twist themselves into a favorable landing configuration from any starting orientation is just going to be really hard for a robot to pull off:

The more I do robotics research the more I appreciate how amazing nature is, and this project is a great example of that. Cats can do a full 180° rotation when dropped from about shoulder height. Our robot ran up against torque limits when rotating 90° from about 10ft off the ground. Using the full 3D motion would be a big improvement (rotating sideways should be easier because the robot's moment of inertia is smaller in that direction), though I'd be surprised if that alone got us to cat-level performance.

The biggest challenge that I see in going from 2D to 3D is self-collisions. Keeping the robot from hitting itself seems like it should be simple, but self-collisions turn out to impose rather nasty non-convex constraints that make it numerically difficult (though not impossible) for trajectory optimization algorithms to find high-quality solutions.
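
For a sense of why self-collisions are awkward, here's a toy, entirely made-up planar example: a minimum-clearance constraint between two "feet" carves a hole out of configuration space, so the feasible set is non-convex and a local solver can only promise a locally optimal answer.

```python
# Toy illustration (not the Mini Cheetah model) of a non-convex self-collision
# constraint: keep two single-link "legs" at least D_MIN apart while trying to
# reach a target pose in which their feet would overlap.

import numpy as np
from scipy.optimize import minimize

L = 0.2       # link length in meters (assumed)
D_MIN = 0.05  # required clearance between the two foot points in meters (assumed)

def foot_positions(q):
    """Planar forward kinematics for two single-link legs hanging from (-0.1, 0) and (+0.1, 0)."""
    q1, q2 = q
    p1 = np.array([-0.1 + L * np.sin(q1), -L * np.cos(q1)])
    p2 = np.array([ 0.1 + L * np.sin(q2), -L * np.cos(q2)])
    return p1, p2

def clearance(q):
    p1, p2 = foot_positions(q)
    return np.linalg.norm(p1 - p2)  # non-convex function of the joint angles

target = np.array([0.52, -0.52])  # a pose where the two feet would nearly coincide

def objective(q):
    return np.sum((q - target) ** 2)  # stay as close to the target pose as possible

result = minimize(
    objective,
    x0=np.array([0.2, -0.2]),  # collision-free initial guess
    constraints=[{"type": "ineq", "fun": lambda q: clearance(q) - D_MIN}],
    method="SLSQP",
)
print("joint angles:", result.x, "clearance:", clearance(result.x))
```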

Lastly, we asked Kurtz to talk a bit about whether it's worth exploring flexible actuated spines for quadrupedal robots. We know that such spines offer many advantages (a distant relative of Mini Cheetah had one, for example), but that they're also quite complex. So is it worth it?

This is an interesting question. Certainly in the case of the falling cat problem a flexible spine would help, both in terms of having a naturally flexible mass distribution and in terms of controller design, since we might be able to directly imitate the "bend-and-twist" motion of cats. Similarly, a flexible spine might help for tasks with large flight phases, like the jumping in space problems discussed in the ETH paper.

With that being said, mid-air reorientation is not the primary task of most quadruped robots, and it's not obvious to me that a flexible spine would help much for walking, running, or scrambling over uneven terrain. Also, existing hardware platforms with rigid backs like the Mini Cheetah are quite capable and I think we still haven't unlocked the full potential of these robots. Control algorithms are still the primary limiting factor for today's legged robots, and adding a flexible spine would probably make for even more difficult control problems.

Mini Cheetah, the Falling Cat: A Case Study in Machine Learning and Trajectory Optimization for Robot Acrobatics, by Vince Kurtz, He Li, Patrick M. Wensing, and Hai Lin from the University of Notre Dame, is available on arXiv.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

I love watching Dusty Robotics' field printer at work. I don't know whether it's intentional or not, but it's got so much personality somehow.

[ Dusty Robotics ]

A busy commuter is ready to walk out the door, only to realize they've misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys. Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.

While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn't quite fast enough yet for these uses.
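
As a loose illustration of the general idea (and emphatically not MIT's actual RFusion algorithm), an RF range estimate can be used to rank candidate object locations proposed by a vision system; a hypothetical sketch:

```python
# Loose sketch of RF + vision fusion (not MIT's RFusion algorithm): the RF antenna
# gives a rough range to the tagged item, the camera proposes candidate locations,
# and we keep the candidate most consistent with that range.

import numpy as np

def fuse_rf_and_vision(candidates_xyz, antenna_xyz, rf_range_m, rf_sigma_m=0.10):
    """Score camera candidates by how well their distance to the antenna matches
    the RF range estimate, and return the best one. All inputs are in meters."""
    candidates_xyz = np.asarray(candidates_xyz, dtype=float)
    dists = np.linalg.norm(candidates_xyz - np.asarray(antenna_xyz), axis=1)
    scores = np.exp(-0.5 * ((dists - rf_range_m) / rf_sigma_m) ** 2)  # Gaussian likelihood
    return candidates_xyz[np.argmax(scores)], scores

# Example: three piles of clutter seen by the camera, RF says the keys are ~0.8 m away.
best, scores = fuse_rf_and_vision(
    candidates_xyz=[[0.3, 0.1, 0.0], [0.7, 0.4, 0.1], [1.5, 0.2, 0.0]],
    antenna_xyz=[0.0, 0.0, 0.3],
    rf_range_m=0.8,
)
print("search here first:", best)
```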

[ MIT ]

CSIRO Data61 had, I'm pretty sure, the most massive robots in the entire SubT competition. And this is how you solve doors with a massive robot.

[ CSIRO ]

You know how robots are supposed to be doing things that are too dangerous for humans? I think sailing through a hurricane qualifies.

This second video, also captured by this poor Saildrone, is, if anything, even worse:

[ Saildrone ] via [ NOAA ]

Soft Robotics can handle my taquitos anytime.

[ Soft Robotics ]

This is brilliant, if likely unaffordable for most people.

[ Eric Paulos ]

I do not understand this robot at all, nor can I tell whether it's friendly or potentially dangerous or both.

[ Keunwook Kim ]

This sort of thing really shouldn't have to exist for social home robots, but I'm glad it does, I guess?

It costs $100, though.

[ Digital Dream Labs ]

If you watch this video closely, you'll see that whenever a simulated ANYmal falls over, it vanishes from existence. This is a new technique for teaching robots to walk by threatening them with extinction if they fail.

But seriously how do I get this as a screensaver?

[ RSL ]

Zimbabwe Flying Labs' Tawanda Chihambakwe shares how Zimbabwe Flying Labs got their start, using drones for STEM programs, and how drones impact conservation and agriculture.

[ Zimbabwe Flying Labs ]

DARPA thoughtfully provides a video tour of the location of every artifact on the SubT Final prize course. Some of them are hidden extraordinarily well.

Also posted by DARPA this week are full prize round run videos for every team; here are the top three: MARBLE, CSIRO Data61, and CERBERUS.

[ DARPA SubT ]

An ICRA 2021 plenary talk from Fumihito Arai at the University of Tokyo, on "Robotics and Automation in Micro & Nano-Scales."

[ ICRA 2021 ]

This week's UPenn GRASP Lab Seminar comes from Rahul Mangharam, on "What can we learn from Autonomous Racing?"

[ UPenn ]
