Chatbot Episode 1: Making Boston Dynamics’ Robots Dance

Evan Ackerman: I’m Evan Ackerman, and welcome to ChatBot, a robotics podcast from IEEE Spectrum. On this episode of ChatBot, we’ll be talking with Monica Thomas and Amy LaViers about robots and dance. Monica Thomas is a dancer and choreographer. Monica has worked with Boston Dynamics to choreograph some of their robot videos in which Atlas, Spot, and even Handle dance to songs like “Do You Love Me?” The “Do You Love Me?” video has been viewed 37 million times. And if you haven’t seen it yet, it’s pretty amazing to see how these robots can move. Amy LaViers is the director of the Robotics, Automation, and Dance Lab, or RAD Lab, which she founded in 2015 as a professor in Mechanical Science and Engineering at the University of Illinois, Urbana-Champaign. The RAD Lab is a collective for art making, commercialization, education, outreach, and research at the intersection of dance and robotics. And Amy’s work explores the creative relationships between machines and humans, as expressed through movement. So Monica, can you just tell me-- I think people in the robotics field may not know who you are or why you’re on the podcast at this point, so can you just describe how you initially got involved with Boston Dynamics?

Monica Thomas: Yeah. So I got involved really casually. I know people who work at Boston Dynamics and Marc Raibert, their founder and head. They’d been working on Spot, and they added the arm to Spot. And Marc was kind of like, “I kind of think this could dance.” And they were like, “Do you think this could dance?” And I was like, “It could definitely dance. That definitely could do a lot of dancing.” And so we just started trying to figure out, can it move in a way that feels like dance to people watching it? And the first thing we made was Uptown Spot. And it was really just figuring out moves that the robot does kind of already naturally. And that’s when they started developing, I think, Choreographer, their tool. But in terms of my thinking, it was just I was watching what the robot did as its normal patterns, like going up, going down, walking in place, different steps, different gaits, what is interesting to me, what looks beautiful to me, what looks funny to me, and then imagining what else we could be doing, considering the angles of the joints. And then it just grew from there. And so once that one was out, Marc was like, “What about the rest of the robots? Could they dance? Maybe we could do a dance with all of the robots.” And I was like, “We could definitely do a dance with all of the robots. Any shape can dance.” So that’s when we started working on what turned into Do You Love Me? I didn’t really realize what a big deal it was until it came out and it went viral. And I was like, “Oh—” are we allowed to swear, or—?

Ackerman: Oh, yeah. Yeah.

Thomas: Yeah. So I was like, “[bleep bleep, bleeeep] is this?” I didn’t know how to deal with it. I didn’t know how to think about it. As a performer, the largest audience I performed for in a day was like 700 people, which is a big audience as a live performer. So when you’re hitting millions, it’s just like it doesn’t even make sense anymore, and yeah. So that was pretty mind-boggling. And then also because of kind of how it was introduced and because there is a whole world of choreo-robotics, which I was not really aware of because I was just doing my thing. Then I realized there’s all of this work that’s been happening that I couldn’t reference, didn’t know about, and conversations that were really important in the field that I also was unaware of and then suddenly was a part of. So I think doing work that has more viewership is really—it was a trip and a half—is a trip and a half. I’m still learning about it. Does that answer your question?

Ackerman: Yeah. Definitely.

Thomas: It’s a long-winded answer, but.

Ackerman: And Amy, so you have been working in these two disciplines for a long time, in the disciplines of robotics and in dance. So what made you decide to combine these two things, and why is that important?

Amy LaViers: Yeah. Well, both things, I guess in some way, have always been present in my life. I’ve danced since I was three, probably, and my dad and all of his brothers and my grandfathers were engineers. So in some sense, they were always there. And it was really-- I could tell you the date. I sometimes forget what it was, but it was a Thursday, and I was taking classes in dance and in control of mechanical systems, and I was realizing this overlap. I mean, I don’t think I’m combining them. I feel like they already kind of have this intersection that just exists. And I realized-- or I stumbled into that intersection myself, and I found lots of people working in it. And I was-- oh, my interests in both these fields kind of reinforce one another in a way that’s really exciting and interesting. I also happened to be an almost graduating-- I was in the last class of my junior year of college, so I was thinking, “What am I going to do with myself?” Right? So it was very happenstance in that way. And again, I mean, I just felt like— it was like I walked into a room where all of a sudden, a lot of things made sense to me, and a lot of interests of mine were both present.

Ackerman: And can you summarize, I guess, the importance here? Because I feel like— I’m sure this is something you’ve run into, is that it’s easy for engineers or roboticists just to be— I mean, honestly, a little bit dismissive of this idea that it’s important for robots to have this expressivity. So why is it important?

LaViers: That is a great question that if I could summarize what my life is like, it’s me on a computer going like this, trying to figure out the words to answer that succinctly. But one way I might ask it, earlier when we were talking, you mentioned this idea of functional behavior versus expressive behavior, which comes up a lot when we start thinking in this space. And I think one thing that happens-- and my training and background in Laban Movement Analysis really emphasizes this duality between function and expression as opposed to the either/or. It’s kind of like the mind-body split, the idea that these things are one integrated unit. Function and expression are an integrated unit. And something that is functional is really expressive. Something that is expressive is really functional.

Ackerman: It definitely answers the question. And it looks like Monica is resonating with you a little bit, so I’m just going to get out of the way here. Amy, do you want to just start this conversation with Monica?

LaViers: Sure. Sure. Monica has already answered, literally, my first question, so I’m already having to shuffle a little bit. But I’m going to rephrase. My first question was, can robots dance? And I love how emphatically and beautifully you answered that with, “Any shape can dance.” I think that’s so beautiful. That was a great answer, and I think it brings up— you can debate, is this dance, or is this not? But there’s also a way to look at any movement through the lens of dance, and that includes factory robots that nobody ever sees.

Thomas: It’s exciting. I mean, it’s a really nice way to walk through the world, so I actually recommend it for everyone, just taking the time and seeing the movement around you as dance. I don’t know if it’s allowing it to be intentional or just to be special, meaningful, something.

LaViers: That’s a really big challenge, particularly for an autonomous system. And for any moving system, I think that’s hard, artificial or not. I mean, it’s hard for me. My family’s coming into town this weekend. I’m like, “How do I act so that they know I love them?” Right? That’s a dramatized version of real life, right: how do I be welcoming to my guests? And that’ll be, how do I move?

Thomas: What you’re saying is a reminder of, one of the things that I really enjoy watching robots move is that I’m allowed to project as much as I want to on them without taking away something from them. When you project too much on people, you lose the person, and that’s not really fair. But when you’re projecting on objects, things that are objects but that we personify— or not even personify, that we anthropomorphize or whatever, it is just a projection of us. But it’s acceptable. So nice for it to be acceptable, a place where you get to do that.

LaViers: Well, okay. Then can I ask my fourth question even though it’s not my turn? Because that’s just too perfect to what it is, which is just, what did you learn about yourself working with these robots?

Thomas: Well, I learned how much I love visually watching movement. I’ve always watched, but I don’t think it was as clear to me how much I like movement. The work that I made was really about context. It was about what’s happening in society, what’s happening in me as a person. But I never got into that school of dance that really spends time just really paying attention to movement or letting movement develop or explore, exploring movement. That wasn’t what I was doing. And with robots, I was like, “Oh, but yeah, I get it better now. I see it more now.” So much in life right now, for me, is not contained, and it doesn’t have answers. And translating movement across species from my body to a robot, that does have answers. It has multiple answers. It’s not like there’s a yes and a no, but you can answer a question. And it’s so nice to answer questions sometimes. I sat with this thing, and here’s something I feel like is an acceptable solution. Wow. That’s a rarity in life. So I love that about working with robots. I mean, also, they’re cool, I think. And it is also— they’re just cool. I mean, that’s true too. It’s also interesting. I guess the last thing that I really loved—and I didn’t have much opportunity to do this or as much as you’d expect because of COVID—is being in space with robots. It’s really interesting, just like being in space with anything that is different than your norm is notable. Being in space with an animal that you’re not used to being with is notable. And there’s just something really cool about being with something very different. And for me, robots are very different and not acclimatized.

Ackerman: Okay. Monica, you want to ask a question or two?

Thomas: Yeah. I do. The order of my questions is ruined also. I was thinking about the RAD Lab, and I was wondering if there are guiding principles that you feel are really important in that interdisciplinary work that you’re doing, and also any lessons maybe from the other side that are worth sharing.

LaViers: The usual way I describe it and describe my work more broadly is, I think there are a lot of roboticists that hire dancers, and they make robots and those dancers help them. And there are a lot of dancers that hire engineers, and those engineers build something for them that they use inside of their work. And what I’m interested in, in the little litmus test or challenge I paint for myself and my collaborators is we want to be right in between those two things, right, where we are making something. First of all, we’re treating each other as peers, as technical peers, as artistic peers, as— if the robot moves on stage, I mean, that’s choreography. If the choreographer asks for the robot to move in a certain way, that’s robotics. That’s the inflection point we want to be at. And so that means, for example, in terms of crediting the work, we try to credit the creative contributions. And not just like, “Oh, well, you did 10 percent of the creative contributions.” We really try to treat each other as co-artistic collaborators and co-technical developers. And so artists are on our papers, and engineers are in our programs, to put it in that way. And likewise, that changes the questions we want to ask. We want to make something that pushes robotics just an inch further, a millimeter further. And we want to do something that pushes dance just an inch further, a millimeter further. We would love it if people would ask us, “Is this dance?” We get, “Is this robotics?” quite a lot. So that makes me feel like we must be doing something interesting in robotics.

And every now and then, I think we do something interesting for dance too, and certainly, many of my collaborators do. And that inflection point, that’s just where I think is interesting. And I think that’s where— that’s the room I stumbled into, is where we’re asking those questions as opposed to just developing a robot and hiring someone to help us do that. I mean, it can be hard in that environment that people feel like their expertise is being given to the other side. And then, where am I an expert? And we’ve heard editors at publication venues say, “Well, this dancer can’t be a co-author,” and we’ve had venues where we’re working on the program and people say, “Well, no, this engineer isn’t a performer,” but I’m like, “But he’s cueing the robot, and if he messes up, then we all mess up.” I mean, that’s vulnerability too. So we have those conversations that are really touchy and a little sensitive and a little— and so how do you create that space where people do feel safe and comfortable and valued and attributed for their work and that they can make a track record and do this again in another project, in another context and— so, I don’t know, if I’ve learned anything, I mean, I’ve learned that you just have to really talk about attribution all the time. I bring it up every time, and then I bring it up before we even think about writing a paper. And then I bring it up when we make the draft. And the first thing I put in the draft is everybody’s name in the order it’s going to appear, with the affiliations and with the—subscripts on that don’t get added at the last minute. And when the editor of a very famous robotics venue says, “This person can’t be a co-author,” that person doesn’t get taken off as a co-author; that person is a co-author, and we figure out another way to make it work. And so I think that’s learning, or that’s just a struggle anyway.

Ackerman: Monica, I’m curious if when you saw the Boston Dynamics videos go viral, did you feel like there was much more of a focus on the robots and the mechanical capabilities than there was on the choreography and the dance? And if so, how did that make you feel?

Thomas: Yeah. So yes. Right. When dances I’ve made have been reviewed, which I’ve always really appreciated, it has been about the dance. It’s been about the choreography. And actually, kind of going way back to what we were talking about a couple things ago, a lot of the reviews that you get around this are about people, their reactions, right? Because, again, we can project so much onto robots. So I learned a lot about people, how people think about robots. There’s a lot of really overt themes, and then there’s individual nuance. But yeah, it wasn’t really about the dance, and it was in the middle of the pandemic too. So there’s really high isolation. I had no idea how people who cared about dance thought about it for a long time. And then every once in a while, I get one person here or one person there say something. So it’s a totally weird experience. Yes.

The way that I took information about the dance was kind of paying attention to the affective experience, the emotional experience that people had watching this. The dance was— nothing in that dance was— we used the structures of the traditions of dance in it for an intentional reason. I chose that because I wasn’t trying to alarm people or show people ways that robots move that totally hit some old part of our brain that makes us absolutely panicked. That wasn’t my interest or the goal of that work. And honestly, at some point, it’d be really interesting to explore what the robots can just do versus what I, as a human, feel comfortable seeing them do. But the emotional response that people got told me a story about what the dance was doing in a backward-- also, what the music’s doing because—let’s be real—that music does— right? We stacked the deck.

LaViers: Yeah. And now that brings— I feel like that serves up two of my questions, and I might let you pick which one maybe we go to. I mean, one of my questions, I wrote down some of my favorite moments from the choreography that I thought we could discuss. Another question—and maybe we can do both of these in series—is a little bit about— I’ll blush even just saying it, and I’m so glad that the people can’t see the blushing. But also, there’s been so much nodding, and I’m noticing that that won’t be in the audio recording. We’re nodding along to each other so much. But the other side—and you can just nod in a way that gives me your—the other question that comes up for that is, yeah, what is the monetary piece of this, and where are the power dynamics inside this? And how do you feel about how that sits now as that video continues to just make its rounds on the internet and establish value for Boston Dynamics?

Thomas: I would love to start with the first question. And the second one is super important, and maybe another day for that one.

Ackerman: Okay. That’s fair. That’s fair.

LaViers: Yep. I like that. I like that. So the first question, so my favorite moments of the piece that you choreographed to “Do You Love Me?” for the Boston Dynamics robots, the swinging arms at the beginning, where you don’t fully know where this is going. It looks so casual and so, dare I say it, natural, although it’s completely artificial, right? And the proximal rotation of the legs, I feel like it’s a genius way of getting around no spine. But you really make use of things that look like hip joints or shoulder joints as a way of, to me, accessing a good wriggle or a good juicy moment, and then the Spot space hold, I call it, where the head of the Spot is holding in place and then the robot wiggles around that, dances around that. And then the moment when you see all four complete—these distinct bodies, and it looks like they’re dancing together. And we touched on that earlier—any shape can dance—but making them all dance together I thought was really brilliant and effective in the work. So if any of those moments is super interesting, or you have a funny story about one, I thought we could talk about it further.

Thomas: I have a funny story about the hip joints. So the initial— well, not the initial, but when they do the mashed potato, that was the first dance move that we started working on, on Atlas. And for folks who don’t know, the mashed potato is kind of the feet are going in and out; the knees are going in and out. So we ran into a couple of problems, which—and the twist. I guess it’s a combo. Both of them like you to roll your feet on the ground, like a rub, and that friction was not good for the robots. So when we first started really moving into the twist, which has this torso twisting— the legs are twisting. The foot should be twisting on the floor. The foot is not twisting on the floor, and the legs were so turned out that the shape of the pelvic region looked like an over-full diaper. So, I mean, it was wiggling, but it made the robot look young. It made the robot look like it was in a diaper that needed to be changed. It did not look like a twist that anybody would want to do near anybody else. And it was really amazing how— I mean, it was just hilarious to see it. And the engineers come in. They’re really seeing the movement and trying to figure out what they need for the movement. And I was like, “Well, it looks like it has a very full diaper.” And they were like, “Oh.” They knew it didn’t quite look right, but it was like—because I think they really don’t project as much as I do. I’m very projective; that’s one of the ways that I’ve watched work, or you’re pulling from the work that way, but that’s not what they were looking at. And so yeah, then you change the angles of the legs, how turned in it is and whatever, and it resolved to a degree, I think, fairly successfully. It doesn’t really look like a diaper anymore. But that wasn’t really— and also to get that move right took us over a month.

Ackerman: Wow.

LaViers: Wow.

Thomas: We got much faster after that because it was the first, and we really learned. But it took a month of programming, me coming in, naming specific ways of reshifting it before we got a twist that felt natural if amended because it’s not the same way that--

LaViers: Yeah. Well, and it’s fascinating to think about how to get it to look the same. You had to change the way it did the movement, is what I heard you describing there, and I think that’s so fascinating, right? And just how distinct the morphologies between our body and any of these bodies, even the very facile human-ish looking Atlas, that there’s still a lot of really nuanced and fine-grained and human work-intensive labor to go into getting that to look the same as what we all think of as the twist or the mashed potato.

Thomas: Right. Right. And it does need to be something that we can project those dances onto, or it doesn’t work, in terms of this dance. It could work in another one. Yeah.

LaViers: Right. And you brought that up earlier, too, of trying to work inside of some established forms of dance as opposed to making us all terrified by the strange movement that can happen, which I think is interesting. And I hope one day you get to do that dance too.

Thomas: Yeah. No, I totally want to do that dance too.

Ackerman: Monica, do you have one last question you want to ask?

Thomas: I do. And this is— yeah. I want to ask you, kind of what does embodied or body-based intelligence offer in robotic engineering? So I feel like, you, more than anyone, can speak to that because I don’t do that side.

LaViers: Well, I mean, I think it can bring a couple of things. One, it can bring— I mean, the first moment in my career or life that that calls up for me is, I was watching one of my lab mates, when I was a doctoral student, give a talk about a quadruped robot that he was working on, and he was describing the crawling strategy, like the gait. And someone said— and I think it was roughly like, “Move the center of gravity inside the polygon of support, and then pick up— the polygon of support formed by three of the legs. And then pick up the fourth leg and move it. Establish a new polygon of support. Move the center of mass into that polygon of support.” And it’s described with these figures. Maybe there’s a center of gravity. It’s like a circle that’s like a checkerboard, and there’s a triangle, and there’s these legs. And someone stands up and is like, “That makes no sense like that. Why would you do that?” And I’m like, “Oh, oh, I know, oh, because that’s one of the ways you can crawl.” I actually didn’t get down on the floor and do it because I was not so outlandish at that point.
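For readers who want that strategy in more concrete terms, here is a minimal sketch in C of the statically stable crawl described above; the robot interface, the point-in-triangle test, and the leg ordering are hypothetical, not any particular robot’s API.

```c
/* A minimal sketch of the statically stable crawl described above:
 * keep the center of mass inside the triangle formed by the three
 * stance feet, swing the free leg, then repeat with the next leg.
 * All helper functions and the leg ordering are hypothetical. */
#include <stdbool.h>

typedef struct { double x, y; } Point;

/* Assumed robot interface (not any real robot's API). */
Point foot_position(int leg);
Point center_of_mass(void);
bool  point_in_triangle(Point p, Point a, Point b, Point c);
void  shift_body_toward(Point target);   /* small step of the torso */
void  swing_leg_forward(int leg);        /* lift, advance, replant   */

static void crawl_step(int swing_leg) {
    /* The support polygon is the triangle of the other three feet. */
    Point tri[3];
    int n = 0;
    for (int leg = 0; leg < 4; ++leg)
        if (leg != swing_leg)
            tri[n++] = foot_position(leg);

    /* 1. Shift the center of mass until it lies inside that triangle. */
    Point centroid = { (tri[0].x + tri[1].x + tri[2].x) / 3.0,
                       (tri[0].y + tri[1].y + tri[2].y) / 3.0 };
    while (!point_in_triangle(center_of_mass(), tri[0], tri[1], tri[2]))
        shift_body_toward(centroid);

    /* 2. Statically stable now, so the free leg can be picked up and moved. */
    swing_leg_forward(swing_leg);
}

void crawl_cycle(void) {
    static const int order[4] = { 0, 2, 1, 3 };  /* a common creeping order */
    for (int i = 0; i < 4; ++i)
        crawl_step(order[i]);
}
```

The key point of the strategy is that the body shifts before the leg lifts, so the robot is never supported by fewer than three feet while one is in the air.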

But today, in the RAD lab, that would be, “Everyone on all fours, try this strategy out.” Does it feel like a good idea? Are there other ideas that we would use to do this pattern that might be worth exploring here as well? And so truly rolling around on the floor and moving your body and pretending to be a quadruped, which— in my dance classes, it’s a very common thing to practice crawling because we all forget how to crawl. We want to crawl with the cross-lateral pattern and the homo-lateral pattern, and we want to keep our butts down-- or keep the butts up, but we want to have that optionality so that we look like we’re facile, natural crawlers. We train that, right? And so for a quadruped robot talk and discussion, I think there’s a very literal way that an embodied exploration of the idea is a completely legitimate way to do research.

Ackerman: Yeah. I mean, Monica, this is what you were saying, too, as you were working with these engineers. Sometimes it sounded like they could tell that something wasn’t quite right, but they didn’t know how to describe it, and they didn’t know how to fix it because they didn’t have that language and experience that both of you have.

Thomas: Yeah. Yeah, exactly that.

Ackerman: Okay. Well, I just want to ask you each one more really quick question before we end here, which is that, what is your favorite fictional robot and why? I hope this isn’t too difficult, especially since you both work with real robots, but. Amy, you want to go first?

LaViers: I mean, I’m going to feel like a party pooper. I don’t like any robots, real or fictional. The fictional ones annoy me because-- the fictional ones annoy me because of the disambiguation issue and WALL-E and EVE are so cute. And I do love cute things, but are those machines, or are those characters? And are we losing sight of that? I mean, my favorite robot to watch move, this one-- I mean, I love the Keepon dancing to Spoon. That is something that if you’re having an off day, you google Keepon dancing to Spoon— Keepon is one word, K-E-E-P-O-N, dancing to Spoon, and you just bop. It’s just a bop. I love it. It’s so simple and so pure and so right.

Ackerman: It’s one of my favorite robots of all time, Monica. I don’t know if you’ve seen this, but it’s two little yellow balls like this, and it just goes up and down and rocks back and forth. But it does it to music. It just does it so well. It’s amazing.

Thomas: I will definitely be watching that [crosstalk].

Ackerman: Yeah. And I should have expanded the question, and now I will expand it because Monica hasn’t answered yet. Favorite robot, real or fictional?

Thomas: So I don’t know if it’s my favorite. This one breaks my heart, and I’m currently having an empathy overdrive issue as a general problem. But there’s a robot installation - and I should know its name, but I don’t— where the robot reaches out, and it grabs the oil that they’ve created it to leak and pulls it towards its body. And it’s been doing this for several years now, but it’s really slowing down now. And I don’t think it even needs the oil. I don’t think it’s a robot that uses oil. It just thinks that it needs to keep it close. And it used to happy dance, and the oil has gotten so dark and the red rust color of, oh, this is so morbid of blood, but it just breaks my heart. So I think I love that robot and also want to save it in the really unhealthy way that we sometimes identify with things that we shouldn’t be thinking about that much.

Ackerman: And you both gave amazing answers to that question.

LaViers: And the piece is Sun Yuan and Peng Yu’s Can’t Help Myself.

Ackerman: That’s right. Yeah.

LaViers: And it is so beautiful. I couldn’t remember the artist’s name either, but—you’re right—it’s so beautiful.

Thomas: It’s beautiful. The movement is beautiful. It’s beautifully considered as an art piece, and the robot is gorgeous and heartbreaking.

Ackerman: Yeah. Those answers were so unexpected, and I love that. So thank you both, and thank you for being on this podcast. This was an amazing conversation. We didn’t have nearly enough time, so we’re going to have to come back to so much.

LaViers: Thank you for having me.

Thomas: Thank you so much for inviting me. [music]

Ackerman: We’ve been talking with Monica Thomas and Amy LaViers about robots and dance. And thanks again to our guests for joining us for ChatBot and IEEE Spectrum. I’m Evan Ackerman.


Microfliers, or miniature wireless robots deployed in numbers, are sometimes used today for large-scale surveillance and monitoring purposes, such as in environmental or biological studies. Because of the fliers’ ability to disperse in air, they can spread out to cover large areas after being dropped from a single location, including in places where access is otherwise difficult. Plus, they are smaller, lighter, and cheaper to deploy than multiple drones.

One of the challenges in creating more efficient microfliers has been in reducing power consumption. One way to do so, as researchers from the University of Washington (UW) and Université Grenoble Alpes have demonstrated, is to get rid of the battery. With inspiration from the Japanese art of paper folding, origami, they designed programmable microfliers that can disperse in the wind and change shape using electronic actuation. This is achieved by a solar-powered actuator that can produce up to 200 millinewtons of force in 25 milliseconds.

“Think of these little fliers as a sensor platform to measure environmental conditions, like temperature, light, and other things.”
—Vikram Iyer, University of Washington

“The cool thing about these origami designs is, we’ve created a way for them to change shape in midair, completely battery free,” says Vikram Iyer, computer scientist and engineer at UW, one of the authors. “It’s a pretty small change in shape, but it creates a very dramatic change in falling behavior…that allows us to get some control over how these things are flying.”

Tumbling and stable states: A) the origami microflier in its tumbling state and B) in its postlanding configuration. As it descends, the flier tumbles, with a typical tumbling pattern pictured in C. D) The origami microflier in its stable descent state. E) The fliers’ range of landing locations reveals their dispersal pattern after release from the parent drone. Vicente Arroyos, Kyle Johnson, and Vikram Iyer/University of Washington

This research builds on the researchers’ earlier work published in 2022, demonstrating sensors that can disperse in air like dandelion seeds. For the current study, “the goal was to deploy hundreds of these sensors and control where they land, to achieve precise deployments,” says coauthor Shyamnath Gollakota, who leads the Mobile Intelligence Lab at UW. The microfliers, each weighing less than 500 milligrams, can travel almost 100 meters in a light breeze, and wirelessly transmit data about air pressure and temperature via Bluetooth up to a distance of 60 meters. The group’s findings were published in Science Robotics earlier this month.
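To give a sense of how little data each flier needs to broadcast, here is a sketch of one compact sensor payload such a device might send over Bluetooth; the struct layout, field scaling, and names are illustrative assumptions rather than the format the UW team actually uses.

```c
/* A sketch of one compact payload a flier might broadcast, carrying a
 * single pressure and temperature reading plus an ID. The field layout
 * and scaling are illustrative assumptions, not the format used in the
 * paper. */
#include <stdint.h>
#include <string.h>

typedef struct {
    uint16_t flier_id;       /* which microflier sent the packet        */
    uint32_t pressure_pa;    /* air pressure in pascals                 */
    int16_t  temp_centi_c;   /* temperature in hundredths of a degree C */
} __attribute__((packed)) FlierReading;

/* Pack one reading into a buffer destined for a Bluetooth advertisement. */
size_t pack_reading(uint8_t *buf, uint16_t id,
                    uint32_t pressure_pa, float temp_c) {
    FlierReading r = {
        .flier_id     = id,
        .pressure_pa  = pressure_pa,
        .temp_centi_c = (int16_t)(temp_c * 100.0f),
    };
    memcpy(buf, &r, sizeof r);   /* 8 bytes, well within an advertisement payload */
    return sizeof r;
}
```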

Discovering the difference in the falling behavior of the two origami states was serendipity, Gollakota says: “When it is flat, it’s almost like a leaf, tumbling [in] the wind. A very slight change from flat to a little bit of a curvature [makes] it fall like a parachute in a very controlled motion.” In their tumbling state, in lateral wind gusts, the microfliers achieve up to three times the dispersal distance of their stable state, he adds.

This close-up of the microflier reveals the electronics and circuitry on its top side. Vicente Arroyos, Kyle Johnson, and Vikram Iyer/University of Washington

There have been other origami-based systems in which motors, electrostatic actuators, shape-memory alloys, and electrothermal polymers, for example, have been used, but these did not address the challenges facing the researchers, Gollakota says. One was to find the sweet spot: an actuation mechanism strong enough not to change shape unless triggered, yet lightweight enough to keep power consumption low. Next, it had to produce a rapid transition response while falling to the ground. Finally, it needed a lightweight onboard energy storage solution to trigger the transition.

The mechanism, which Gollakota describes as “pretty commonsensical,” still took the team a year to come up with. There’s a stem in the middle of the origami, comprising a solenoid coil (a coil that acts as a magnet when a current passes through it) and two small magnets. Four hinged carbon-fiber rods attach the stem to the edges of the structure. When a pulse of current is applied to the solenoid coil, it pushes the magnets toward each other, making the structure snap into its alternative shape.

All it requires is a tiny bit of power, just enough to put the magnets within the right distance from each other for the magnetic forces to work, Gollakota says. There is an array of thin, lightweight solar cells to harvest energy, which is stored in a little capacitor. The circuit is fabricated directly on the foldable origami structure, and also includes a microcontroller, timer, Bluetooth receiver, and pressure and temperature sensors.

“We can program these things to trigger the shape change based on any of these things—after a fixed time, when we send it a radio signal, or, at an altitude [or temperature] that this device detects,” Iyer adds. The origami structure is bistable, meaning it does not need any energy to maintain shape once it has transitioned.
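
To make the trigger logic concrete, here is a minimal sketch of the kind of firmware loop such a device might run. It is an illustration only: the sensor, radio, and actuator interfaces and the thresholds are hypothetical placeholders, not the UW team’s code.

```python
# Illustrative sketch only: a minimal trigger loop in the spirit of the shape-change
# logic described above. The sensors/radio/actuator interfaces and the thresholds
# are hypothetical placeholders, not the UW firmware.

import time

TRIGGER_ALTITUDE_M = 20.0   # assumed altitude threshold for snapping to the stable state
TRIGGER_TIMEOUT_S = 30.0    # assumed fallback timer

def should_transition(start_time, altitude_m, radio_command):
    """Snap to the stable (parachute-like) state if any trigger condition is met."""
    timed_out = (time.monotonic() - start_time) > TRIGGER_TIMEOUT_S
    low_enough = altitude_m < TRIGGER_ALTITUDE_M
    return timed_out or low_enough or radio_command

def run(sensors, radio, actuator):
    start_time = time.monotonic()
    while True:
        altitude = sensors.altitude_from_pressure()   # barometric altitude estimate
        command = radio.transition_requested()        # Bluetooth-triggered override
        if should_transition(start_time, altitude, command):
            actuator.fire_solenoid_pulse()   # one current pulse; bistability holds the shape
            break                            # no further energy needed to keep the new state
        time.sleep(0.05)
```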

The researchers say their design can be extended to incorporate sensors for a variety of environmental monitoring applications. “Think of these little fliers as a sensor platform to measure environmental conditions, like temperature, light, and other things, [and] how they vary throughout the atmosphere,” Iyer says. Or they can deploy sensors on the ground for things like digital agriculture, climate change–related studies, and tracking forest fires.

In their current prototype, the microfliers only shape-change in one direction, but the researchers want to make them transition in both directions, so they can toggle between the two states and control the trajectory even better. They also imagine a swarm of microfliers communicating with one another, controlling their behavior, and self-organizing how they fall and disperse.



Introduction: Geometric pattern formation is crucial in many tasks involving large-scale multi-agent systems. Examples include mobile agents performing surveillance, swarms of drones or robots, and smart transportation systems. Currently, most control strategies proposed to achieve pattern formation in network systems either show good performance but require expensive sensors and communication devices, or have lower sensor requirements but perform worse.

Methods: In this paper, we provide a distributed displacement-based control law that allows large groups of agents to achieve triangular and square lattices, with low sensor requirements and without needing communication between the agents. Also, a simple yet powerful adaptation law is proposed to automatically tune the control gains in order to reduce the design effort, while improving robustness and flexibility.

Results: We show the validity and robustness of our approach via numerical simulations and experiments, comparing it, where possible, with other approaches from the existing literature.
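
For readers who want a feel for what a displacement-based lattice law can look like, the following is a rough, generic sketch rather than the paper’s actual controller: each agent nudges itself toward a desired spacing d from every neighbor it can sense, with a fixed illustrative gain instead of the paper’s adaptive one.

```python
# Minimal, generic sketch of a displacement-based lattice controller (not the
# paper's exact law): each agent applies a spring-like correction toward a
# desired spacing d from every neighbor within its sensing radius.

import numpy as np

def lattice_step(positions, d=1.0, sensing_radius=1.5, gain=0.5, dt=0.05):
    """One control update for N planar agents; positions is an (N, 2) array."""
    velocities = np.zeros_like(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            offset = positions[j] - positions[i]   # locally measured displacement
            dist = np.linalg.norm(offset)
            if 0 < dist < sensing_radius:
                # attract if farther than d, repel if closer than d
                velocities[i] += gain * (dist - d) * offset / dist
    return positions + dt * velocities

# Usage: start from random positions and iterate until the agents settle.
positions = np.random.rand(30, 2) * 5.0
for _ in range(2000):
    positions = lattice_step(positions)
```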

The concept of sustainability and sustainable development has been well discussed and was the subject of many EU and UN conferences, resulting in agendas, goals, and resolutions. Yet, literature shows that the three dimensions of sustainability (ecological, social, and economic) are unevenly accounted for in the design of mechatronic products. The stated reasons range from a lack or inapplicability of tools for integration into the design process, models for simulation, and impact analyses to necessary changes in policy and social behavior. The influence designers have on the sustainability of a product lies mostly in the early design phases of the development process, such as requirements engineering and concept evaluation. Currently, these concepts emerge mostly from performance-based requirements rather than sustainability impact-based requirements, which is also true for service robots in urban environments. So far, the main focus of research in this innovative and growing product branch lies in performance in perception, navigation, and interaction. This paper sets its focus on integrating all three dimensions of sustainability into the design process. To that end, we describe the development of an urban service robot supporting municipal waste management in the city of Berlin. The goal set for the robot is to improve service and support employees while reducing emissions. For that, we make use of a product development process (PDP) and its adaptable nature to build a specific development process suited to include the three dimensions of sustainability during the requirements engineering and evaluation activities. Herein, we show how established design methods like life cycle assessment or life cycle costing can be applied to the development of urban service robots and which aspects are underrepresented. The social dimension in particular required us to look beyond standardized methods in the field of mechanical engineering. Based on our findings, we introduce a new activity to the development process, which we call preliminary social assessment, in order to incorporate social aspects in the early design phase.

6D pose recognition has been a crucial factor in the success of robotic grasping, and recent deep learning based approaches have achieved remarkable results on benchmarks. However, their generalization capabilities in real-world applications remain unclear. To overcome this gap, we introduce 6IMPOSE, a novel framework for sim-to-real data generation and 6D pose estimation. 6IMPOSE consists of four modules: First, a data generation pipeline that employs the 3D software suite Blender to create synthetic RGBD image datasets with 6D pose annotations. Second, an annotated RGBD dataset of five household objects was generated using the proposed pipeline. Third, a real-time two-stage 6D pose estimation approach that integrates the object detector YOLO-V4 and a streamlined, real-time version of the 6D pose estimation algorithm PVN3D optimized for time-sensitive robotics applications. Fourth, a codebase designed to facilitate the integration of the vision system into a robotic grasping experiment. Our approach demonstrates the efficient generation of large amounts of photo-realistic RGBD images and the successful transfer of the trained inference model to robotic grasping experiments, achieving an overall success rate of 87% in grasping five different household objects from cluttered backgrounds under varying lighting conditions. This is made possible by fine-tuning data generation and domain randomization techniques and optimizing the inference pipeline, overcoming the generalization and performance shortcomings of the original PVN3D algorithm. Finally, we make the code, synthetic dataset, and all the pre-trained models available on GitHub.
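
As a rough illustration of the two-stage structure described in the abstract (a 2D detector followed by a per-crop 6D pose estimator), here is a schematic sketch. The detector and pose_net objects and their methods are hypothetical stand-ins, not the released 6IMPOSE code.

```python
# Schematic sketch of a two-stage detection-then-pose pipeline like the one the
# abstract describes. The detector and pose_net objects and their methods are
# hypothetical stand-ins, not the released 6IMPOSE API.

def crop(image, box):
    x0, y0, x1, y1 = box
    return image[y0:y1, x0:x1]

def estimate_poses(rgb, depth, detector, pose_net, camera_K):
    """Stage 1: 2D object detection. Stage 2: 6D pose estimation on each crop."""
    poses = []
    for box, class_id in detector.detect(rgb):            # YOLO-style 2D detector
        rgb_crop, depth_crop = crop(rgb, box), crop(depth, box)
        # PVN3D-style stage: keypoint prediction on the RGB-D crop, then pose fitting
        rotation, translation = pose_net.predict(rgb_crop, depth_crop, camera_K)
        poses.append((class_id, rotation, translation))
    return poses
```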



There seem to be two general approaches to cooking automation. There’s the “let’s make a robot that can operate in a human kitchen because everyone has a human kitchen” approach, which seems like a good idea, except that you then have to build your robot to function in human environments, which is super hard. On the other end of the spectrum, there’s the “let’s make a dedicated automated system because automation is easier than robotics” approach, which seems like a good idea, except that you then have to be willing to accept compromises in recipes and texture and taste, because preparing food in an automated way just does not yield the same result, as anyone who has ever attempted to Cuisinart their way out of developing some knife skills can tell you.

The Robotics and Mechanisms Lab (RoMeLa) at UCLA, run by Dennis Hong, has been working on a compromise approach that leverages both robot-friendly automation and the kind of human skills that make things taste right. Called Project YORI, which somehow stands for “Yummy Operations Robot Initiative” while also meaning “cooking” in Korean, the system combines a robot-optimized environment with a pair of arms that can operate kitchen tools sort of like a human.

“Instead of trying to mimic how humans cook,” the researchers say, “we approached the problem by thinking how cooking would be accomplished if a robot cooks. Thus the YORI system does not use the typical cooking methods, tools or utensils which are developed for humans.” In addition to a variety of automated cooking systems, the tools that YORI does use are modified to work with a tool changing system, which mostly eliminates the problem of grasping something like a knife well enough that you can precisely and repeatedly exert a substantial amount of force through it, and also helps keep things structured and accessible.

In terms of cooking methods, the system takes advantage of technology when and where it works better than conventional human cooking techniques. For example, in order to tell whether ingredients are fresh or to determine when food is cooked ideally, YORI “utilizes unique chemical sensors,” which I guess are the robot equivalent of a nose and taste buds and arguably would do a more empirical job than some useless recipe metric like “season to taste.”

The advantage of a system like this is versatility. In theory, it’s not as constrained by recipes that you can cram into a system built around automation because of those added robotic capabilities, while also being somewhat practical—or at least, more practical than a robot designed to interact with a lightly modified human kitchen. And it’s actually designed to be practical(ish), in the sense that it’s being developed under a partnership with Woowa Brothers, the company that runs the leading food delivery service in South Korea. It’s obviously still a work in progress—you can see a human hand sneaking in there from time to time. But the approach seems interesting, and I hope that RoMeLa keeps making progress on it, because I’m hungry.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 02 February 2024, ZURICH

Enjoy today’s videos!

Musical dancing is a ubiquitous phenomenon in human society. Providing robots the ability to dance has the potential to make human/robot coexistence more acceptable. Hence, dancing robots have generated considerable research interest in recent years. In this paper, we present a novel formalization of robot dancing as planning and control of optimally timed actions based on beat timings and additional features extracted from the music.
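
The core idea of timing actions to the music can be illustrated in a few lines of Python: extract beat times with an off-the-shelf beat tracker and assign one move per beat. This is a hedged sketch of the general concept, not the authors’ planner; the move names are made up.

```python
# Hedged illustration of beat-synchronized action scheduling (the general idea,
# not the authors' planner). Beat times come from librosa's beat tracker; the
# move vocabulary below is made up.

import librosa

def schedule_moves(audio_path, moves):
    y, sr = librosa.load(audio_path)
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    beat_times = librosa.frames_to_time(beat_frames, sr=sr)
    # Assign one move per detected beat, cycling through the vocabulary.
    return [(t, moves[i % len(moves)]) for i, t in enumerate(beat_times)]

plan = schedule_moves("song.wav", ["step_left", "step_right", "arm_wave", "spin"])
```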

Wow! Okay, all robotics videos definitely need confetti cannons.

[ DFKI ]

What an incredibly relaxing robot video this is.

Except for the tree bit, I mean.

[ Paper ] via [ ASL ]

Skydio has a fancy new drone, but not for you!

Skydio X10, a drone designed for first responders, infrastructure operators, and the U.S. and allied militaries around the world. It has the sensors to capture every detail of the data that matters and the AI-powered autonomy to put those sensors wherever they are needed. It packs more capability and versatility in a smaller and easier-to-use package than has ever existed.

[ Skydio X10 ]

An innovative adaptive bipedal robot with bio-inspired multimodal locomotion control can autonomously adapt its body posture to balance on pipes, surmount obstacles of up to 14 centimeters in height (48 percent of its height), and stably move between horizontal and vertical pipe segments. This cutting-edge robotics technology addresses challenges that out-pipe inspection robots have encountered and can enhance out-pipe inspections within the oil and gas industry.

[ Paper ] via [ VISTEC ]

Thanks, Poramate!

I’m not totally sure how you’d control all of these extra arms in a productive way, but I’m sure they’ll figure it out!

[ KIMLAB ]

The video is one of the tests we tried on the X30 robot dog in the R&D period, to examine the speed of its stair-climbing ability.

[ Deep Robotics ]

They’re calling this the “T-REX” but without a pair of tiny arms. Missed opportunity there.

[ AgileX ]

Drag your mouse to look around within this 360-degree panorama captured by NASA’s Curiosity Mars rover. See the steep slopes, layered buttes, and dark rocks surrounding Curiosity while it was parked below Gediz Vallis Ridge, which formed as a result of violent debris flows that were later eroded by wind into a towering formation. This happened about 3 billion years ago, during one of the last wet periods seen on this part of the Red Planet.

[ NASA ]

I don’t know why you need to drive out into the woods to drop-test your sensor rack. Though maybe the stunning Canadian backwoods scenery is reason enough.

[ NORLab ]

Here’s footage of Reachy in the kitchen, opening the fridge’s door and others, cleaning dirt and coffee stains.

If they ever make Reachy’s face symmetrical, I will refuse to include it in any more Video Fridays. O_o

[ Pollen Robotics ]

Inertial odometry is an attractive solution to the problem of state estimation for agile quadrotor flight. In this work, we propose a learning-based odometry algorithm that uses an inertial measurement unit (IMU) as the only sensor modality for autonomous drone racing tasks. We show that our inertial odometry algorithm is superior to the state-of-the-art filter-based and optimization-based visual-inertial odometry as well as the state-of-the-art learned-inertial odometry in estimating the pose of an autonomous racing drone.
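
For context on why IMU-only estimation is hard, here is a generic strapdown dead-reckoning sketch, not the authors’ learned method: integrating raw gyro and accelerometer samples gives a pose estimate that drifts quickly, which is exactly what learned corrections aim to fix.

```python
# Generic strapdown dead-reckoning sketch (not the UZH RPG learned method):
# integrating raw IMU samples alone accumulates drift quickly, which is what
# learned inertial odometry tries to correct.

import numpy as np
from scipy.spatial.transform import Rotation as R

def integrate_imu(gyro, accel, dt, gravity=np.array([0.0, 0.0, -9.81])):
    """gyro, accel: (N, 3) body-frame samples at a fixed rate 1/dt; returns final position."""
    q = R.identity()          # body-to-world orientation
    v = np.zeros(3)           # world-frame velocity
    p = np.zeros(3)           # world-frame position
    for w, a in zip(gyro, accel):
        q = q * R.from_rotvec(w * dt)          # attitude propagation
        v = v + (q.apply(a) + gravity) * dt    # rotate specific force, remove gravity
        p = p + v * dt
    return p
```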

[ UZH RPG ]

Robotic Choreographer is the world’s first robot arm built solely for dance performance, born from the concept of performers that are bigger and faster than humans. The robot has a total length of 3 meters, two rotation axes that can rotate infinitely, and an arm that can complete up to five rotations per second.

[ MPlusPlus ] via [ Kazumichi Moriyama ]

This video shows the latest development from Extend Robotics, demonstrating the completion of integration of the Mitsubishi Electric Melfa robot. Key demonstrations include 6 degrees-of-freedom (DoF) precision control with real-time inverse kinematics, dual Kinect camera, low-latency streaming and fusion, and high precision control drawing.

[ Extend Robotics ]

Here’s what’s been going on at the GRASP Lab at UPenn.

[ GRASP Lab ]



This paper presents an in-pipe robot with three underactuated parallelogram crawler modules, which can automatically shift its body shape when encountering obstacles. The shape-shifting movement is achieved by only a single actuator through a simple differential mechanism, combining only a pair of spur gears. This leads to downsizing, cost reduction, and simplification of control for adaptation to obstacles. The parallelogram shape does not change the total belt circumference length; thus, a new mechanism to maintain belt tension is not necessary. Moreover, the proposed crawler can form an anterior-posterior symmetric parallelogram relative to the moving direction, which generates high adaptability in both forward and backward directions. However, whether locomotion or shape-shifting is driven depends on the gear ratio of the differential mechanism, because their movements are switched only mechanically. Therefore, to clarify the requirements on the gear ratio for passive adaptation, two outputs of each crawler mechanism (the torques of the flippers and the front pulley) are quasi-statically analyzed, and how environmental and design parameters influence the robot’s performance is verified by real experiments. From the experiments, although the robot could not adapt to the stepped pipe in the vertical section, it successfully shifted its crawler’s shape to a parallelogram in the horizontal section using only our simulated output ratio.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
ROSCon 2023: 18–20 October 2023, NEW ORLEANS
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 02 February 2024, ZURICH, SWITZERLAND

Enjoy today’s videos!

Researchers at the University of Washington have developed small robotic devices that can change how they move through the air by “snapping” into a folded position during their descent. When these “microfliers” are dropped from a drone, they use a Miura-ori origami fold to switch from tumbling and dispersing outward through the air to dropping straight to the ground.

And you can make your own! The origami part, anyway:

[ Science Robotics ] via [ UW ]

Thanks, Sarah!

A central question in robotics is how to design a control system for an agile, mobile robot. This paper studies this question systematically, focusing on a challenging setting: autonomous drone racing. We show that a neural network controller trained with reinforcement learning (RL) outperforms optimal control (OC) methods in this setting. Our findings allow us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 g and a peak velocity of 108 km/h.

Also, please see our feature story on a related topic.

[ Science Robotics ]

Ascento has a fresh $4.3m in funding to develop its cute two-wheeled robot for less-cute security applications.

[ Ascento ]

Thanks, Miguel!

The evolution of Roomba is here. Introducing three new robots, with three new powerful ways to clean. For over 30 years, we have been on a mission to build robots that help people to do more. Now, we are answering the call from consumers to expand our robot lineup to include more 2 in 1 robot vacuum and mop options.

[ iRobot ]

As the beginning of 2023 Weekly KIMLAB, we want to introduce PAPRAS, Plug-And-Play Robotic Arm System. A series of PAPRAS applications will be posted in coming weeks. If you are interested in details of PAPRAS, please check our paper.

[ Paper ] via [ KIMLAB ]

Gerardo Bledt was the Head of our Locomotion and Controls Team at Apptronik. He tragically passed away this summer. He was a friend, colleague, and force of nature. He was a maestro with robots, and showed all of us what was possible. We dedicate Apollo and our work to Gerardo.

[ Apptronik ]

This robot plays my kind of Jenga.

This teleoperated robot was built by Lingkang Zhang, who tells us that it was inspired by Sanctuary AI’s robot.

[ HRC Model 4 ]

Thanks, Lingkang!

Soft universal grippers are advantageous to safely grasp a wide variety of objects. However, due to their soft material, these grippers have limited lifetimes, especially when operating in unstructured and unfamiliar environments. Our self-healing universal gripper (SHUG) can grasp various objects and recover from substantial realistic damages autonomously. It integrates damage detection, heat-assisted healing, and healing evaluation. Notably, unlike other universal grippers, the entire SHUG can be fully reprocessed and recycled.

[ Paper ] via [ BruBotics ]

Thanks Bram!

What would the movie Barbie look like with robots?

[ Misty ]

Zoox is so classy that if you get in during the day and get out at night, it’ll apparently give you a free jean jacket.

[ Zoox ]

X30, the next generation of industrial inspection quadruped robot, is on its way. It is now moving and climbing faster, and it has stronger adaptability to adverse environments with advanced add-ons.

[ DeepRobotics ]

Join us on an incredible journey with Alma, a cutting-edge robot with the potential to revolutionize the lives of people with disabilities. This short documentary takes you behind the scenes of our team’s preparation for the Cybathlon challenge, a unique competition that brings together robotics and human ingenuity to solve real-world challenges.

[ Cybathlon ]

NASA’s Moon rover prototype completed software tests. The VIPER mission is managed by NASA’s Ames Research Center in California’s Silicon Valley and is scheduled to be delivered to Mons Mouton near the South Pole of the Moon in late 2024 by Astrobotic’s Griffin lander as part of the Commercial Lunar Payload Services initiative. VIPER will inform future Artemis landing sites by helping to characterize the lunar environment and help determine locations where water and other resources could be harvested to sustain humans over extended stays. 

[ NASA ]

We are excited to announce Husky Observer, a fully integrated system that enables robotics developers to accelerate inspection solutions. Built on top of the versatile Husky platform, this new configuration will enable robotics developers to build their inspection solutions and fast track their system development.

[ Clearpath ]

Land mines and other unexploded ordnance from wars past and present maim or kill thousands of civilians in dozens of nations every year. Finding and disarming them is a slow, dangerous process. Researchers from the Columbia Climate School’s Lamont-Doherty Earth Observatory and other institutions are trying to harness drones, geophysics and artificial intelligence to make the process faster and safer.

[ Columbia ]

Drones are being used by responders in the terrible Morocco earthquake. This 5-minute video describes the five ways in which drones are typically used in earthquake response, and four ways that they aren’t.

[ CRASAR ]



Soft robots’ natural dynamics call for the development of tailored modeling techniques for control. However, the high-dimensional configuration space of the geometrically exact modeling approaches for soft robots, i.e., Cosserat rod and Finite Element Methods (FEM), has been identified as a key obstacle in controller design. To address this challenge, Reduced Order Modeling (ROM), i.e., the approximation of the full-order models, and Model Order Reduction (MOR), i.e., reducing the state space dimension of a high-fidelity FEM-based model, are enjoying extensive research. Although both techniques serve a similar purpose and their terms have been used interchangeably in the literature, they differ in their assumptions and implementation. This review paper provides the first in-depth survey of ROM and MOR techniques in the continuum and soft robotics landscape to aid soft robotics researchers in selecting computationally efficient models for their specific tasks.



It’s hard to beat the energy density of chemical fuels. Batteries are quiet and clean and easy to integrate with electrically powered robots, but they’re 20 to 50 times less energy dense than a chemical fuel like methanol or butane. This is fine for most robots that can afford to just carry around a whole bunch of batteries, but as you start looking at robots that are insect-size or smaller, batteries simply don’t scale down very well. And it’s not just the batteries—electric actuators don’t scale down well either, especially if you’re looking for something that can generate a lot of power.

In a paper published 14 September in the journal Science, researchers from Cornell have tackled the small-scale actuation problem with what is essentially a very tiny, very soft internal-combustion engine. Methane vapor and oxygen are injected into a soft combustion chamber, where an itty-bitty li’l spark ignites the mixture. In half a millisecond, the top of the chamber balloons upward like a piston, generating forces of 9.5 newtons through a cycle that can repeat 100 times every second. Put two of these actuators together (driving two legs apiece) and you’ve got an exceptionally powerful soft quadruped robot.

Each of the two actuators powering this robot weighs just 325 milligrams and is about a quarter of the size of a U.S. penny. Part of the reason that they can be so small is that most of the associated components are off-board, including the fuel itself, the system that mixes and delivers the fuel, and the electrical source for the spark generator. But even without all of that stuff, the actuator has a bunch going on that enables it to operate continuously at high cycle frequencies without melting.

A view of the actuator and its component materials along with a diagram of the combustion actuation cycle. Science Robotics

The biggest issue may be that this actuator has to handle actual explosions, meaning that careful design is required to make sure that it doesn’t torch itself every time it goes off. The small combustion volume helps with this, as does the flame-resistant elastomer material and the integrated flame arrestor. Despite the violence inherent to how this actuator works, it’s actually very durable, and the researchers estimate that it can operate continuously for more than 750,000 cycles (8.5 hours at 50 hertz) without any drop in performance.

“What is interesting is just how powerful small-scale combustion is,” says Robert F. Shepherd, who runs the Organic Robotics Lab at Cornell. We covered some of Shepherd’s work on combustion-powered robots nearly a decade ago, with this weird pink jumping thing at IROS 2014. But going small has both challenges and benefits, Shepherd tells us. “We operate in the lower limit of what volumes of gases are combustible. It’s an interesting place for science, and the engineering outcomes are also useful.”

The first of those engineering outcomes is a little insect-scale quadrupedal robot that utilizes two of these soft combustion actuators to power a pair of legs each. The robot is 29 millimeters long and weighs just 1.6 grams, but it can jump a staggering 59 centimeters straight up and walk while carrying 22 times its own weight. For an insect-scale robot, Shepherd says, this is “near insect level performance, jumping extremely high, very quickly, and carrying large loads.”

Cornell University

It’s a little bit hard to see how the quadruped actually walks, since the actuators move so fast. Each actuator controls one side of the robot, with one combustion chamber connected to chambers at each foot with elastomer membranes. An advantage of this actuation system is that since the power source is gas pressure, you can implement that pressure somewhere besides the combustion chamber itself. Firing both actuators together moves the robot forward, while firing one side or the other can rotate the robot, providing some directional control.
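
In pseudocode, the steering scheme described above reduces to something like the following sketch, where fire_left and fire_right are hypothetical stand-ins for igniting each side’s actuator; this is an illustration of the described behavior, not the Cornell controller.

```python
# Illustrative only: the differential-firing steering logic implied above.
# fire_left and fire_right are hypothetical stand-ins for igniting one side's
# combustion actuator for a single power stroke.

def drive(fire_left, fire_right, command):
    """Map a high-level command to which actuator(s) to fire this cycle."""
    if command == "forward":
        fire_left()
        fire_right()      # both sides together: straight-line locomotion
    elif command == "turn_left":
        fire_right()      # fire only the right side to yaw left
    elif command == "turn_right":
        fire_left()       # fire only the left side to yaw right
```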

“It took a lot of care, iterations, and intelligence to come up with this steerable, insect-scale robot,” Shepherd told us. “Does it have to have legs? No. It could be a speedy slug, or a flapping bee. The amplitudes and frequencies possible with this system allow for all of these possibilities. In fact, the real issue we have is making things move slowly.”

Getting these actuators to slow down a bit is one of the things that the researchers are looking at next. By trading speed for force, the idea is to make robots that can walk as well as run and jump. And of course finding a way to untether these systems is a natural next step. Some of the other stuff that they’re thinking about is pretty wild, as Shepherd tells us: “One idea we want to explore in the future is using aggregates of these small and powerful actuators as large, variable recruitment musculature in large robots. Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots.” Personally, I’m having trouble even picturing a robot like that, but that’s what’s exciting about it, right? A large robot with muscles powered by thousands of tiny explosions—wow.

Powerful, soft combustion actuators for insect-scale robots, by Cameron A. Aubin, Ronald H. Heisser, Ofek Peretz, Julia Timko, Jacqueline Lo, E. Farrell Helbling, Sadaf Sobhani, Amir D. Gat, and Robert F. Shepherd from Cornell, is published in Science.



This sponsored article is brought to you by NYU Tandon School of Engineering.

To address today’s health challenges, especially in our aging society, we must become more intelligent in our approaches. Clinicians now have access to a range of advanced technologies designed to assist early diagnosis, evaluate prognosis, and enhance patient health outcomes, including telemedicine, medical robots, powered prosthetics, exoskeletons, and AI-powered smart wearables. However, many of these technologies are still in their infancy.

The belief that advancing technology can improve human health is central to research related to medical device technologies. This forms the heart of research for Prof. S. Farokh Atashzar, who directs the Medical Robotics and Interactive Intelligent Technologies (MERIIT) Lab at the NYU Tandon School of Engineering.

Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at NYU Tandon. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life.

Atashzar’s work is dedicated to developing intelligent, interactive robotic, and AI-driven assistive machines that can augment human sensorimotor capabilities and allow our healthcare system to go beyond natural competences and overcome physiological and pathological barriers.

Stroke detection and rehabilitation

Stroke is the leading cause of age-related motor disabilities and is becoming more prevalent in younger populations as well. But while there is a burgeoning marketplace for rehabilitation devices that claim to accelerate recovery, including robotic rehabilitation systems, recommendations for how and when to use them are based mostly on subjective evaluation of the sensorimotor capacities of patients in need.

Atashzar is working in collaboration with John-Ross Rizzo, associate professor of Biomedical Engineering at NYU Tandon and Ilse Melamid Associate Professor of rehabilitation medicine at the NYU School of Medicine and Dr. Ramin Bighamian from the U.S. Food and Drug Administration to design a regulatory science tool (RST) based on data from biomarkers in order to improve the review processes for such devices and how best to use them. The team is designing and validating a robust recovery biomarker enabling a first-ever stroke rehabilitation RST based on exchanges between regions of the central and peripheral nervous systems.

S. Farokh Atashzar is an Assistant Professor of Electrical and Computer Engineering and Mechanical and Aerospace Engineering at New York University Tandon School of Engineering. He is also a member of NYU WIRELESS, a consortium of researchers dedicated to the next generation of wireless technology, as well as the Center for Urban Science and Progress (CUSP), a center of researchers dedicated to all things related to the future of modern urban life, and directs the MERIIT Lab at NYU Tandon. NYU Tandon

In addition, Atashzar is collaborating with Smita Rao, PT, the inaugural Robert S. Salant Endowed Associate Professor of Physical Therapy. Together, they aim to identify AI-driven computational biomarkers for motor control and musculoskeletal damage and to decode the hidden complex synergistic patterns of degraded muscle activation using data collected from surface electromyography (sEMG) and high-density sEMG. In the past few years, this collaborative effort has been exploring the fascinating world of “Nonlinear Functional Muscle Networks” — a new computational window (rooted in Shannon’s information theory) into human motor control and mobility. This synergistic network orchestrates the “music of mobility,” harmonizing the synchrony between muscles to facilitate fluid movement.
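
The article does not spell out the exact formulation, but the information-theoretic idea behind a functional muscle network can be sketched as pairwise mutual information between sEMG envelopes, keeping an edge wherever two muscles share substantial information. The Python sketch below is purely illustrative and is not the MERIIT lab's actual pipeline; the envelope method, bin count, and threshold are all assumptions.

```python
# Illustrative sketch (not the MERIIT lab's method): build a "functional
# muscle network" by estimating pairwise mutual information between sEMG
# envelopes, following the information-theoretic idea described above.
import numpy as np

def envelope(emg, fs=2000, win_ms=100):
    """Crude sEMG envelope: rectify, then moving-average over win_ms."""
    rect = np.abs(emg)
    win = max(1, int(fs * win_ms / 1000))
    return np.convolve(rect, np.ones(win) / win, mode="same")

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information (in bits) between two 1-D signals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def muscle_network(emg_channels, fs=2000, threshold=0.1):
    """Return a symmetric adjacency matrix with edges where MI exceeds threshold."""
    envs = np.array([envelope(ch, fs) for ch in emg_channels])
    n = len(envs)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi = mutual_information(envs[i], envs[j])
            adj[i, j] = adj[j, i] = mi if mi >= threshold else 0.0
    return adj

# Example with synthetic data: 4 "muscles", 2 seconds sampled at 2 kHz.
rng = np.random.default_rng(0)
emg = rng.standard_normal((4, 4000))
emg[1] += 0.5 * emg[0]          # induce coupling between channels 0 and 1
print(muscle_network(emg))
```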

But rehabilitation is only one of the research thrusts at the MERIIT lab. Preventing strokes from occurring, or recurring, heads off the problem entirely. For Atashzar, a big clue could lie where you least expect it: in your retina.

Atashzar, along with NYU Abu Dhabi Assistant Professor Farah Shamout, is working on a project they call “EyeScore,” an AI-powered technology that uses non-invasive scans of the retina to predict the recurrence of stroke in patients. They use optical coherence tomography (a scan of the back of the retina) and track changes over time using advanced deep learning models. The retina, connected directly to the brain through the optic nerve, can serve as a physiological window onto changes in the brain itself.

Atashzar and Shamout are currently formulating their hybrid AI model, pinpointing the exact changes that can predict a stroke and its recurrence. The resulting tool will analyze these images and flag potentially troublesome developments. And since the scans are already in use in optometrists’ offices, this life-saving technology could be in the hands of medical professionals sooner than expected.
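
The details of EyeScore’s model are not described here, so the following is only a minimal sketch of the general idea: a small convolutional network encodes a baseline and a follow-up OCT scan and scores the change between them. The architecture, input size, and class name are assumptions, not the team’s design.

```python
# Illustrative sketch only (EyeScore's actual model is not public here): a
# small CNN compares a baseline and follow-up OCT scan and outputs a
# stroke-recurrence risk score. Every design choice below is an assumption.
import torch
import torch.nn as nn

class OCTRecurrenceNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to each visit's OCT B-scan (1 x 224 x 224).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head sees both embeddings plus their difference, emphasizing
        # longitudinal change between the two visits.
        self.head = nn.Sequential(nn.Linear(32 * 3, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, baseline, followup):
        zb, zf = self.encoder(baseline), self.encoder(followup)
        logits = self.head(torch.cat([zb, zf, zf - zb], dim=1))
        return torch.sigmoid(logits)  # recurrence-risk score in [0, 1]

model = OCTRecurrenceNet()
baseline = torch.randn(2, 1, 224, 224)   # batch of 2 synthetic scans
followup = torch.randn(2, 1, 224, 224)
print(model(baseline, followup).shape)   # torch.Size([2, 1])
```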

Preventing downturns

Atashzar is utilizing AI algorithms for uses beyond stroke. Like many researchers, his gaze was drawn to the largest medical event in recent history: COVID-19. In the throes of the COVID-19 pandemic, the very bedrock of global healthcare delivery was shaken. COVID-19 patients, susceptible to swift and severe deterioration, presented a serious problem for caregivers.

Especially in the pandemic’s early days, when our grasp of the virus was tenuous at best, predicting patient outcomes posed a formidable challenge. The merest tweaks in admission protocols held the power to dramatically shift patient fates, underscoring the need for vigilant monitoring. As healthcare systems groaned under the pandemic’s weight and contagion fears loomed, outpatient and nursing center residents were steered toward remote symptom tracking via telemedicine. This cautious approach sought to spare them unnecessary hospital exposure, allowing in-person visits only for those in the throes of grave symptoms.

But while much of the pandemic’s research spotlight fell on diagnosing COVID-19, this study took a different avenue: predicting future patient deterioration. Existing studies often juggled an array of data inputs, from complex imaging to lab results, but failed to harness the temporal aspects of the data. This research, by contrast, prioritized simplicity and scalability, leaning on data that can easily be gathered not only within hospital walls but also in the comfort of patients’ homes using simple wearables.

S. Farokh Atashzar and colleagues at NYU Tandon are using deep neural network models to assess COVID data and predict future patient deterioration.

Atashzar, along with his project co-PI Yao Wang, Professor of Biomedical Engineering and Electrical and Computer Engineering at NYU Tandon, used a novel deep neural network model to assess COVID data, leveraging time series of just three vital signs to foresee COVID-19 patient deterioration for some 37,000 patients. The ultimate prize? A streamlined predictive model capable of aiding clinical decision-making for a wide spectrum of patients. Oxygen saturation, heart rate, and temperature formed the trio of vital signs under scrutiny, a choice propelled by the ubiquity of wearable tech like smartwatches. Other signs, such as blood pressure, were deliberately excluded because they cannot be captured by these wearables.

The researchers used real-world data from NYU Langone Health’s archives spanning January 2020 to September 2022. Analyzing vital-sign data from the preceding 24 hours, the model predicted deterioration within timeframes of 3 to 24 hours, forecasting outcomes ranging from in-hospital mortality to intensive care unit admission or intubation.
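
As a rough illustration of that setup (not the published architecture), a recurrent network can read a 24-hour window of the three vital signs and output a probability of deterioration within the chosen horizon. Everything below, including the hourly sampling and the hidden size, is an assumption made for the sketch.

```python
# Minimal sketch of the general setup (not the published model): a recurrent
# network reads a 24-hour window of three vital signs sampled hourly and
# predicts whether the patient deteriorates within the chosen horizon.
import torch
import torch.nn as nn

class DeteriorationPredictor(nn.Module):
    def __init__(self, n_vitals=3, hidden=64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_vitals, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, vitals):                   # vitals: (batch, 24, 3)
        _, h = self.gru(vitals)                  # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # probability of deterioration

model = DeteriorationPredictor()
window = torch.randn(8, 24, 3)  # 8 patients: SpO2, heart rate, temperature (normalized)
risk = model(window)
print(risk.shape)  # torch.Size([8, 1])
```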

“In a situation where a hospital is overloaded, getting a CT scan for every single patient would be very difficult or impossible, especially in remote areas when the healthcare system is overstretched,” says Atashzar. “So we are minimizing the need for data, while at the same time, maximizing the accuracy for prediction. And that can help with creating better healthcare access in remote areas and in areas with limited healthcare.”

In addition to addressing the pandemic at the micro level (individuals), Atashzar and his team are also working on algorithmic solutions that can assist the healthcare system at the meso and macro level. In another effort related to COVID-19, Atashzar and his team are developing novel probabilistic models that can better predict the spread of disease when taking into account the effects of vaccination and mutation of the virus. Their efforts go beyond the classic small-scale models that were previously used for small epidemics. They are working on these large-scale complex models in order to help governments better prepare for pandemics and mitigate rapid disease spread. Atashzar is drawing inspiration from his active work with control algorithms used in complex networks of robotic systems. His team is now utilizing similar techniques to develop new algorithmic tools for controlling spread in the networked dynamic models of human society.
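
To make the flavor of such models concrete, here is a toy compartmental model with a vaccination term, a SIRV extension of the classic SIR equations integrated with simple Euler steps. It is only an illustration; the team’s probabilistic, mutation-aware models are far richer, and the rates used below are made up.

```python
# Toy SIRV (susceptible-infected-recovered-vaccinated) model, sketched only to
# illustrate compartmental spread models with vaccination; it is not the
# team's model, and all parameter values are arbitrary.
import numpy as np

def simulate_sirv(beta=0.3, gamma=0.1, nu=0.01, days=180, dt=0.1, i0=1e-3):
    """beta: transmission rate, gamma: recovery rate, nu: vaccination rate."""
    s, i, r, v = 1.0 - i0, i0, 0.0, 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt      # new infections this step
        new_rec = gamma * i * dt         # recoveries this step
        new_vac = nu * s * dt            # susceptibles vaccinated this step
        s += -new_inf - new_vac
        i += new_inf - new_rec
        r += new_rec
        v += new_vac
        history.append((s, i, r, v))
    return np.array(history)

traj = simulate_sirv()
print("peak infected fraction:", traj[:, 1].max())
```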

A state-of-the-art human-machine interface module with a wearable controller is one of many multi-modal technologies tested in S. Farokh Atashzar’s MERIIT Lab at NYU Tandon. Image: NYU Tandon

Where minds meet machines

These projects represent only a fraction of Atashzar’s work. In the MERIIT lab, he and his students build cyber-physical systems that augment the functionality of next-generation medical robotic systems. They delve into haptics and robotics for a wide range of medical applications. Examples include telesurgery and telerobotic rehabilitation, which are built upon the capabilities of next-generation telecommunications. The team is specifically interested in applying the 5G-based tactile internet to medical robotics.

Recently, he received a donation from the Intuitive Foundation: a Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another, whether they are in a different city, region, or even continent. While several researchers have investigated this vision in the past decade, Atashzar is specifically concentrating on connecting the power of the surgeon’s mind with the autonomy of surgical robots, promoting discussions on ways to share surgical autonomy between the intelligence of machines and the minds of surgeons. This approach aims to reduce mental fatigue and cognitive load on surgeons while reintroducing the sense of haptics lost in traditional surgical robotic systems.
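
One common way to express this kind of shared autonomy, shown here only as an illustration and not as the MERIIT lab’s controller, is to blend the surgeon’s commanded motion with an autonomous suggestion using an arbitration weight that grows with the assistant’s confidence. The function name, velocity values, and weighting scheme below are assumptions.

```python
# Illustrative shared-control arbitration (not the MERIIT lab's controller):
# blend the surgeon's commanded tool velocity with an autonomous assistant's
# suggestion, weighting the assistant more as its confidence grows.
import numpy as np

def blend_commands(u_surgeon, u_autonomy, confidence, alpha_max=0.6):
    """Return blended command; confidence in [0, 1] scales the autonomy share."""
    alpha = alpha_max * float(np.clip(confidence, 0.0, 1.0))
    return (1.0 - alpha) * np.asarray(u_surgeon) + alpha * np.asarray(u_autonomy)

# Surgeon pushes the tool along +x; autonomy nudges it toward a safer path.
u_s = np.array([5.0, 0.0, 0.0])   # mm/s
u_a = np.array([4.0, 1.5, 0.0])   # mm/s
print(blend_commands(u_s, u_a, confidence=0.8))
```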

Atashzar poses with NYU Tandon’s Da Vinci research kit. This state-of-the-art surgical system will allow his team to explore ways for a surgeon in one location to operate on a patient in another, whether they are in a different city, region, or even continent. Image: NYU Tandon

In a related line of research, the MERIIT lab is also focusing on cutting-edge human-machine interface technologies that enable neuro-to-device capabilities. These technologies have direct applications in exoskeletal devices, next-generation prosthetics, rehabilitation robots, and possibly the upcoming wave of augmented reality systems in our smart and connected society. One significant challenge common to such systems, and a focus of the team, is predicting the intended actions of human users by processing signals generated by the functional behavior of motor neurons.

By solving this challenge with advanced AI modules running in real time, the team can decode a user’s motor intentions and predict the intended gestures for controlling robots and virtual reality systems in an agile and robust manner. Practical challenges include ensuring the generalizability, scalability, and robustness of these AI-driven solutions, given the variability of human neurophysiology and the heavy reliance of classic models on data. Powered by such predictive models, the team is advancing the complex control of human-centric machines and robots. They are also crafting algorithms that take into account human physiology and biomechanics. This requires transdisciplinary solutions bridging AI and nonlinear control theory.
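
A standard pipeline for this kind of intent decoding, offered here only as a sketch and not as the team’s models, extracts sliding-window features (such as per-channel RMS) from the sEMG stream and feeds them to a lightweight classifier that predicts the intended gesture in real time. The channel counts, window sizes, and classifier choice below are assumptions.

```python
# Sketch of a conventional real-time intent-decoding pipeline (an
# illustration, not the team's models): sliding-window RMS features from each
# sEMG channel feed a lightweight classifier that predicts the gesture.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rms_features(window):
    """window: (n_samples, n_channels) -> one RMS value per channel."""
    return np.sqrt(np.mean(window ** 2, axis=0))

def windows(emg, size=200, step=50):
    """Slide a window over (n_samples, n_channels) sEMG and stack feature rows."""
    return np.array([rms_features(emg[s:s + size])
                     for s in range(0, len(emg) - size + 1, step)])

# Synthetic stand-in for labeled training data: 8-channel sEMG, 2 gestures.
rng = np.random.default_rng(1)
rest = rng.normal(0.0, 0.1, (4000, 8))
grasp = rng.normal(0.0, 0.4, (4000, 8))   # stronger activation
X = np.vstack([windows(rest), windows(grasp)])
y = np.array([0] * len(windows(rest)) + [1] * len(windows(grasp)))

clf = LogisticRegression(max_iter=1000).fit(X, y)

# "Real-time" use: classify the most recent 200-sample window as it arrives.
latest = rng.normal(0.0, 0.4, (200, 8))
print("predicted gesture:", clf.predict(rms_features(latest).reshape(1, -1))[0])
```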

Atashzar’s work dovetails perfectly with the work of other researchers at NYU Tandon, which prizes interdisciplinary work without the silos of traditional departments.

“Dr. Atashzar shines brightly in the realm of haptics for telerobotic medical procedures, positioning him as a rising star in his research community,” says Katsuo Kurabayashi, the new chair of the Mechanical and Aerospace Engineering department at NYU Tandon. “His pioneering research carries the exciting potential to revolutionize rehabilitation therapy, facilitate the diagnosis of neuromuscular diseases, and elevate the field of surgery. This holds the key to ushering in a new era of sophisticated remote human-machine interactions and leveraging machine learning-driven sensor signal interpretations.”

This commitment to human health, through the embrace of new advances in biosignals, robotics, and rehabilitation, is at the heart of Atashzar’s enduring work, and his unconventional approaches to age-old problems make him a perfect example of the approach to engineering embraced at NYU Tandon.



This paper focuses on the topic of “everyday life” as it is addressed in Human-Robot Interaction (HRI) research. It starts from the argument that while human daily life with social robots has been increasingly discussed and studied in HRI, the concept of everyday life lacks clarity or systematic analysis and plays only a secondary role in supporting the study of key HRI topics. In order to help conceptualise everyday life as a research theme in HRI in its own right, we provide an overview of Social Science and Humanities (SSH) perspectives on everyday life and lived experiences, particularly in sociology, and identify the key elements that may serve to further develop and empirically study such a concept in HRI. We propose new angles of analysis that may help better explore unique aspects of human engagement with social robots. We look at the everyday not just as a reality as we know it (i.e., the realm of the “ordinary”) but also as the future that we need to envision and strive to materialise (i.e., the transformation that will take place through the “extraordinary” that comes with social robots). Finally, we argue that HRI research would benefit not only from a systematic conceptualisation of contemporary everyday life with social robots but also from a critique of it. This is how HRI studies could play an important role in challenging current ways of understanding what makes different aspects of the human world “natural” and ultimately help bring about social change towards what we consider a “good life.”
