Feed aggregator



Last week, Google or Alphabet or X or whatever you want to call it announced that its Everyday Robots team has grown enough and made enough progress that it's time for it to become its own thing, now called, you guessed it, "Everyday Robots." There's a new website of questionable design along with a lot of fluffy descriptions of what Everyday Robots is all about. But fortunately, there are also some new videos and enough details about the engineering and the team's approach that it's worth spending a little bit of time wading through the clutter to see what Everyday Robots has been up to over the last couple of years and what their plans are for the near future.

That close to the arm seems like a really bad place to put an E-Stop, right?

Our headline may sound a little bit snarky, but the headline in Alphabet's own announcement blog post is "everyday robots are (slowly) leaving the lab." It's less of a dig and more of an acknowledgement that getting mobile manipulators to usefully operate in semi-structured environments has been, and continues to be, a huge challenge. We'll get into the details in a moment, but the high-level news here is that Alphabet appears to have thrown a lot of resources behind this effort while embracing a long time horizon, and that its investment is starting to pay dividends. This is a nice surprise, considering the somewhat haphazard state (at least to outside appearances) of Google's robotics ventures over the years.

The goal of Everyday Robots, according to Astro Teller, who runs Alphabet's moonshot stuff, is to create "a general-purpose learning robot," which sounds moonshot-y enough I suppose. To be fair, they've got an impressive amount of hardware deployed, says Everyday Robots' Hans Peter Brøndmo:

We are now operating a fleet of more than 100 robot prototypes that are autonomously performing a range of useful tasks around our offices. The same robot that sorts trash can now be equipped with a squeegee to wipe tables, and use the same gripper that grasps cups to open doors.

That's a lot of robots, which is awesome, but I have to question what "autonomously" actually means along with what "a range of useful tasks" actually means. There is really not enough publicly available information for us (or anyone?) to assess what Everyday Robots is doing with its fleet of 100 prototypes, how much manipulator-holding is required, the constraints under which they operate, and whether calling what they do "useful" is appropriate.

If you'd rather not wade through Everyday Robots' weirdly overengineered website, we've extracted the good stuff (the videos, mostly) and reposted them here, along with a little bit of commentary underneath each.

Introducing Everyday Robots

Everyday Robots

0:01 — Is it just me, or does the gearing behind those motions sound kind of, um, unhealthy?

0:25 — A bit of an overstatement about the Nobel Prize for picking a cup up off of a table, I think. Robots are pretty good at perceiving and grasping cups off of tables, because it's such a common task. Like, I get the point, but I just think there are better examples of problems that are currently human-easy and robot-hard.

1:13 — It's not necessarily useful to draw that parallel between computers and smartphones and compare them to robots, because there are certain physical realities (like motors and manipulation requirements) that prevent the kind of scaling to which the narrator refers.

1:35 — This is a red flag for me because we've heard this "it's a platform" thing so many times before and it never, ever works out. But people keep on trying it anyway. It might be effective when constrained to a research environment, but fundamentally, "platform" typically means "getting it to do (commercially?) useful stuff is someone else's problem," and I'm not sure that's ever been a successful model for robots.

2:10 — Yeah, okay. This robot sounds a lot more normal than the robots at the beginning of the video; what's up with that?

2:30 — I am a big fan of Moravec's Paradox and I wish it would get brought up more when people talk to the public about robots.

The challenge of everyday

Everyday Robots

0:18 — I like the door example, because you can easily imagine how many variations there are that would be catastrophic for most robots: different levers or knobs, glass in places, variable weight and resistance, and then, of course, thresholds and other nasty things like that.

1:03 — Yes. It can't be reinforced enough, especially in this context, that computers (and by extension robots) are really bad at understanding things. Recognizing things, yes. Understanding them, not so much.

1:40 — People really like throwing shade at Boston Dynamics, don't they? It doesn't seem fair to me, though, especially aimed at a company that Google used to own. What Boston Dynamics is doing is very hard, very impressive, and come on, pretty darn exciting. You can acknowledge that someone else is working on hard and exciting problems while you're working on different hard and exciting problems yourself, without getting miffed because what you're doing is, like, less flashy or whatever.

A robot that learns

Everyday Robots

0:26 — Saying that the robot is low cost is meaningless without telling us how much it costs. Seriously: "low cost" for a mobile manipulator like this could easily be (and almost certainly is) several tens of thousands of dollars at the very least.

1:10 — I love the inclusion of things not working. Everyone should do this when presenting a new robot project. Even if your budget is infinity, nobody gets everything right all the time, and we all feel better knowing that others are just as flawed as we are.

1:35 — I'd personally steer clear of using words like "intelligently" when talking about robots trained using reinforcement learning techniques, because most people associate "intelligence" with the kind of fundamental world understanding that robots really do not have.

Training the first task

Everyday Robots

1:20 — As a research task, I can see this being a useful project, but it's important to point out that this is a terrible way of automating the sorting of recyclables from trash. Since all of the trash and recyclables already get collected and (presumably) brought to a few centralized locations, in reality you'd just have your system there, where the robots could be stationary and have some control over their environment and do a much better job much more efficiently.

1:15 — Hopefully they'll talk more about this later, but when thinking about this montage, it's important to ask which of these tasks you'd actually want a mobile manipulator doing in the real world, and which you'd just want automated somehow, because those are very different things.

Building with everyone

Everyday Robots

0:19 — It could be a little premature to be talking about ethics at this point, but on the other hand, there's a reasonable argument to be made that there's no such thing as too early to consider the ethical implications of your robotics research. The latter is probably a better perspective, honestly, and I'm glad they're thinking about it in a serious and proactive way.

1:28 — Robots like these are not going to steal your job. I promise.

2:18 — Robots like these are also not the robots that he's talking about here, but the point he's making is a good one, because in the near to medium term, robots are going to be most valuable in roles where they can increase human productivity by augmenting what humans can do on their own, rather than replacing humans completely.

3:16 — Again, that platform idea...blarg. The whole "someone has written those applications" thing, uh, who, exactly? And why would they? The difference between smartphones (which have a lucrative app ecosystem) and robots (which do not) is that without any third party apps at all, a smartphone has core functionality useful enough that it justifies its own cost. It's going to be a long time before robots are at that point, and they'll never get there if the software applications are always someone else's problem.

Everyday Robots

I'm a little bit torn on this whole thing. A fleet of 100 mobile manipulators is amazing. Pouring money and people into solving hard robotics problems is also amazing. I'm just not sure that the vision of an "Everyday Robot" that we're being asked to buy into is necessarily a realistic one.

The impression I get from watching all of these videos and reading through the website is that Everyday Robots wants us to believe that it's actually working towards putting general-purpose mobile manipulators into everyday environments in a way where people (outside of the Google campus) will be able to benefit from them. And maybe the company is working towards that exact thing, but is that a practical goal, and does it make sense?

The fundamental research being undertaken seems solid; these are definitely hard problems, and solutions to these problems will help advance the field. (Those advances could be especially significant if these techniques and results are published or otherwise shared with the community.) And if the reason to embody this work in a robotic platform is to help inspire that research, then great, I have no issue with that.

But I'm really hesitant to embrace this vision of generalized in-home mobile manipulators doing useful tasks autonomously in a way that's likely to significantly help anyone who's actually watching Everyday Robots' videos. And maybe this is the whole point of a moonshot vision—to work on something hard that won't pay off for a long time. And again, I have no problem with that. However, if that's the case, Everyday Robots should be careful about how it contextualizes and portrays its efforts (and even its successes), why it's working on a particular set of things, and how outside observers should set their expectations. Over and over, companies have overpromised and underdelivered on helpful and affordable robots. My hope is that Everyday Robots is not in the middle of making the exact same mistake.

It is almost a foregone conclusion that robots cannot be morally responsible agents, both because they lack traditional features of moral agency like consciousness, intentionality, or empathy and because of the apparent senselessness of holding them accountable. Moreover, although some theorists include them in the moral community as moral patients, on the Strawsonian picture of moral community as requiring moral responsibility, robots are typically excluded from membership. By looking closely at our actual moral responsibility practices, however, I determine that the agency reflected and cultivated by them is limited to the kind of moral agency of which some robots are capable, not the philosophically demanding sort behind the traditional view. Hence, moral rule-abiding robots (if feasible) can be sufficiently morally responsible and thus moral community members, despite certain deficits. Alternative accountability structures could address these deficits, which I argue ought to be in place for those existing moral community members who share these deficits.

Background: Play is critical for children’s physical, cognitive, and social development. Technology-based toys like robots are especially of interest to children. This pilot study explores the affordances of the play area provided by developmentally appropriate toys and a mobile socially assistive robot (SAR). The objective of this study is to assess the role of the SAR on physical activity, play behavior, and toy-use behavior of children during free play.

Methods: Six children (5 females; mean age 3.6 ± 1.9 years) participated in the majority of our pilot study's seven 30-minute weekly play sessions (4 baseline and 3 intervention). During baseline sessions, the SAR was powered off. During intervention sessions, the SAR was teleoperated to move in the play area and offered rewards of lights, sounds, and bubbles to children. Thirty-minute videos of the play sessions were annotated using a momentary time sampling observation system. The mean percentage of time spent in behaviors of interest in baseline and intervention sessions was calculated. Paired Wilcoxon signed-rank tests were conducted to assess differences between baseline and intervention sessions.

Results: There was a significant increase in children’s standing (∼15%; Z = −2.09; p = 0.037) and a tendency for less time sitting (∼19%; Z = −1.89; p = 0.059) in the intervention phase as compared to the baseline phase. There was also a significant decrease (∼4.5%, Z = −2.70; p = 0.007) in peer interaction play and a tendency for greater (∼4.5%, Z = −1.89; p = 0.059) interaction with adults in the intervention phase as compared to the baseline phase. There was a significant increase in children’s interaction with the robot (∼11.5%, Z = −2.52; p = 0.012) in the intervention phase as compared to the baseline phase.

Conclusion: These results may indicate that a mobile SAR provides affordances through rewards that elicit children’s interaction with the SAR and more time standing in free play. This pilot study lays a foundation for exploring the role of SARs in inclusive play environments for children with and without mobility disabilities in real-world settings like day-care centers and preschools.
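For readers who want to run this kind of comparison themselves, here is a minimal sketch, not the authors' code, of the paired Wilcoxon signed-rank test described in the Methods; the per-child percentages below are made up purely for illustration.

```python
# Illustrative only: paired Wilcoxon signed-rank test on hypothetical per-child
# percentages of session time spent standing (baseline vs. intervention).
import numpy as np
from scipy.stats import wilcoxon

baseline = np.array([20.0, 35.5, 18.2, 42.0, 27.3, 30.1])      # 6 children
intervention = np.array([38.5, 47.0, 31.6, 55.2, 40.8, 44.9])  # same 6 children

stat, p = wilcoxon(baseline, intervention)  # paired, two-sided by default
print(f"W = {stat:.1f}, p = {p:.3f}")
print(f"Mean change: {np.mean(intervention - baseline):+.1f} percentage points")
```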



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

We first met Cleo Robotics at CES 2017, when they were showing off a consumer prototype of their unique ducted-fan drone. They've just announced a new version which has been beefed up to do surveillance, and it is actually called the Dronut.

For such a little thing, the 12-minute flight time is not the worst, and hopefully it'll find a unique niche that'll help Cleo move back towards the consumer market, because I want one.

[ Cleo ]

Happy tenth birthday, Thymio!

[ EPFL ]

Here we describe a protective strategy for winged drones that mitigates the added weight and drag by means of increased lift generation and stall delay at high angles of attack. The proposed structure is inspired by the wing system found in beetles and consists of adding an additional set of retractable wings, named elytra, which can rapidly encapsulate the main folding wings when protection is needed.

[ EPFL ]

This is some very, very impressive robust behavior on ANYmal, part of Joonho Lee's master's thesis at ETH Zurich.

[ ETH Zurich ]

NTT DOCOMO, INC. announced today that it has developed a blade-free, blimp-type drone equipped with a high-resolution video camera that captures high-quality video and full-color LED lights that glow in radiant colors.

[ NTT Docomo ] via [ Gizmodo ]

Senior Software Engineer Daniel Piedrahita explains the theory behind robust dynamic stability and how Agility engineers used it to develop a unique and cohesive hardware and software solution that allows Digit to navigate unpredictable terrain with ease.

[ Agility ]

The title of this video from DeepRobotics is "DOOMSDAY COMING." Best not to think about it, probably.

[ DeepRobotics ]

More Baymax!

[ Disney ]

At Ben-Gurion University of the Negev, they're trying to figure out how to make a COVID-19 officer robot authoritative enough that people will actually pay attention to it and do what it says.

[ Paper ]

Thanks, Andy!

You'd think that high voltage powerlines would be the last thing you'd want a drone to futz with, but here we are.

[ GRVC ]

Cassie Blue navigates around furniture treated as obstacles in the atrium of the Ford Robotics Building at the University of Michigan.

[ Michigan Robotics ]

Northrop Grumman and its partners AVL, Intuitive Machines, Lunar Outpost and Michelin are designing a new vehicle that will greatly expand and enhance human and robotic exploration of the Moon, and ultimately, Mars.

[ Northrop Grumman ]

This letter proposes a novel design for a coaxial hexarotor (Y6) with a tilting mechanism that can morph midair while in a hover, changing the flight stage from a horizontal to a vertical orientation, and vice versa, thus allowing wall-perching and wall-climbing maneuvers.

[ KAIST ]

Honda and Black & Veatch have successfully tested the prototype Honda Autonomous Work Vehicle (AWV) at a construction site in New Mexico. During the month-long field test, the second-generation, fully-electric Honda AWV performed a range of functions at a large-scale solar energy construction project, including towing activities and transporting construction materials, water, and other supplies to pre-set destinations within the work site.

[ Honda ]

This could very well be the highest speed multiplier I've ever seen in a robotics video.

[ GITAI ]

Here's an interesting design for a manipulator that can do in-hand manipulation with a minimum of fuss, from the Yale Grablab.

[ Paper ]

That ugo robot that's just a ball with eyes on a stick is one of my favorite robots ever, because it's so unapologetically just a ball on a stick.

[ ugo ]

Robot, make me a sandwich. And then make me a bunch more sandwiches.

[ Soft Robotics ]

Refilling water bottles isn't a very complex task, but having a robot do it means that humans don't have to.

[ Fraunhofer ]

To help manufacturers find cost-effective and sustainable alternatives to single-use plastic, ABB Robotics is collaborating with Zume, a global provider of innovative compostable packaging solutions. We will integrate and install up to 2000 robots at Zume customers' sites worldwide over the next five years to automate the innovative manufacturing production of 100 percent compostable packaging molded from sustainably harvested plant-based material for products from food and groceries to cosmetics and consumer goods.

[ ABB ]



The development of autonomous legged/wheeled robots with the ability to navigate and execute tasks in unstructured environments is a well-known research challenge. In this work we introduce a methodology that permits a hybrid legged/wheeled platform to realize terrain traversing functionalities that are adaptable, extendable, and can be autonomously selected and regulated based on the geometry of the perceived ground and associated obstacles. The proposed methodology makes use of a set of terrain traversing primitive behaviors that are used to perform driving, stepping on, down, and over, and can be adapted based on the ground and obstacle geometry and dimensions. The terrain geometrical properties are first obtained by a perception module, which makes use of point cloud data coming from the LiDAR sensor to segment the terrain in front of the robot, identifying possible gaps or obstacles on the ground. Using these parameters, the selection and adaptation of the most appropriate traversing behavior is made in an autonomous manner. Traversing behaviors can also be serialized in a different order to synthesise more complex terrain crossing plans over paths of diverse geometry. Furthermore, the proposed methodology is easily extendable by incorporating additional primitive traversing behaviors into the robot mobility framework, so that more complex terrain negotiation capabilities can eventually be realized in an add-on fashion. The pipeline of the above methodology was initially implemented and validated in a Gazebo simulation environment. It was then ported to and verified on the CENTAURO robot, enabling the robot to successfully negotiate terrains of diverse geometry and size using the terrain traversing primitives.
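To make the idea of autonomously selected primitives a bit more concrete, here is a hedged sketch, not the authors' implementation, of how a perceived gap or obstacle (geometry roughly like what a LiDAR segmentation step might report) could be mapped to one of the primitive behaviors; the class names and thresholds are invented for illustration.

```python
# Hypothetical sketch of primitive selection from perceived terrain geometry.
from dataclasses import dataclass

@dataclass
class TerrainFeature:
    kind: str       # "flat", "obstacle", or "gap"
    height: float   # obstacle height in meters (negative for a drop)
    length: float   # extent along the direction of travel, in meters

def select_primitive(f: TerrainFeature,
                     max_step_up: float = 0.20,
                     max_step_down: float = 0.25,
                     max_step_over: float = 0.30) -> str:
    """Return one of the primitive behaviors; thresholds are illustrative."""
    if f.kind == "flat":
        return "drive"
    if f.kind == "gap":
        return "step_over" if f.length <= max_step_over else "replan"
    if f.kind == "obstacle":
        if f.height < 0:                      # a drop in the ground
            return "step_down" if -f.height <= max_step_down else "replan"
        if f.length <= max_step_over and f.height <= max_step_up:
            return "step_over"                # short and low enough to clear
        if f.height <= max_step_up:
            return "step_up"                  # too long to clear, low enough to mount
    return "replan"                           # outside the adaptable range

print(select_primitive(TerrainFeature("obstacle", height=0.10, length=0.50)))  # step_up
```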

This paper presents a multi-purpose gripping and incision tool-set to reduce the number of required manipulators for targeted therapeutics delivery in Minimally Invasive Surgery. We have recently proposed the use of multi-arm Concentric Tube Robots (CTR) consisting of an incision, a camera, and a gripper manipulator for deep orbital interventions, with a focus on Optic Nerve Sheath Fenestration (ONSF). The proposed prototype in this research, called Gripe-Needle, is a needle equipped with a sticky suction cup gripper capable of performing both gripping of target tissue and incision tasks in the optic nerve area by exploiting the multi-tube arrangement of a CTR for actuation of the different tool-set units. As a result, there will be no need for an independent gripper arm for an incision task. The CTR innermost tube is equipped with a needle, providing the pathway for drug delivery, and the immediate outer tube is attached to the suction cup, providing the suction pathway. Based on experiments on various materials, we observed that adding a sticky surface with bio-inspired grooves to a normal suction cup gripper has many advantages such as, 1) enhanced adhesion through material stickiness and by air-tightening the contact surface, 2) maintained adhesion despite internal pressure variations, e.g. due to the needle motion, and 3) sliding resistance. Simple Finite Element and theoretical modeling frameworks are proposed, based on which a miniature tool-set is designed to achieve the required gripping forces during ONSF. The final designs were successfully tested for accessing the optic nerve of a realistic eye phantom in a skull eye orbit, robust gripping and incision on units of a plastic bubble wrap sample, and manipulating different tissue types of porcine eye samples.

Soft and continuum robots are transforming medical interventions thanks to their flexibility, miniaturization, and multidirectional movement abilities. Although flexibility enables reaching targets in unstructured and dynamic environments, it also creates challenges for control, especially due to interactions with the anatomy. Thus, in recent years many efforts have been devoted to the development of shape reconstruction methods, with the advancement of different kinematic models, sensors, and imaging techniques. These methods can increase the performance of the control action as well as provide the tip position of robotic manipulators relative to the anatomy. Each method, however, has its advantages and disadvantages and can be worthwhile in different situations. For example, electromagnetic (EM) and Fiber Bragg Grating (FBG) sensor-based shape reconstruction methods can be used in small-scale robots thanks to advantages such as miniaturization, fast response, and high sensitivity. Yet, the problem of electromagnetic interference in the case of EM sensors, and poor response to high strains in the case of FBG sensors, need to be considered. To help the reader make a suitable choice, this paper presents a review of recent progress on shape reconstruction methods, based on a systematic literature search, excluding pure kinematic models. Methods are classified into two categories. First, sensor-based techniques are presented that discuss the use of various sensors such as FBG, EM, and passive stretchable sensors for reconstructing the shape of the robots. Second, imaging-based methods are discussed that utilize images from different imaging systems such as fluoroscopy, endoscopy cameras, and ultrasound for the shape reconstruction process. The applicability, benefits, and limitations of each method are discussed. Finally, the paper draws some promising future directions for the enhancement of shape reconstruction methods by discussing open questions and alternative methods.

The evolving field of human-robot interaction (HRI) necessitates that we better understand how social robots operate and interact with humans. This scoping review provides an overview of about 300 research works focusing on the use of the NAO robot from 2010 to 2020. This study presents one of the most extensive and inclusive pieces of evidence on the deployment of the humanoid NAO robot and its global reach. Unlike most reviews, we provide both qualitative and quantitative results regarding how NAO is being used and what has been achieved so far. We analyzed a wide range of theoretical, empirical, and technical contributions that provide multidimensional insights, such as general trends in terms of application, the robot capabilities, its input and output modalities of communication, and the human-robot interaction experiments that featured NAO (e.g. number and roles of participants, design, and the length of interaction). Lastly, we derive from the review some research gaps in current state-of-the-art and provide suggestions for the design of the next generation of social robots.



I am not a fan of Alexa. Or Google Assistant. Or, really, any Internet-connected camera or microphone whose functionality is based around being in my house and active all of the time. I don't use voice-activated systems, and while having a webcam is necessary, I make sure to physically unplug it from my computer when I'm not using it. Am I being overly paranoid? Probably. But I feel like having a little bit of concern is reasonable, and having that concern constantly at the back of my mind is just not worth what these assistants have so far had to offer.

iRobot CEO Colin Angle disagrees. And last week, iRobot announced that it has "teamed with Amazon to further advance voice-enabled intelligence for home robots." Being skeptical about this whole thing, I asked Angle to talk me into it, and I have to say, he kinda maybe almost did.

Using Alexa, iRobot customers can automate routines, personalize cleaning jobs and control how their home is cleaned. Thanks to interactive Alexa conversations and predictive and proactive recommendations, smart home users can experience a new level of personalization and control for their unique homes, schedules, preferences and devices.

Here are the kinds of things that are new to the Roomba Alexa partnership:

"Roomba, Clean Around the [Object]" – Use Alexa to send your robot to clean a mess right where it happens with precision Clean Zones. Roomba can clean around specific objects that attract the most common messes, like couches, tables and kitchen counters. Simply ask Alexa to "tell Roomba, clean around the couch," and Roomba knows right where to go.

iRobot Scheduling with Alexa voice service – Thanks to Alexa's rich language understanding, customers can have a more natural interaction directing their robot using their voice to schedule cleaning Routines. For example, "Alexa, tell Roomba to clean the kitchen every weeknight at 7 pm," or "Alexa, tell Braava to mop the kitchen every Sunday afternoon."

Alexa Announcements – Alexa can let customers know about their robot's status, like when it needs help or when it has finished a cleaning job, even if your phone isn't nearby.

Alexa Hunches – The best time to clean is when no one is home. If Alexa has a 'hunch' that you're away, Alexa can begin a cleaning job.
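As a rough illustration of what the scheduling commands above might boil down to once the speech has been parsed, here is a hypothetical sketch; none of these names come from the actual iRobot or Alexa APIs.

```python
# Hypothetical representation of a voice-scheduled cleaning routine.
from dataclasses import dataclass
from typing import List

@dataclass
class CleaningRoutine:
    robot: str        # "Roomba" or "Braava"
    action: str       # "clean" or "mop"
    zone: str         # e.g. "kitchen"
    days: List[str]   # e.g. weekdays
    start_time: str   # 24-hour "HH:MM"

# "Alexa, tell Roomba to clean the kitchen every weeknight at 7 pm"
routine = CleaningRoutine(
    robot="Roomba",
    action="clean",
    zone="kitchen",
    days=["Mon", "Tue", "Wed", "Thu", "Fri"],
    start_time="19:00",
)
print(routine)
```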

This kind of voice control matters because Roombas are getting very, very sophisticated. The latest models know more about our homes than ever before, with maps and object recognition and all kinds of complex and intelligent behaviors and scheduling options. iRobot has an app that does its best to simplify the process of getting your Roomba to do exactly what you want it to do, but you still have to be comfortable poking around in the app on a regular basis. This poses a bit of a problem for iRobot, which now has to square all these really cool new capabilities with its original concept for the robot, which I still remember as being best encapsulated by a single button labeled "Clean" in nice big letters.

iRobot believes that voice control is the answer to this. It's fast, it's intuitive, and as long as there's a reliable mapping between what you tell the robot to do and what the robot actually does, it seems like it could be very successful—if, of course, you're fine with having Alexa as a mediator, which I'm not sure I am. But after talking with iRobot CEO Colin Angle, I'm starting to come around.

IEEE Spectrum: I know you've been working on this for a while, but can you talk about how the whole Alexa and Roomba integration thing came about?

Colin Angle: This started back when Alexa first came out. Amazon told us that they asked people, "what should we do with this speaker?" And one of the first things that came up was, "I want to tell my Roomba to clean." It was within the original testing as to what Alexa should do. It certainly took them a while to get there, and took us a while to get there. But it's a very substantial and intuitive thing that we're supposed to be able to do with our robots—use our voice and talk to them. I think almost every robot in film and literature can be talked to. They may not all talk back in any logical way, but they all can listen and respond to voice.

Alexa's "hunches" are a good example of the kind of thing that I don't like about Alexa. Like, what is a hunch, and what does the fact that Alexa can have hunches imply about what it knows about my life that I didn't explicitly tell it?

That's the problem with the term "hunch." It attributes intelligence when what they're trying to do is attribute uncertainty. Amazon is really trying to do the right thing, but naming something "hunch" just invites speculation as to whether there's an AI there that's listening to everything I do and tracking me, when in some way it's tragically simpler than all that—depending on what it's connected to, it can infer periods of inactivity.

There's a question of what should you do and what shouldn't you do with an omnipresent ear, and that requires trust. But in general, Alexa is less creepy the more you understand how it works. And so the term "hunch" is meant to convey uncertainty, but that doesn't help people's confidence.

One of the voice commands you can give is having Alexa ask Roomba to clean around the couch. The word "around" can have different meanings for different people, so how do you know what a user actually wants when they use a term like "around?"

We've had to build these skills using words like around, underneath, beneath, near… All of these different words which convey approximate location. If we clean a little more than you want us to clean, but not a ton more, you're probably not going to be upset. So taking a little bit of superset liberties around how Roomba cleans still yields a satisfying result. There's a certain pragmatism that's required, and it's better to understand more prepositions and have them converge into a carefully designed behavior which the vast majority of people would be okay with, while not requiring a magic incantation where you'd need to go grab your manual so that you can look up what to tell Roomba in order to get it to do the right thing.

This is one of the fascinating challenges—we're trying to build robots into partners, but in general, the full functionality has largely been in the iRobot app. And yet the metaphor of having a partner usually is not passing notes, it's delivering utterances that convey enough meaning that your partner does what they're supposed to do. If you make a mess, and say, "Alexa, tell Roomba to clean up around the kitchen table" without having to use the app, that's actually a pretty rewarding interaction. It's a very natural thing, and you can say many things close to that and have it just work.

Our measure of success would be if I said, "Evan, suck it up, plug in that Alexa, and then, without reading the instructions, convey your will to Roomba to clean your office every Sunday afternoon," or something like that, and see if it works.

Clearly communicating intent using voice is radically more complicated with each additional level of complexity that you're trying to convey. —Colin Angle
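Here's a hedged sketch of the "superset" preposition handling Angle describes, purely as an illustration; the zone representation and margin are invented and this is not iRobot's skill code.

```python
# Illustrative only: many location prepositions converge on one oversized zone.
PREPOSITIONS = {"around", "under", "underneath", "beneath", "near", "by", "next to"}

def zone_for_utterance(obj: str, preposition: str, margin_m: float = 0.5) -> dict:
    """Map '<preposition> the <obj>' to a slightly oversized clean zone."""
    if preposition not in PREPOSITIONS:
        raise ValueError(f"unsupported preposition: {preposition}")
    # Every supported preposition triggers the same behavior: clean the object's
    # footprint expanded by a margin, trading precision for a forgiving result.
    return {"target": obj, "zone": f"{obj} zone", "expand_by_m": margin_m}

print(zone_for_utterance("couch", "around"))
print(zone_for_utterance("kitchen table", "underneath"))
```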

Roomba can now recognize commands that use the word "and," like "clean under the couch and coffee table." I'm wondering how much potential there is to make more sophisticated commands. Things like, "Roomba, clean between the couch and the coffee table," or "Roomba, clean the living room for 10 minutes."

Of the things you said, I would say that we can do the ones that are pragmatic. You couldn't say "clean between these two places;" I suppose we might know enough to try to figure that out because we know where those two areas are and we could craft the location, but that's not a normal everyday use case because people make messes under or near things rather than between things. With precise and approximate scheduling, we should be able to handle that, because that's something people are likely to say. From a design perspective, it has to do with listening intently to how customers like to talk about tasking Roomba, and making sure that our skill is sufficiently literate to reasonably precisely do the right thing.

Do these voice commands really feel like talking to Roomba, or does it feel more like talking to Alexa, and how important is that distinction?

Unfortunately, the metaphor is that you're talking to Alexa who is talking to Roomba. We like the fact that people personify Roomba. If you don't yet own a Roomba, it's kind of a creepy thing to go around saying, because it's a vacuum cleaner, not a friend. But the experience of owning a Roomba is supposed to feel like you have a partner. And this idea that you have to talk to your helper through an intermediary is the price that we pay, which in my mind diminishes that partnership a little bit in pursuit of iRobot not having to build and maintain our own speakers and voice system. I think both Amazon and Google played around with the idea of a direct connection, and decided that enforcing that metaphor of having the speaker as an intermediary simplifies how people interact with it. And so that's a business decision on their side. For us, if it was an option, I would say direct connection every time, because I think it elevates the feeling of partnership between the person and the robot.

From a human-robot interaction (HRI) perspective, do you think it would be risky to allow users to talk directly to their Roomba, in case their expectations for how their robot should sound or what it might say don't match the reality that's constrained by practical voice interaction decisions that iRobot will have to make?

I think the benefits outweigh the risks. For example, if you don't like the voice, you should be able to change the voice, and hopefully you can find something that is close enough to your mental model that you can learn to live with it. If the question is whether talking directly to Roomba creates a higher expectation of intelligence than talking through a third party, I would say it does, but is it night and day? With this announcement we're making the strong statement that we think that for most of the things that you're going to want Roomba to do, we have enabled them broadly with voice. Your Roomba is not going to know the score of the baseball game, but if you ask it about what it's supposed to be able to do, you're going to have a good experience.

Coming from the background that you have and being involved in developing Roomba from the very beginning, now that you're having to work through voice interactions and HRI and things like that, do you miss the days where the problems were power cords and deep carpet and basic navigation?

Honestly, I've been waiting to tackle problems that we're currently tackling. If I have to tackle another hair entrainment problem, I would scream! I mean, to some extent, here we are, 31 years in, and I'm getting to the good stuff, because I think that the promise of robots is as much about the interaction as it is around the physical hardware. In fact, ever since I was in college I was playing around with hardware because the software sucked and was insanely hard and not going to do what I wanted it to do. All of my early attempts at voice interaction were spectacular failures. And yet, I kept going back to voice because, well, you're supposed to be able to talk to your robot.

Voice is kind of the great point of integration if it can be done well enough. And if you can leave your phone in your pocket and get up from your meal, look down, see you made a mess and just say, "hey Roomba, the kitchen table looks messy," which you can, that's progress. That's one way of breaking this ceiling of control complexity that must be shattered because the smart home isn't smart today and only does a tiny percentage of what it needs to do.



In this paper, we present a study aimed at understanding whether the embodiment and humanlikeness of an artificial agent can affect people’s spontaneous and instructed mimicry of its facial expressions. The study followed a mixed experimental design and revolved around an emotion recognition task. Participants were randomly assigned to one level of humanlikeness (between-subject variable: humanlike, characterlike, or morph facial texture of the artificial agents) and observed the facial expressions displayed by three artificial agents differing in embodiment (within-subject variable: video-recorded robot, physical robot, and virtual agent) and a human (control). To study both spontaneous and instructed facial mimicry, we divided the experimental sessions into two phases. In the first phase, we asked participants to observe and recognize the emotions displayed by the agents. In the second phase, we asked them to look at the agents’ facial expressions, replicate their dynamics as closely as possible, and then identify the observed emotions. In both cases, we assessed participants’ facial expressions with an automated Action Unit (AU) intensity detector. Contrary to our hypotheses, our results disclose that the agent that was perceived as the least uncanny, and most anthropomorphic, likable, and co-present, was the one spontaneously mimicked the least. Moreover, they show that instructed facial mimicry negatively predicts spontaneous facial mimicry. Further exploratory analyses revealed that spontaneous facial mimicry appeared when participants were less certain of the emotion they recognized. Hence, we postulate that an emotion recognition goal can flip the social value of facial mimicry as it transforms a likable artificial agent into a distractor. Further work is needed to corroborate this hypothesis. Nevertheless, our findings shed light on the functioning of human-agent and human-robot mimicry in emotion recognition tasks and help us to unravel the relationship between facial mimicry, liking, and rapport.

The soft robotics community is currently wondering what the future of soft robotics is. Therefore, it is very important to identify the directions in which the community should focus its efforts to consolidate its impact. The identification of convincing applications is a priority, especially to demonstrate that some achievements already represent an attractive alternative to current technological approaches in specific scenarios. However, most of the added value of soft robotics has been only theoretically grasped. Embodied Intelligence, being one of these theoretical principles, represents an interesting approach to fully exploit soft robotics' potential, but a pragmatic application of this theory still remains difficult and very limited. A different design approach could be beneficial, i.e., the integration of a certain degree of continuous adaptability in the hardware functionalities of the robot, namely, a "flexible" design enabled by hardware components able to fulfill multiple functionalities. In this paper this concept of flexible design is introduced along with its main technological and theoretical basic elements. The potential of the approach is demonstrated through a biological comparison and the feasibility is supported by practical examples with state-of-the-art technologies.

Recently, efforts have been made to add programming activities to the curriculum that promote computational thinking and foster 21st-century digital skills. One of the programming modalities is the use of Tangible Programming Languages (TPL), used in activities with children aged 4 and up. In this review, we analyze solutions proposed for TPL in different contexts, crossing them with non-TPL solutions, like Graphical Programming Languages (GPL). We start by characterizing features of language interaction, their use, and what learning activities are associated with them. Then, in a diagram, we show a relation between the complexity of the languages and factors such as target age and output device types. We provide an analysis considering the type of input (e.g., TPL versus GPL) and output devices (e.g., physical robot versus graphical simulation) and evaluate their contribution to further insights about the general trends with respect to educational robotic systems. Finally, we discuss the opportunities to extend and improve TPLs based on the different solutions identified.

In the context of keyhole surgery, and more particularly of uterine biopsy, the fine automatic movements of a surgical instrument held by a robot with 3 active DOFs require exact knowledge of the instrument’s point of rotation. However, this center of rotation is not fixed and moves during an examination. This paper presents a new method for detecting and updating the interaction matrix linking the movements of the robot to those of the surgical instrument. It is based on the Broyden method for updating the Jacobian matrix, and it takes body-tissue deformations into account in real time in order to improve the pointing task for automatic movements of a surgical instrument in an unknown environment.
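
The Broyden correction at the heart of such a scheme is compact. Below is a minimal Python sketch (hypothetical variable names, not the authors' implementation) of the rank-one update applied to an estimated interaction matrix after each observed motion:

import numpy as np

def broyden_update(J, dx, dy):
    """Rank-one Broyden update of an estimated interaction (Jacobian) matrix.

    J  : current estimate mapping a robot motion dx to an instrument motion dy
    dx : observed change in robot configuration (n-vector)
    dy : observed change in instrument pose (m-vector)
    """
    dx = np.asarray(dx, dtype=float).reshape(-1, 1)
    dy = np.asarray(dy, dtype=float).reshape(-1, 1)
    # Adjust J as little as possible so that J_new @ dx equals dy.
    return J + ((dy - J @ dx) @ dx.T) / float(dx.T @ dx)

After each small automatic motion, the observed (dx, dy) pair refreshes the matrix, which is what lets an update of this kind track a moving center of rotation without an explicit tissue model.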



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2022 – May 23-27, 2022 – Philadelphia, PA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Telexistence and FamilyMart have introduced TX SCARA, a new robot equipped with TX's proprietary AI system Gordon, to the FamilyMart METI store, where it performs beverage replenishment work in the back room 24 hours a day in place of human workers, automating high-volume work in a low-temperature environment where the physical load on store staff is significant.

[ Telexistence ]

It would be a lot easier to build a drone if you didn't have to worry about take-offs or landings, and DARPA's Gremlins program has been making tangible progress towards midair drone recovery.

[ DARPA ]

At Cobionix, we are developing Cobi, a multi-sensing, intelligent cobot that can not only work safely alongside humans but also learn from them and become smarter over time. In this video, we showcase one of the applications in which Cobi is being utilized: needle-less robotic intramuscular injection.

[ Cobionix ] via [ Gizmodo ]

It's been just a little bit too long since we've had a high-quality cat-on-a-Roomba video.

[ YouTube ]

Scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in the ever-present quest to get machines to replicate human abilities, created a more scaled-up system: one that can reorient over two thousand different objects, with the robotic hand facing both upwards and downwards. This ability to manipulate anything from a cup to a tuna can to a Cheez-It box could help the hand quickly pick and place objects in specific ways and locations, and even generalize to unseen objects.

[ MIT CSAIL ]

NASA is sending a couple of robots to Venus in 2029! Not the kind with legs or wheels, but still.

[ NASA ]

The Environmental Genomics & Systems Biology division at Berkeley Lab has built a robot, called the EcoBOT, that is able to perform “self-driving experiments.”

[ EcoBOT ]

Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences have developed a new approach in which robotic exosuit assistance can be calibrated to an individual and adapt to a variety of real-world walking tasks in a matter of seconds. The bioinspired system uses ultrasound measurements of muscle dynamics to develop a personalized and activity-specific assistance profile for users of the exosuit.

[ Harvard Wyss ]

We propose a gecko-inspired robot with an optimal bendable body structure. The robot's leg and body movements are driven by central pattern generator (CPG)-based neural control. It can climb using a combination of a trot gait and lateral undulation of the bendable body with a C-shaped standing wave. This approach results in 52% and 54% less energy consumption during climbing on inclined solid and soft surfaces, respectively, compared to climbing with a fixed body. The study thus provides a basis for developing sprawling-posture robots that use a bendable body and neural control for energy-efficient climbing of inclined surfaces, with a possible extension towards agile and versatile locomotion.

[ Paper ]

Thanks Poramate!
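
For readers curious about the CPG-based neural control mentioned above, here is a minimal Python sketch of a two-neuron SO(2)-style oscillator of the kind commonly used to generate rhythmic leg signals; the parameter values and the output mapping are illustrative assumptions, not the controller from the paper.

import numpy as np

def so2_cpg(steps=500, phi=0.06, alpha=1.01):
    """Minimal two-neuron SO(2) oscillator, a common CPG building block.

    phi   : sets the oscillation frequency (illustrative value)
    alpha : gain slightly above 1 keeps the oscillation alive
    """
    # The weight matrix is a scaled 2D rotation, hence the name SO(2).
    W = alpha * np.array([[np.cos(phi),  np.sin(phi)],
                          [-np.sin(phi), np.cos(phi)]])
    o = np.array([0.2, 0.0])              # small initial activation
    outputs = []
    for _ in range(steps):
        o = np.tanh(W @ o)                # discrete-time neural update
        outputs.append(o.copy())
    return np.array(outputs)              # two phase-shifted rhythmic signals

The two outputs could drive, for example, swing and lift joints of each leg; a trot emerges when diagonal leg pairs receive the same signal in antiphase.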

The new Mavic 3 from DJI looks very impressive, especially that 46-minute battery life.

[ DJI ]

Sonia Roberts, an experimentalist at heart and a PhD researcher with Kod*lab, a legged robotics group within the GRASP Lab at Penn Engineering, takes us inside her scientific process. How can a robot's controllers help it use less energy as it runs on sand?

[ KodLab ]

The Canadian Space Agency is preparing for a Canadian rover to explore a polar region of the Moon within the next five years. Two Canadian companies, MDA and Canadensys, have been selected to design lunar rover concepts.

[ CSA ]

Our Boeing Australia team has expanded its flight-test program of the Boeing Airpower Teaming System, with two aircraft successfully completing separate flight missions at the Woomera Range Complex recently.

[ Boeing ]

I do not understand what the Campaign to Stop Killer Robots folks are trying to tell me here, and also, those colors make my eyeballs scream.

[ Campaign to Stop Killer Robots ]

No doorbell? Nothing that some Dynamixels and a tongue drum can't fix.

[ YouTube ]

We present an integrated system for performing precision harvesting missions using a legged harvester (HEAP) in a confined, GPS-denied forest environment.

[ Paper ]

This video demonstrates some of the results from a scientific deployment to the Chernobyl NPP in September 2021, led by the University of Bristol.

[ University of Bristol ]

This is a bottle unscrambler. I don't know why that's what it's called, because the bottles don't seem scrambled. But it's unscrambling them anyway.

[ B&R ]

We invite you to hear from the leadership of Team Explorer, the CMU DARPA Subterranean Challenge team, as they discuss the challenges, the lessons learned, and where these technologies are headed.

[ AirLab ]



The rise of soft robotics opens new opportunities in endoscopy and minimally invasive surgery. Pneumatic catheters offer a promising alternative to conventional steerable catheters for safe navigation through the natural pathways without tissue injury. In this work, we present an optimized 6 mm diameter two-degree-of-freedom pneumatic actuator, able to bend in every direction and incorporating a 1 mm working channel. A versatile vacuum centrifugal overmolding method capable of producing small geometries with a variety of silicones is described, and meter-long actuators are extruded industrially. An improved method for fiber reinforcement is also presented. The actuator achieves bending of more than 180° and curvatures of up to 0.1 mm⁻¹. The exerted force remains below 100 mN, and with no rigid parts in the design, the risk of damage to surrounding tissues is limited. The response time of the actuator is below 300 ms and is therefore not a limitation for medical applications. The workspace and multi-channel actuation are also experimentally characterized. The focus is on the influence of material stiffness on mechanical performance: as a rule, the softer the material, the better the energy conversion, and the stiffer the material, the larger the force developed at a given curvature. Based on the actuator, a 90 cm long steerable catheter demonstrator carrying an optical fiber is developed, and its potential for endoscopy is demonstrated in a bronchial tree phantom. In conclusion, this work contributes to the development of a toolbox of soft robotic solutions for MIS and endoscopic applications by validating and characterizing a promising design, describing versatile and scalable fabrication methods, clarifying the influence of material stiffness on the actuator's capabilities, and demonstrating the usability of the solution in a potential use case.
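
As a rough plausibility check on those numbers (a back-of-the-envelope sketch under the usual constant-curvature assumption, not the paper's analysis), the bend angle of a segment is simply its curvature times its arc length, so a 180° bend at the reported 0.1 mm⁻¹ maximum curvature needs only about 31 mm of actuator:

import math

# Constant-curvature bending: theta [rad] = kappa [1/mm] * L [mm]
kappa_max = 0.1                  # reported maximum curvature, 1/mm
theta_target = math.pi           # 180 degrees in radians
L_required = theta_target / kappa_max
print(f"Length for a 180 deg bend: {L_required:.1f} mm")   # ~31.4 mm

That is far shorter than the 90 cm demonstrator, which is consistent with the actuator bending by more than 180° over its full length.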
