Feed aggregator

It’s going to be a very, very long time before robots come anywhere close to matching the power-efficient mobility of animals, especially at small scales. Lots of folks are working on making tiny robots, but another option is to just hijack animals directly, by turning them into cyborgs. We’ve seen this sort of thing before with beetles, but there are many other animals out there that can be cyborgized. Researchers at Stanford and Caltech are giving sea jellies a try, and remarkably, it seems as though cyborg enhancements actually make the jellies more capable than they were before.

Usually, co-opting the mobility system of an animal with electronics doesn’t improve things for the animal, because we’re not nearly as good at controlling animals as they are at controlling themselves. But when you look at animals with very simple control systems, like sea jellies, it turns out that with some carefully targeted stimulation, they can move faster and more efficiently than they do naturally.

The researchers, Nicole W. Xu and John O. Dabiri, chose a friendly sort of sea jelly called Aurelia aurita, which is “an oblate species of jellyfish comprising a flexible mesogleal bell and monolayer of coronal and radial muscles that line the subumbrellar surface,” so there you go. To swim, jellies actuate the muscles in their bells, which squeeze water out and propel them forwards. These muscle contractions are controlled by a relatively simple stimulus of the jelly’s nervous system that can be replicated through external electrical impulses. 

To turn the sea jellies into cyborgs, the researchers developed an implant consisting of a battery, microelectronics, and bits of cork and stainless steel to make things neutrally buoyant, plus a wooden pin, which was used to gently impale each jelly through the bell to hold everything in place. While non-cyborg jellies tended to swim with a bell contraction frequency of 0.25 Hz, the implant allowed the researchers to crank the cyborg jellies up to a swimming frequency of 1 Hz.
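
For a sense of what driving a bell contraction frequency looks like in practice, here is a minimal Python sketch of a fixed-frequency pulse timing loop. It is purely illustrative: the paper does not publish the implant's firmware, and drive_electrodes() is a hypothetical placeholder for the actual stimulation hardware and pulse parameters.

```python
import time

# Toy timing loop for the kind of swim controller described above: fire a short
# stimulation pulse once per period at a commanded bell-contraction frequency.
# drive_electrodes() is a hypothetical stand-in; the real implant's pulse
# parameters and electronics are not reproduced here.

def drive_electrodes(duration_s: float) -> None:
    """Hypothetical stand-in for energizing the stimulation electrodes."""
    time.sleep(duration_s)

def stimulate(frequency_hz: float, pulse_width_s: float = 0.01, n_cycles: int = 10) -> None:
    period_s = 1.0 / frequency_hz
    for _ in range(n_cycles):
        drive_electrodes(pulse_width_s)                   # brief pulse triggers one contraction
        time.sleep(max(period_s - pulse_width_s, 0.0))    # wait out the rest of the cycle

stimulate(frequency_hz=1.0)   # cyborg pace; unstimulated jellies swim at roughly 0.25 Hz
```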

Peak speed was achieved at 0.62 Hz, resulting in the jellies traveling at nearly half a body diameter per second (4-6 centimeters per second), which is 2.8x their typical speed. More importantly, calculating the cost of transport for the jellies showed that the 2.8x increase in speed came with only a 2x increase in metabolic cost, meaning that the cyborg sea jelly is both faster and more efficient.
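
The efficiency claim is easy to sanity-check using just the ratios quoted above, treating cost of transport as metabolic power divided by speed (i.e., energy per distance traveled). The sketch below uses only those ratios, not the paper's absolute measurements.

```python
# Back-of-the-envelope check of the cyborg jelly efficiency claim.
# Cost of transport (COT) ~ metabolic power / speed (energy per distance);
# the numbers below are just the ratios quoted in the article.

baseline_speed = 1.0        # arbitrary units (natural swimming at ~0.25 Hz)
baseline_power = 1.0        # arbitrary units (natural metabolic rate)

cyborg_speed = 2.8 * baseline_speed   # ~2.8x faster at 0.62 Hz stimulation
cyborg_power = 2.0 * baseline_power   # ~2x the metabolic cost

cot_baseline = baseline_power / baseline_speed
cot_cyborg = cyborg_power / cyborg_speed

print(f"Relative cost of transport: {cot_cyborg / cot_baseline:.2f}")
# -> 0.71, i.e. the stimulated jelly spends ~30% less energy per unit distance.
```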

This is a little bit weird from an evolutionary standpoint—if a sea jelly has the ability to move faster, and moving faster is more efficient for it, then why doesn’t it just move faster all the time? The researchers think it may have something to do with feeding:

A possible explanation for the existence of more proficient and efficient swimming at nonnatural bell contraction frequencies stems from the multipurpose function of vortices shed during swimming. Vortex formation serves not only for locomotion but also to enable filter feeding and reproduction. There may therefore be no evolutionary pressure for A. aurita to use its full propulsive capabilities in nature, and there is apparently no significant cost associated with maintaining those capabilities in a dormant state, although higher speeds might limit the animals’ ability to feed as effectively.

Image: Science Advances

Sea jelly with a swim controller implant consisting of a battery, microelectronics, electrodes, and bits of cork and stainless steel to make things neutrally buoyant. The implant includes a wooden pin that is gently inserted through the jelly’s bell to hold everything in place, with electrodes embedded into the muscle and mesogleal tissue near the bell margin.

The really nice thing about relying on cyborgs instead of robots is that many of the advantages of a living organism are preserved. A cyborg sea jelly is perfectly capable of refueling itself as well as making any necessary repairs to its structure and function. And with an energy efficiency anywhere from 10 to 1,000 times better than that of existing swimming robots, adding a control system and a couple of sensors could potentially lead to a useful biohybrid monitoring system.

Lastly, in case you’re concerned about the welfare of the sea jellies, which I definitely was, the researchers did try to keep them mostly healthy and happy (or at least as happy as an invertebrate with no central nervous system can be), despite stabbing them through the bell with a wooden pin. They were all allowed to take naps (or the sea jelly equivalent) in between experiments, and the bell piercing would heal up after just a couple of days. All animals recovered post-experiments, the researchers say, although a few had “bell deformities” from being cooped up in a rectangular fish tank for too long rather than being returned to their jelliquarium. Also, jelliquariums are a thing and I want one.

You may have noticed that over the course of this article, I have been passive-aggressively using the term “sea jelly” rather than “jellyfish.” This is because jellyfish are not fish at all—you are more closely related to a fish than a jellyfish is, which is why “sea jelly” is the more accurate term that will make marine biologists happy. And just as jellyfish should properly be called sea jellies, starfish should be called sea stars, and cuttlefish should be called sea cuttles. The last one is totally legit, don’t even question it.

“Low-power microelectronics embedded in live jellyfish enhance propulsion,” by Nicole W. Xu and John O. Dabiri from Stanford University and Caltech, is published in Science Advances.

[ Science Advances ]

When the going gets tough, future soft robots may break into a sweat to keep from overheating, much like marathon runners or ancient hunters chasing prey in the savannah, a new study finds.

Whereas conventional robots are made of rigid parts vulnerable to bumps, scrapes, twists, and falls, soft robots inspired by starfish, worms, and octopuses can resist many such kinds of damage and squirm past obstacles. Soft robots are also often cheaper and simpler to make, comparatively lightweight, and safer for people to be around.

However, the rubbery materials that make up soft robots often trap heat, exacerbating problems caused by overheating. Moreover, conventional devices used to control heat, such as radiators and fans, are typically made of rigid materials that are incompatible with soft robotics, says T.J. Wallin, a co-author and research scientist at Facebook Reality Labs.

To solve this problem, scientists decided to build robots that could sweat. "It turns out that the ability to perspire is one of the most remarkable features of humans," Wallin says. "We're not the fastest animals, but early humans found success as persistence hunters—the combination of sweating, relative hairlessness, and upright bipedal gait enabled us to physically exhaust our prey over prolonged chases."

"An elite marathon runner in the right conditions has been known to lose almost four liters of sweat an hour—this corresponds to roughly 2.5 kilowatts of cooling capacity," Wallin says. "To put that in perspective, refrigerators only use approximately 1 kilowatt-hour of energy. So as is often the case, biology provided an excellent guide for us engineers."

The researchers 3D-printed soft robot fingers that were hollow like balloons. These could bend or straighten to grip or drop objects, depending on the level of water pressure within each finger.

The robot fingers were each made of two different kinds of soft, flexible resin. The body of each finger was made of a resin that shrank when heated above 40 degrees C, whereas the back of each finger was capped with a resin that expanded when heated above 30 degrees C.

The back of each finger was also dotted with microscopic pores. At temperatures cooler than 30 degrees C, these pores remained closed. However, at higher temperatures, the material on the back of each finger expanded, dilating the pores and letting the water in each finger sweat out. Moreover, as the heat rose, the material that made up the body of each finger shrank, helping squeeze out water.
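
To make the passive mechanism concrete, here is a toy Python model of the two-resin behavior described above. The 30 and 40 degrees C transition points come from the article; the linear response curves and flow rates are invented purely for illustration.

```python
# Toy model of the passive "sweating" behavior: pore opening and water release
# depend only on local temperature. Transition points (30 C and 40 C) follow the
# article; the response curves below are made-up step/ramp functions.

def pore_open_fraction(temp_c: float) -> float:
    """Back-cap resin expands above ~30 C, dilating the micropores."""
    return 0.0 if temp_c < 30.0 else min((temp_c - 30.0) / 10.0, 1.0)

def body_squeeze_fraction(temp_c: float) -> float:
    """Body resin shrinks above ~40 C, squeezing water toward the pores."""
    return 0.0 if temp_c < 40.0 else min((temp_c - 40.0) / 10.0, 1.0)

def sweat_rate(temp_c: float, max_rate_ml_per_min: float = 1.0) -> float:
    # Sweating requires open pores; shrinkage of the body adds extra driving pressure.
    return max_rate_ml_per_min * pore_open_fraction(temp_c) * (0.5 + 0.5 * body_squeeze_fraction(temp_c))

for t in (25, 32, 38, 45):
    print(f"{t} C -> {sweat_rate(t):.2f} mL/min")
```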

"The best part of this synthetic strategy is that the thermoregulatory performance is baked into the material itself. We did not need to add sensors or other components to control the sweating rate—when the local temperature rose above the transition point, the pores would simply open and close on their own," Wallin says.

When exposed to wind from a fan, the sweaty fingers cooled off by about 39 degrees C per minute, or roughly six times faster than their dry counterparts. The sweaty fingers' cooling capacity (about 107 watts per kilogram) also greatly exceeded the best cooling performance seen in animals (about 35 watts per kilogram, as seen in horses and humans), the scientists add.

"Much like in biology, where we have to manage internal heat through perspiring skin, we took inspiration and created sweat for high cooling power," says Robert Shepherd, a co-author and mechanical engineer at Cornell University.

"I think in order for the robot to operate with the sweating we have created, it would also have to be able to drink." —Robert Shepherd, Cornell University

Although sweat could make robot fingers more slippery, the researchers could design robot skin that wrinkles just like human fingers do when they get wet in order "to enhance gripping," Shepherd says.

Chemicals might also get added to robot sweat to remove contaminants from whatever they are touching, coat the surfaces of robots with a protective layer, or dissolve something they are touching. And the robot could then suck in whatever substance got dissolved to analyze it, Shepherd adds.

One problem these robot fingers experienced was that sweating reduced the pressure within them, impairing their mobility. Future versions could separate the water networks responsible for sweating and mobility, at the expense of greater complexity, Wallin says.

There is also currently no way for sweating robots to replenish the water they lose. "The answer is right in front of me—I'm drinking some coffee right now," Shepherd says. "I think in order for the robot to operate with the sweating we have created, it would also have to be able to drink."

Another drawback that artificial perspiration might face is that it would likely not help much when the sweat cannot evaporate to cool robots, such as when the machines are underwater or when the air is very humid. "However, I would like to point out other heat transfer strategies, such as conduction, convection, and radiation, are ineffective at lowering the temperature of the body when it is below that of the environment, whereas sweating and evaporative water loss can do that," Wallin says. "In some ways it's a trade-off, but we feel it is an important benefit."

The scientists detailed their findings online on 29 January in the journal Science Robotics.

Two years ago, we wrote about an AI startup from UC Berkeley and OpenAI called Embodied Intelligence, founded by robot laundry-folding expert Pieter Abbeel. What exactly Embodied was going to do wasn’t entirely clear, and honestly, it seemed like Embodied itself didn’t really know—they talked about “building technology that enables existing robot hardware to handle a much wider range of tasks where existing solutions break down,” and gave some examples of how that might be applied (including in manufacturing and logistics), but nothing more concrete.

Since then, a few things have happened. Thing one is that Embodied is now Covariant.ai. Thing two is that Covariant.ai spent almost a year talking with literally hundreds of different companies about how smarter robots could potentially make a difference for them. These companies represent sectors that include electronics manufacturing, car manufacturing, textiles, bio labs, construction, farming, hotels, elder care—“pretty much anything you could think about where maybe a robot could be helpful,” Pieter Abbeel tells us. “Over time, it became clear to us that manufacturing and logistics are the two spaces where there’s most demand now, and logistics especially is just hurting really hard for more automation.” And the really hard part of logistics is what Covariant decided to tackle.

There’s already a huge amount of automation in logistics, but as Abbeel explains, in warehouses there are two separate categories that need automation: “The things that people do with their legs and the things that people do with their hands.” The leg automation has largely been taken care of over the last five or 10 years through a mixture of conveyor systems, mobile retrieval systems, Kiva-like mobile shelving, and other mobile robots. “The pressure now is on the hand part,” Abbeel says. “It’s about how to be more efficient with things that are done in warehouses with human hands.”

A huge chunk of human-hand tasks in warehouses comes down to picking. That is, taking products out of one box and putting them into another box. In the logistics industry, the boxes are usually called totes, and each individual kind of product is referred to by its stock keeping unit number, or SKU. Big warehouses can have anywhere from thousands to millions of SKUs, which poses an enormous challenge to automated systems. As a result, most existing automated picking systems in warehouses are fairly limited. Either they’re specifically designed to pick a particular class of things, or they have to be trained to recognize more or less every individual thing you want them to pick. Obviously, in warehouses with millions of different SKUs, traditional methods of recognizing or modeling specific objects are not only impractical in the short term but also virtually impossible to scale.

This is why humans are still used in picking—we have the ability to generalize. We can look at an object and understand how to pick it up because we have a lifetime of experience with object recognition and manipulation. We’re incredibly good at it, and robots aren’t. “From the very beginning, our vision was to ultimately work on very general robotic manipulation tasks,” says Abbeel. “The way automation’s going to expand is going to be robots that are capable of seeing what’s around them, adapting to what’s around them, and learning things on the fly.”

Covariant is tackling this with relatively simple hardware, including an off-the-shelf industrial arm (which can be just about any arm), a suction gripper (more on that later), and a straightforward 2D camera system that doesn’t rely on lasers or pattern projection or anything like that. What couples the vision system to the suction gripper is one single (and very, very large) neural network, which is what helps Covariant to be cost effective for customers. “We can’t have specialized networks,” says Abbeel. “It has to be a single network able to handle any kind of SKU, any kind of picking station. In terms of being able to understand what’s happening and what’s the right thing to do, that’s all unified. We call it Covariant Brain, and it’s obviously not a human brain, but it’s the same notion that a single neural network can do it all.”

We can talk about the challenges of putting picking robots in warehouses all day, but the reason Covariant is making this announcement now is that its system has been up and running reliably and cost effectively in a real warehouse in Germany for the past four months.

This video shows Covariant’s robotic picking system operating (for over an hour at 10x speed) in a warehouse that handles logistics for a company called Obeta, which overnights orders of electrical supplies to electricians in Germany. The robot’s job is to pick items from bulk storage totes, and add them to individual order boxes for shipping. The warehouse is managed by an automated logistics company called KNAPP, which is Covariant’s first partner. “We were searching a long time for the right partner,” says Peter Puchwein, vice president of innovation at KNAPP. “We looked at every solution out there. Covariant is the only one that’s ready for real production.” He explains that Covariant’s AI is able to detect glossy, shiny, and reflective products, including products in plastic bags. “The product range is nearly unlimited, and the robotic picking station has the same or better performance than humans.”

The key to being able to pick such a wide range of products so reliably, explains Abbeel, is being able to generalize. “Our system generalizes to items it’s never seen before. Being able to look at a scene and understand how to interact with individual items in a tote, including items it’s never seen before—humans can do this, and that’s essentially generalized intelligence,” he says. “This generalized understanding of what’s in a bin is really key to success. That’s the difference between a traditional system where you would catalog everything ahead of time and try to recognize everything in the catalog, versus fast-moving warehouses where you have many SKUs and they’re always changing. That’s the core of the intelligence that we’re building.”

To be sure, the details of how Covariant’s technology works are still vague, but we tried to extract some more specifics from Abbeel, particularly about the machine learning components. Here’s the rest of our conversation with him:

IEEE Spectrum: How was your system trained initially?

Pieter Abbeel: We would get a lot of data on what kind of SKUs our customer has, get similar SKUs in our headquarters, and just train, train, train on those SKUs. But it’s not just a matter of getting more data. Actually, often there’s a clear limit on a neural net where it’s saturating. Like, we give it more data and more data, but it’s not doing any better, so clearly the neural net doesn’t have the capacity to learn about these new missing pieces. And then the question is, what can we do to re-architect it to learn about this aspect or that aspect that it’s clearly missing out on?

You’ve done a lot of work on sim2real transfer—did you end up using a bajillion simulated arms in this training, or did you have to rely on real-world training?

We found that you need to use both. You need to work both in simulation and the real world to get things to work. And as you’re continually trying to improve your system, you need a whole different kind of testing: You need traditional software unit tests, but you also need to run things in simulation, you need to run it on a real robot, and you need to also be able to test it in the actual facility. It’s a lot more levels of testing when you’re dealing with real physical systems, and those tests require a lot of time and effort to put in place because you may think you’re improving something, but you have to make sure that it’s actually being improved.

What happens if you need to train your system for a totally new class of items?

The first thing we do is we just put new things in front of our robot and see what happens, and often it’ll just work. Our system has few-shot adaptation, meaning that on-the-fly, without us doing anything, when it doesn’t succeed it’ll update its understanding of the scene and try some new things. That makes it a lot more robust in many ways, because if anything noisy or weird happens, or there’s something a little bit new but not that new, you might do a second or third attempt and try some new things.

But of course, there are going to be scenarios where the SKU set is so different from anything it’s been trained on so far that some things are not going to work, and we’ll have to just collect a bunch of new data—what does the robot need to understand about these types of SKUs, how to approach them, how to pick them up. We can use imitation learning, or the robot can try on its own, because with suction, it’s actually not too hard to detect if a robot succeeds or fails. You can get a reward signal for reinforcement learning. But you don’t want to just use RL, because RL is notorious for taking a long time, so we bootstrap it off some imitation and then from there, RL can complete everything. 
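
To illustrate the recipe Abbeel sketches here—bootstrap with imitation learning, then refine with reinforcement learning using suction success as a binary reward—here is a minimal PyTorch sketch. It is emphatically not Covariant’s system: the network size, the feature representation, and the pick_succeeded() stub are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Minimal sketch of "imitation to bootstrap, then RL with a suction-success
# reward." Everything concrete here (sizes, features, success check) is assumed.

policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))  # features -> (x, y) grasp point
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def pick_succeeded(action: torch.Tensor) -> float:
    """Hypothetical stand-in: the real system would read suction pressure."""
    return 1.0 if float(action.norm()) < 1.0 else 0.0

def imitation_step(features: torch.Tensor, demo_grasp: torch.Tensor) -> None:
    """Behavior cloning: regress toward a demonstrated grasp point."""
    loss = nn.functional.mse_loss(policy(features), demo_grasp)
    opt.zero_grad(); loss.backward(); opt.step()

def rl_step(features: torch.Tensor, noise_std: float = 0.05) -> None:
    """REINFORCE-style update: suction success/failure is the reward signal."""
    mean = policy(features)
    action = mean + noise_std * torch.randn_like(mean)            # explore around the policy
    reward = pick_succeeded(action)                               # 1.0 if the pick held, else 0.0
    log_prob = -((action.detach() - mean) ** 2).sum() / (2 * noise_std ** 2)
    loss = -reward * log_prob                                     # reinforce rewarded actions
    opt.zero_grad(); loss.backward(); opt.step()

features = torch.rand(64)
imitation_step(features, demo_grasp=torch.tensor([0.2, 0.7]))
rl_step(features)
```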

Why did you choose a suction gripper?

What’s currently deployed is the suction gripper, because we knew it was going to do the job in this deployment, but if you think about it from a technological point of view, we also actually have a single neural net that uses different grippers. I can’t say exactly how it’s done, but at a high level, your robot is going to take an action based on visual input, but also based on the gripper that’s attached to it, and you can also represent a gripper visually in some way, like a pattern of where the suction cups are. And so, we can condition a single neural network on both what it sees and the end-effector it has available. This makes it possible to hot-swap grippers if you want to. You lose some time, so you don’t want to swap too often, but you could swap between a suction gripper and a parallel gripper, because the same neural network can use different gripping strategies.

And I would say this is a very common thread in everything we do. We really wanted to be a single, general system that can share all its learnings across different modalities, whether it’s SKUs, end of arm tools, different bins you pick from, or other things that might be different. The expertise should all be sharable.
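
Here is a hedged sketch of the gripper-conditioning idea Abbeel describes: a single network that consumes both the camera image and an image-like encoding of the end-effector (say, a map of where the suction cups sit), so the same weights can propose grasps for different grippers. The architecture below is an illustrative guess, not Covariant’s actual network.

```python
import torch
import torch.nn as nn

# Sketch of conditioning one network on both the scene and the end-effector.
# The gripper is represented as an extra image channel; all sizes are assumptions.

class GripperConditionedPicker(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                    # image + gripper map stacked as channels
            nn.Conv2d(3 + 1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)                     # e.g. (x, y, grasp quality)

    def forward(self, image, gripper_map):
        x = torch.cat([image, gripper_map], dim=1)
        return self.head(self.encoder(x))

model = GripperConditionedPicker()
image = torch.rand(1, 3, 128, 128)                       # RGB view of the tote
suction_map = torch.zeros(1, 1, 128, 128)                # hypothetical: 1s where suction cups sit
suction_map[:, :, 60:68, 60:68] = 1.0
print(model(image, suction_map).shape)                   # torch.Size([1, 3])
```

Swapping grippers then means swapping the gripper map, not retraining a separate network, which is the hot-swap property Abbeel alludes to.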

“People often say neural networks are just black boxes and if you’re doing something new you have to start from scratch. That’s not really true . . . Their strength comes from the fact that you can train end-to-end, you can train from input to the desired output” —Pieter Abbeel, Covariant.ai

And one single neural net is versatile enough for this?

People often say neural networks are just black boxes and if you’re doing something new you have to start from scratch. That’s not really true. I don’t think what’s important about neural nets is that they’re black boxes—that’s not really where their strength comes from. Their strength comes from the fact that you can train end-to-end, you can train from input to the desired output. And you can put modular things in there, like neural nets that are an architecture that’s well suited to visual information, versus end-effector information, and then they can merge their information loads to come to a conclusion. And the beauty is that you can train it all together, no problem.

When your system fails at a pick, what are the consequences?

Here’s where things get very interesting. You think about bringing AI into the physical world—AI has been very successful already in the digital world, but the digital world is much more forgiving. There’s a long tail of scenarios that you could encounter in the real world and you haven’t trained against them, or you haven’t hardcoded against them. And that’s what makes it so hard and why you need really good generalization including few-shot adaptation and so forth. 

Now let’s say you want a system to create value. For a robot in a warehouse, does it need to be 100 percent successful? No, it doesn’t. If, say, it takes a few attempts to pick something, that’s just a slowdown. It’s really the overall successful picks per hour that matter, not how often you have to try to get those picks. And so if periodically it has to try twice, it’s really the picking rate that’s affected, not the success rate that’s affected. A true failure is one where human intervention is needed.

With true failures, where after repeated attempts the robot just can’t pick an item, we’ll get notified by that and we can then train on it, and the next day it might work, but at that moment it doesn’t work. And even if a robotic deployment works 90 percent of the time, that’s not good enough. A human picking station can range from 300 to 2000 picks per hour. 2000 is really rare and is peak pick for a very short amount of time, so if we look at the bottom of that range, 300 picks per hour, if we’re succeeding 90 percent, that means 30 failures per hour. Wow, that’s bad. At 30 fails per hour, fixing those up by a human probably takes more than an hour’s worth of work. So what you’ve done now is you’ve created more work than you save, so 90 percent is definitely a no go. 

At 99 percent that’s 3 failures per hour. If it usually takes a couple of minutes for a human to fix, at that point, a human could oversee 10 stations easily, and that’s where all of a sudden we’re creating value. Or a human could do another job, and just keep an eye on the station and jump in for a moment to make sure it keeps running. If you had a 1000 per hour station, you’d need closer to 99.9 percent to get there and so forth, but that’s essentially the calculus we’ve been doing. And that’s when you realize that any extra nine you want to get is so much more challenging than the previous nine you’ve already achieved.
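
Abbeel’s “extra nine” calculus is easy to reproduce. The short calculation below uses the numbers from the interview plus an assumed fix time of roughly two minutes per true failure.

```python
# Rough reproduction of the failure-rate math from the interview.
# Assumption: a human needs ~2 minutes to clear each true failure.

def stations_per_operator(picks_per_hour: float, success_rate: float,
                          fix_minutes: float = 2.0) -> float:
    failures_per_hour = picks_per_hour * (1.0 - success_rate)
    fix_hours_per_station = failures_per_hour * fix_minutes / 60.0
    if fix_hours_per_station >= 1.0:
        return 0.0   # one person can't even keep a single station running
    return 1.0 / fix_hours_per_station

print(stations_per_operator(300, 0.90))    # 30 failures/hr -> creates more work than it saves
print(stations_per_operator(300, 0.99))    # 3 failures/hr  -> ~10 stations per person
print(stations_per_operator(1000, 0.999))  # a high-rate station needs another nine
```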

Photo: Elena Zhukova

Covariant co-founders (left to right): Tianhao Zhang, Rocky Duan, Peter Chen, and Pieter Abbeel.

There are other companies developing similar approaches to picking—industrial arms, vision systems, suction grippers, neural networks. What makes Covariant’s system work better?

I think it’s a combination of things. First of all, we want to bring to bear any kind of learning—imitation learning, supervised learning, reinforcement learning, all the different kinds of learning you can. And you also want to be smart about how you collect data—what data you collect, what processes you have in place to get the data that you need to improve the system. Then related to that, sometimes it’s not just a matter of data anymore, it’s a matter of, you need to re-architect your neural net. A lot of deep learning progress is made that way, where you come up with new architectures and the new architecture allows you to learn something that otherwise would maybe not be possible to learn. I mean, it’s really all of those things brought together that are giving the results that we’re seeing. So it’s not really like any one that can be singled out as “this is the thing.”

Also, it’s just a really hard problem. If you look at the amount of AI research that was needed to make this work... We started with four people, and we have 40 people now. About half of us are AI researchers, we have some world-leading AI researchers, and I think that’s what’s made the difference. I mean, I know that’s what’s made the difference. 

So it’s not like you’ve developed some sort of crazy new technology or something?

There’s no hardware trick. And we’re not doing, I don’t know, fuzzy logic or something else out of left field all of a sudden. It’s really about the AI stuff that processes everything—underneath it all is a gigantic neural network. 

Okay, then how the heck are you actually making this work?

If you have an extremely uniquely qualified team and you’ve picked the right problem to work on, you can do something that is quite out there compared to what has otherwise been possible. In academic research, people write a paper, and everybody else catches up the moment the paper comes out. We’ve not been doing that—so far we haven’t shared the details of what we actually did to make our system work, because right now we have a technology advantage. I think there will be a day when we will be sharing some of these things, but it’s not going to be anytime soon. 

It probably won’t surprise you that Covariant has been able to lock down plenty of funding (US $27 million so far), but what’s more interesting is some of the individual investors who are now involved with Covariant, who include Geoff Hinton, Fei-Fei Li, Yann LeCun, Raquel Urtasun, Anca Dragan, Michael I. Jordan, Vlad Mnih, Daniela Rus, Dawn Song, and Jeff Dean.

While we’re expecting to see more deployments of Covariant’s software in picking applications, it’s also worth mentioning that their press release is much more general about how their AI could be used:

The Covariant Brain [is] universal AI for robots that can be applied to any use case or customer environment. Covariant robots learn general abilities such as robust 3D perception, physical affordances of objects, few-shot learning and real-time motion planning, which enables them to quickly learn to manipulate objects without being told what to do. 

Today, [our] robots are all in logistics, but there is nothing in our architecture that limits it to logistics. In the future we look forward to further building out the Covariant Brain to power ever more robots in industrial-scale settings, including manufacturing, agriculture, hospitality, commercial kitchens and eventually, people’s homes.

Fundamentally, Covariant is attempting to connect sensing with manipulation using a neural network in a way that can potentially be applied to almost anything. Logistics is the obvious first application, since the value there is huge, and even though the ability to generalize is important, there are still plenty of robot-friendly constraints on the task and the environment as well as safe and low-impact ways to fail. As to whether this technology will effectively translate into the kinds of semi-structured and unstructured environments that have historically posed such a challenge for general purpose manipulation (notably, people’s homes)—as much as we love speculating, it’s probably too early even for that.

What we can say for certain is that Covariant’s approach looks promising both in its present implementation and its future potential, and we’re excited to see where they take it from here.

[ Covariant.ai ]

Japan has had a robust robot culture for decades, thanks (at least in part) to the success of the Gundam series, which are bipedal humanoid robots controlled by a human who rides inside of them. I would tell you how many different TV series and video games and manga there are about Gundam, but I’m certain I can’t count that high—there’s like seriously a lot of Gundam stuff out there. One of the most visible bits of Gundam stuff is a real life full-scale Gundam statue in Tokyo, but who really wants a statue, right? C’mon, Japan! Bring us the real thing!

Gundam Factory Yokohama, which is a Gundam Factory in Yokohama, is constructing an 18-meter-tall, 25-ton Gundam robot. The plan is for the robot to be fully actuated using a combination of electric and hydraulic actuators, achieving “Gundam-like movement” with its 24 degrees of freedom. This will include the ability to walk, which has already been simulated by the University of Tokyo JSK Lab:

Video: Kazumichi Moriyama/Impress

As we all know, simulation is pretty much just as good as reality, which is good because so far simulation is all we have of this robot, including these 1/30 scale models of the robot and the docking and maintenance facility that will be built for it:

Video: RobotStart

Apparently, the robot is coupled to a mobile support system (“Gundam Carrier”) that can move the robot in and out of the docking infrastructure, and perhaps provide power and support while the robot takes a step or two backwards and forwards, but it’s really not at all clear at this point how it’s all supposed to work. And it looks like when the robot does move, it’ll be remote controlled and spectators will be restricted to watching from a nearby building, which experience with watching large robots walk tells us is probably in the best interests of everyone.

Image: Sotsu/Sunrise/Gundam Factory Yokohama

The current schedule is for the robot to be open to the public by October, which seems like it’ll be a challenge—but if anyone can do it, it’s Gundam Factory Yokohama. Because no one else will.

[ Gundam Factory Yokohama ] via [ Impress ] and [ RobotStart ]

Path planning is a general problem for mobile robots, and it takes on special characteristics in marine applications. In addition to avoiding collisions with obstacles, in marine scenarios environmental conditions such as water currents or wind need to be taken into account in the path planning process. In this paper, several solutions based on the Fast Marching Method are proposed. The basic method focuses on collision avoidance and optimal planning; then, using the same underlying method, the influence of marine currents on optimal path planning is detailed. Finally, the application of these methods to marine robot formations is presented.
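To make the idea concrete, here is a minimal sketch (not the authors' code) of Fast Marching-based planning on a 2-D grid, assuming the third-party scikit-fmm package is installed; the grid size, obstacle mask, and "current" region are made up purely for illustration.

```python
# Minimal Fast Marching sketch: compute an arrival-time field to a goal and
# descend its gradient. Assumes scikit-fmm (`pip install scikit-fmm`).
import numpy as np
import skfmm

nx, ny = 100, 100
goal = (80, 80)

# phi is negative only at the goal, so its zero level set is the wavefront source.
phi = np.ones((nx, ny))
phi[goal] = -1

# Speed map: nominal 1.0, slowed where a (hypothetical) adverse current flows.
speed = np.ones((nx, ny))
speed[40:60, 20:40] = 0.3

# Obstacles are masked out so the front cannot propagate through them.
obstacles = np.zeros((nx, ny), dtype=bool)
obstacles[20:30, 50:90] = True
phi = np.ma.MaskedArray(phi, obstacles)

# Travel time from every free cell to the goal.
t = skfmm.travel_time(phi, speed, dx=1.0)

# The (near-)optimal path follows the negative gradient of the time field.
gy, gx = np.gradient(t.filled(t.max()))
pos, path = np.array([5.0, 5.0]), []
for _ in range(2000):
    path.append(pos.copy())
    i = int(np.clip(round(pos[0]), 0, nx - 1))
    j = int(np.clip(round(pos[1]), 0, ny - 1))
    step = -np.array([gy[i, j], gx[i, j]])
    pos = pos + 0.5 * step / (np.linalg.norm(step) + 1e-9)
    if np.hypot(*(pos - goal)) < 1.0:
        break
```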

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

I’ve got to hand it to Boston Dynamics—letting Adam Savage borrow a Spot for a year is a pretty savvy marketing move.

[ Tested ]

The Indian Space Research Organization (ISRO) plans to send a humanoid robot into space later this year. According to a Times of India story, the humanoid is called Vyommitra and will help ISRO prepare for its Gaganyaan manned space flight mission, expected for 2022. Before sending human astronauts, ISRO will send Vyommitra, which can speak but doesn’t move much (it currently has no legs). According to the Times of India, ISRO chief Kailasavadivoo Sivan said the “Gaganyaan mission is not just about sending a human to space, this mission provides us an opportunities to build a framework for long term national and international collaborations and cooperation. We all know that scientific discoveries, economic development, education, tech development and inspiring youth are coming goals for all nations. Human space flight provides perfect platform to meet all these objectives.”

[ Times of India ]

Soft robots have applications in safe human-robot interactions, manipulation of fragile objects, and locomotion in challenging and unstructured environments. In this paper, we present a computational method for augmenting soft robots with proprioceptive sensing capabilities. Our method automatically computes a minimal stretch-receptive sensor network for user-provided soft robotic designs, which is optimized to perform well under a set of user-specified deformation-force pairs. The sensorized robots are able to reconstruct their full deformation state under interaction forces. We cast our sensor design as a sub-selection problem, selecting from a large set of fabricable sensors the minimal subset that minimizes the error when sensing the specified deformation-force pairs. Unique to our approach is the use of an analytical gradient of our reconstruction performance measure with respect to the selection variables. We demonstrate our technique on a bending bar and a gripper, and illustrate more complex designs with a simulated tentacle.
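The sub-selection idea can be illustrated with a much cruder, greedy stand-in for the paper's gradient-based approach: pick sensors one at a time so that the least-squares reconstruction error over the specified deformation-force pairs drops the most. Everything below (matrix sizes, the linear model, the budget) is an assumption for illustration only.

```python
# Greedy sensor subset selection -- a crude stand-in for the paper's
# gradient-based sub-selection. Assumes a linear model: each candidate sensor
# produces one reading per training sample (rows of S), and D holds the
# deformation states we want to reconstruct from the chosen sensors.
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_samples, n_state = 200, 50, 12
S = rng.normal(size=(n_candidates, n_samples))   # candidate sensor readings per sample
D = rng.normal(size=(n_state, n_samples))        # deformation states to reconstruct

def reconstruction_error(subset):
    """Least-squares error when predicting D from the chosen sensors' readings."""
    X = S[subset]                                  # (k, n_samples)
    W, *_ = np.linalg.lstsq(X.T, D.T, rcond=None)  # fit D ~ (X.T @ W).T
    return np.linalg.norm(D - (X.T @ W).T)

selected, budget = [], 10
for _ in range(budget):
    remaining = [i for i in range(n_candidates) if i not in selected]
    best = min(remaining, key=lambda i: reconstruction_error(selected + [i]))
    selected.append(best)

print("chosen sensors:", selected)
print("final error:", reconstruction_error(selected))
```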

[ Disney Research ]

Dragonfly is a rotorcraft lander that will explore Saturn’s large moon Titan. The sampling system called DrACO (Drill for Acquisition of Complex Organics) will extract material from Titan’s surface and deliver it to DraMS (Dragonfly Mass Spectrometer, provided by NASA Goddard Space Flight Center). Honeybee Robotics will build the end-to-end DrACO system (including hardware, avionics, and flight software) and will command its operation once Dragonfly lands on Titan in 2034.

[ Honeybee Robotics ]

DARPA’s Gremlins program has completed the first flight test of its X-61A vehicle. The test in late November at the U.S. Army’s Dugway Proving Ground in Utah included one captive-carry mission aboard a C-130A and an airborne launch and free flight lasting just over an hour-and-a-half.

The goal for this third phase of the Gremlins program is completion of a full-scale technology demonstration series featuring the air recovery of multiple, low-cost, reusable unmanned aerial systems (UASs), or “Gremlins.” Safety, reliability, and affordability are the key objectives for the system, which would launch groups of UASs from multiple types of military aircraft while out of range from adversary defenses. Once Gremlins complete their mission, the transport aircraft would retrieve them in the air and carry them home, where ground crews would prepare them for their next use within 24 hours.

[ DARPA ]

This is only sort of a robot, more of an automated system, but I like the idea: dog training!

[ CompanionPro ]

Free-falling paper shapes exhibit rich, complex and varied behaviours that are extremely challenging to model analytically. Physical experimentation aids in system understanding, but is time-consuming, sensitive to initial conditions and reliant on subjective visual behavioural classification. In this study, robotics, computer vision and machine learning are used to autonomously fabricate, drop, analyse and classify the behaviours of hundreds of shapes.

[ Nature ]

This paper introduces LiftTiles, modular inflatable actuators for prototyping room-scale shape-changing interfaces. Each inflatable actuator has a large footprint (e.g., 30 cm x 30 cm) and enables large-scale shape transformation. The actuator is fabricated from a flexible plastic tube and constant force springs. It extends when inflated and retracts by the force of its spring when deflated. By controlling the internal air volume, the actuator can change its height from 15 cm to 150 cm.

We designed each module to be low cost (e.g., 8 USD), lightweight (e.g., 1.8 kg), and robust (e.g., withstanding more than 10 kg of weight), so that it is suitable for rapid prototyping of room-sized interfaces. Our design utilizes constant force springs to provide greater scalability, simplified fabrication, and stronger retraction force, all essential for large-scale shape-change.

[ LiftTiles ]

Aibo may not be the most fearsome security pupper, but it does have what other dogs don’t: Wireless connectivity, remote control, and a camera.

[ Aibo ]

I missed this Toyota HSR demo at CES, which is really too bad because I really could have used a snack.

[ NEU ]

The HKUST Aerial Robotics Group has some impressive real-time drone planning that’ll be presented at ICRA 2020:

[ Paper ]

Gripping something tricky? When in doubt, just add more fingers.

[ Soft Robotics ]

Demo of the project by Nino Di Pasquale, Matthieu Le Cauchois, Alejandra Plaice, and Joël Zbinden. The goal was to program, in Python, and combine in one project elements of global path planning, local path planning, Bayesian filtering for pose estimation, and computer vision. The video presents the real-time visualization interface alongside the real video of the setup, with the Thymio robot controlled over a wireless connection by the computer running the program.
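For readers curious what the Bayesian-filtering piece might look like, here is a minimal particle-filter sketch for 2-D pose estimation in Python; the noise levels and the landmark range measurement are assumptions for illustration, not the students' actual code.

```python
# Minimal particle filter for (x, y, heading) pose estimation -- illustrative
# only; motion noise, the range-to-landmark measurement model, and all numbers
# are made up.
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.uniform([0, 0, -np.pi], [1, 1, np.pi], size=(N, 3))
weights = np.full(N, 1.0 / N)

def predict(particles, v, omega, dt=0.1):
    """Propagate each particle with the commanded velocity plus motion noise."""
    theta = particles[:, 2]
    particles[:, 0] += (v * dt) * np.cos(theta) + rng.normal(0, 0.01, N)
    particles[:, 1] += (v * dt) * np.sin(theta) + rng.normal(0, 0.01, N)
    particles[:, 2] += omega * dt + rng.normal(0, 0.02, N)

def update(particles, weights, z, landmark, sigma=0.05):
    """Reweight particles by how well their predicted range matches measurement z."""
    d = np.hypot(particles[:, 0] - landmark[0], particles[:, 1] - landmark[1])
    weights *= np.exp(-0.5 * ((d - z) / sigma) ** 2) + 1e-300
    weights /= weights.sum()

def resample(particles, weights):
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

# One filter step: drive forward, then observe a 0.4 m range to a landmark at (0.5, 0.5).
predict(particles, v=0.1, omega=0.0)
update(particles, weights, z=0.4, landmark=(0.5, 0.5))
particles, weights = resample(particles, weights)
print("estimated pose:", np.average(particles, axis=0, weights=weights))
```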

[ EPFL ]

From public funding opportunities to the latest technologies in software and system integration, from the combination of robotics and IT to hardware and application highlights, plus updates on new platforms and open-source communities: over three days in December, ROS-Industrial Conference 2019 offered a varied and top-class programme to more than 150 attendees.

[ ROS-I Consortium ]

Aaron Johnson and his students have been exploring whether hoof-inspired feet can help robots adapt to rough terrain without needing to exhaustively plan out every step.

There’s no paper or anything yet, but Aaron did give a talk at Dynamic Walking 2018.

[ Robomechanics Lab ]

YouTube has put some money into an original eight-episode series on robots and AI, featuring some well-known roboticists. Here are a couple of the more robot-y episodes:

You can watch the whole series at the link below.

[ Age of AI ]

On the AI Podcast, Lex Fridman speaks with Ayanna Howard from Georgia Tech.

[ Lex Fridman ]

Emotional deception and emotional attachment are regarded as ethical concerns in human-robot interaction. Considering these concerns is essential, particularly as little is known about longitudinal effects of interactions with social robots. We ran a longitudinal user study with older adults in two retirement villages, where people interacted with a robot in a didactic setting for eight sessions over a period of 4 weeks. The robot would show either non-emotive or emotive behavior during these interactions in order to investigate emotional deception. Questionnaires were given to investigate participants' acceptance of the robot, perception of the social interactions with the robot and attachment to the robot. Results show that the robot's behavior did not seem to influence participants' acceptance of the robot, perception of the interaction or attachment to the robot. Time did not appear to influence participants' level of attachment to the robot, which ranged from low to medium. The perceived ease of using the robot significantly increased over time. These findings indicate that a robot showing emotions—and perhaps resulting in users being deceived—in a didactic setting may not by default negatively influence participants' acceptance and perception of the robot, and that older adults may not become distressed if the robot would break or be taken away from them, as attachment to the robot in this didactic setting was not high. However, more research is required as there may be other factors influencing these ethical concerns, and support through other measurements than questionnaires is required to be able to draw conclusions regarding these concerns.

NASA has decided that humans are going back to the Moon. That’s great! Before that actually happens, a whole bunch of other things have to happen, and excitingly, many of those things involve robots. As a sort of first-ish step, NASA is developing a new lunar rover called VIPER (Volatiles Investigating Polar Exploration Rover). VIPER’s job is to noodle around the permanently shaded craters at the Moon’s south pole looking for water ice, which can (eventually) be harvested and turned into breathable air and rocket fuel.

An engineering model of the Volatiles Investigating Polar Exploration Rover, or VIPER, is tested in the Simulated Lunar Operations Laboratory at NASA’s Glenn Research Center in Cleveland, Ohio. About the size of a golf cart, VIPER is a mobile robot that will roam around the Moon’s South Pole looking for water ice in the region and for the first time ever, actually sample the water ice at the same pole where the first woman and next man will land in 2024 under the Artemis program.

In the video, the VIPER engineering model is enjoying playtime in simulated lunar regolith (not stuff that you want to be breathing, hence the fancy hats) to help characterize the traction of the wheels on different slopes, and to help figure out how much power will be necessary. The final rover might look a bit more like this:

VIPER is more than somewhat similar to an earlier rover that NASA was working on called Resource Prospector, which was cancelled back in 2018. Resource Prospector was also scheduled to go to the Moon’s south pole to look for water ice, and VIPER will be carrying some of the instruments originally developed for Resource Prospector. If it seems a little weird that NASA cancelled Resource Prospector only to almost immediately start work on VIPER, well, yeah—the primary difference between the two rovers seems to be that VIPER is intended to spend several months operating, while Resource Prospector’s lifespan was only a couple of weeks.

The other big difference between VIPER and Resource Prospector is that NASA has been gradually shifting away from developing all of its own hardware in-house, and VIPER is no exception. One of the primary science instruments, a drilling system called TRIDENT (The Regolith and Ice Drill for Exploring New Terrain, obviously), comes from Honeybee Robotics, which has contributed a variety of tools that have been used to poke and prod at the surface of Mars on the Mars Exploration rovers, Phoenix, and Curiosity. There’s nothing wrong with this, except that for VIPER, it looks like NASA wants a commercial delivery system as well.

Finding water ice on the Moon is the first step towards the in-situ resource utilization (ISRU) robots necessary to practically sustain a long-term lunar mission

Last week, Space News reported that NASA is postponing procurement of a commercially-developed lander that would deliver VIPER to the lunar surface, meaning that not only does VIPER not have a ride to the Moon right now, but that it’s not very clear when it’ll actually happen—as recently as last November, the plan was to have a lander selected by early February, for a landing in late 2022. From the sound of things, the problem is that VIPER is a relatively chunky payload (weighing about 350 kg), meaning that only a few companies have the kind of hardware that would be required to get it safely to the lunar surface, and NASA has a limited budget that also has to cover a bunch of other stuff at the same time.

This delay is unfortunate, because VIPER plays an important part in NASA’s overall lunar strategy. Finding water ice on the Moon is the first step towards the in-situ resource utilization (ISRU) robots necessary to practically sustain a long-term lunar mission, and after that, it’ll take a bunch more work to actually deploy a system to harvest ice and turn it into usable hydrogen and oxygen with enough reliability and volume to make a difference. We have the technology—we’ve just got to get it there, and get it working. 

[ VIPER ]

Suction is a useful tool in many robotic applications, as long as those applications are grasping objects that are suction-friendly—that is, objects that are impermeable and generally smooth-ish and flat-ish. If you can’t form a seal on a surface, your suction gripper is going to have a bad time, which is why you don’t often see suction systems working outside of an environment that’s at least semi-constrained. Warehouses? Yes. Kitchens? Maybe. The outdoors? Almost certainly not.

In general, getting robotic grippers (and robots themselves) to adhere to smooth surfaces and rough surfaces requires completely different technology. But researchers from Zhejiang University in China have come up with a new kind of suction gripper that can very efficiently handle surfaces like widely-spaced tile and even rough concrete, by augmenting the sealing system with a spinning vortex of water.

Image: Zhejiang University To climb, the robot uses a micro-vacuum pump coupled to a rapidly rotating fan and a water source. Centripetal force causes the spinning water to form a ring around the outside of the vacuum chamber. Because water can get into all those surface irregularities that doom traditional vacuum grippers, the seal is much stronger.

The paper is a little bit dense, but from what I can make out, what’s going on is that you’ve got a traditional suction gripper with a vacuum pump, modified with a water injection system and a fan. The fan has nothing to do with creating or maintaining a vacuum—its job is to get the water spinning at up to 90 rotations per second. Centripetal force causes the spinning water to form a ring around the outside of the vacuum chamber, which keeps the water from being sucked out through the vacuum pump while also maintaining a liquid seal between the vacuum chamber and the surface. Because water can get into all of those annoying little nooks and crannies that can mean doom for traditional vacuum grippers, the seal is much better, resulting in far higher performance, especially on surfaces with high roughness.

Photo: Zhejiang University One of the potential applications for the water-vortex suction robot is as a “Spider-Man” wall-climbing device.

For example, a single suction unit weighing 0.8 kg was able to generate a suction force of over 245 N on a rough surface using less than 400 W, while a traditional suction unit of the same footprint would need several thousand watts (and weigh dozens of kilograms) to generate a comparable amount of suction, since the rough surface would cause a significant amount of leakage (although not a loss of suction). At very high power, the efficiency does decrease a bit— the “Spider-Man” system weighs 3 kg per unit, with a suction force of 2000 N using 650 W.
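As a back-of-the-envelope check on those numbers (my own arithmetic, not from the paper), suction force is roughly the pressure difference times the sealed area, so you can estimate what pressure difference the water seal would need to sustain for a given cup size; the 10-centimeter sealing diameter below is an assumed value purely for illustration.

```python
# Rough check relating suction force to pressure difference and sealed area:
# F = dP * A. The 10 cm effective sealing diameter is an assumption.
import math

force_n = 245.0                    # reported suction force on a rough surface
diameter_m = 0.10                  # assumed effective sealing diameter
area_m2 = math.pi * (diameter_m / 2) ** 2

dp_pa = force_n / area_m2
print(f"required pressure difference: {dp_pa / 1000:.0f} kPa "
      f"({dp_pa / 101325:.2f} atm)")   # ~31 kPa, roughly a third of an atmosphere
```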

And as for the downsides? Er, well, it does kind of leak all over the place, especially when disengaging. The “Spider-Man” version leaks over 2 liters per minute. It’s only water, but still. And since it leaks, it needs to be provided with a constant water supply, which limits its versatility. The researchers are working on ways of significantly reducing water consumption to make the system more independent, but personally, I feel like the splooshyness is part of the appeal.

“Vacuum suction unit based on the zero pressure difference method,” by Kaige Shi and Xin Li from Zhejiang University in China, is published in Physics of Fluids.

Natural language is inherently a discrete symbolic representation of human knowledge. Recent advances in machine learning (ML) and natural language processing (NLP) seem to contradict this intuition: discrete symbols are fading away, erased by vectors or tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and discrete symbols, the former being an approximation of the latter. A clearer understanding of this strict link may lead to radically new deep learning networks. In this paper we present a survey that aims to renew the link between symbolic representations and distributed/distributional representations. This is the right time to revitalize the area of interpreting how discrete symbols are represented inside neural networks.

Most collaborative tasks require interaction with everyday objects (e.g., utensils while cooking). Thus, robots must perceive everyday objects in an effective and efficient way. This highlights the necessity of understanding environmental factors and their impact on visual perception, such as illumination changes throughout the day, for robotic systems in the real world. In object recognition, two of these factors are changes due to illumination of the scene and differences in the sensors capturing it. In this paper, we present data augmentations for object recognition that enhance a deep learning architecture. We show how simple linear and non-linear illumination models and feature concatenation can be used to improve deep learning-based approaches. The aim of this work is to allow for more realistic Human-Robot Interaction scenarios with a small amount of training data, in combination with incremental interactive object learning. This will benefit interaction with the robot and maximize object learning for long-term, location-independent learning in unshaped environments. With our model-based analysis, we showed that changes in illumination affect recognition approaches that use Deep Convolutional Neural Networks to encode features for object recognition. Using data augmentation, we were able to show that such a system can be modified toward more robust recognition without retraining the network. Additionally, we have shown that using simple brightness change models can help to improve recognition across all training set sizes.
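As a flavor of what "simple linear and non-linear illumination models" can mean in practice, here is a minimal augmentation sketch in Python/NumPy; it is a generic illustration of brightness (linear) and gamma (non-linear) changes, not the authors' exact models or parameters.

```python
# Illustrative illumination augmentations (generic, not the paper's exact
# models): a linear brightness/contrast change and a non-linear gamma change.
import numpy as np

def linear_illumination(img, gain=1.2, bias=10.0):
    """img in [0, 255]; out = gain * img + bias, clipped back to [0, 255]."""
    return np.clip(gain * img.astype(np.float32) + bias, 0, 255).astype(np.uint8)

def gamma_illumination(img, gamma=0.7):
    """Non-linear brightness change: out = 255 * (img / 255) ** gamma."""
    return (255.0 * (img.astype(np.float32) / 255.0) ** gamma).astype(np.uint8)

def augment(img, rng):
    """Randomly apply one of the two illumination models to an H x W x 3 image."""
    if rng.random() < 0.5:
        return linear_illumination(img, gain=rng.uniform(0.7, 1.3),
                                   bias=rng.uniform(-20, 20))
    return gamma_illumination(img, gamma=rng.uniform(0.5, 1.5))

rng = np.random.default_rng(0)
fake_image = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
augmented = augment(fake_image, rng)
```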

Repertoire-based learning is a data-efficient adaptation approach based on a two-step process in which (1) a large and diverse set of policies is learned in simulation, and (2) a planning or learning algorithm chooses the most appropriate policies according to the current situation (e.g., a damaged robot, a new object, etc.). In this paper, we relax the assumption of previous works that a single repertoire is enough for adaptation. Instead, we generate repertoires for many different situations (e.g., with a missing leg, on different floors, etc.) and let our algorithm select the most useful prior. Our main contribution is an algorithm, APROL (Adaptive Prior selection for Repertoire-based Online Learning), that plans the next action by incorporating these priors when the robot has no information about the current situation. We evaluate APROL on two simulated tasks: (1) pushing unknown objects of various shapes and sizes with a robotic arm and (2) a goal-reaching task with a damaged hexapod robot. We compare against “Reset-free Trial and Error” (RTE) and various single-repertoire-based baselines. The results show that APROL solves both tasks in less interaction time than the baselines. Additionally, we demonstrate APROL on a real, damaged hexapod that quickly learns to pick compensatory policies to reach a goal while avoiding obstacles in its path.
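The core loop of choosing the most useful prior can be sketched very roughly as follows (a simplified stand-in, not the APROL implementation): each repertoire keeps a score for how well its predicted outcomes have matched reality so far, and the robot plans its next action from the current best-scoring repertoire. The toy repertoires, Gaussian observation model, and single-step "planning" are all assumptions for illustration.

```python
# Very simplified stand-in for adaptive prior (repertoire) selection:
# score each repertoire by how well its predictions match observed outcomes,
# then plan greedily from the best-scoring one.
import numpy as np

rng = np.random.default_rng(2)
n_repertoires, n_policies = 3, 50

# Each repertoire maps policy index -> predicted 2-D displacement.
repertoires = [rng.normal(size=(n_policies, 2)) for _ in range(n_repertoires)]
log_scores = np.zeros(n_repertoires)

def true_outcome(policy_idx):
    # Toy "real robot": behaves like repertoire 1, plus noise.
    return repertoires[1][policy_idx] + rng.normal(0, 0.05, 2)

goal = np.array([1.0, 1.0])
for step in range(20):
    best_rep = int(np.argmax(log_scores))                          # current best prior
    preds = repertoires[best_rep]
    policy = int(np.argmin(np.linalg.norm(preds - goal, axis=1)))  # greedy plan
    outcome = true_outcome(policy)
    # Update every repertoire's score with the log-likelihood of the outcome
    # under its own prediction (Gaussian observation model assumed).
    for r in range(n_repertoires):
        err = np.linalg.norm(outcome - repertoires[r][policy])
        log_scores[r] += -0.5 * (err / 0.1) ** 2

print("selected repertoire:", int(np.argmax(log_scores)))  # should settle on 1
```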

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

The Real-World Deployment of Legged Robots Workshop is back at ICRA 2020!

We’ll be there!

[ Workshop ]

Thanks Marko!

This video shows some cool musical experiments with Pepper. They should definitely release this karaoke feature to Peppers everywhere—with “Rage Against the Machine” songs included, of course. NSFW warning: There is some swearing by both robot and humans, so headphones recommended if you’re at work.

It all started when, on a whim, David and another team member fed a karaoke file into Pepper’s text-to-speech with a quick Python script, playing some music in parallel from their PC. The effect was a bit strange, but there was something so fun (and funny) about it. I think they were going for a virtual performance from Pepper or something, but someone noted that it sounds like he’s struggling like someone doing karaoke. And from there it grew into doing duets with Pepper.

This thing might seem ridiculous, and it is. But believe me, it’s genuinely fun. It was going all night in a meeting room at the office winter party.
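If you want to try something similar with your own Pepper, the quick-script idea looks roughly like this: a hedged sketch using the NAOqi Python SDK's ALTextToSpeech service, where the robot IP, the tab-separated lyrics-file format, and the timing scheme are all my own assumptions rather than the script described above.

```python
# -*- coding: utf-8 -*-
# Rough sketch of the "karaoke file into text-to-speech" trick, assuming the
# (Python 2) NAOqi SDK and a plain-text lyrics file with one "seconds<TAB>line"
# entry per row. Start the backing track on the PC when you launch the script.
import time
from naoqi import ALProxy

PEPPER_IP = "192.168.1.10"        # assumed robot address
tts = ALProxy("ALTextToSpeech", PEPPER_IP, 9559)

with open("lyrics.txt") as f:
    cues = [line.rstrip("\n").split("\t", 1) for line in f if "\t" in line]

start = time.time()
for stamp, lyric in cues:
    # Wait until this line's timestamp, then let Pepper "sing" it.
    delay = float(stamp) - (time.time() - start)
    if delay > 0:
        time.sleep(delay)
    tts.say(lyric)
```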

[ Taylor Veltrop ]

And now, this.

In “Scary Beauty,” a performance conceived and directed by Tokyo-based musician Keiichiro Shibuya, a humanoid robot called Alter 3 not only conducts a human orchestra but also sings along with it. 

Unlike the previous two Alters, Alter 3 has improved sensing and expressive capabilities that are closer to a human’s, such as cameras in both eyes and the ability to produce sound from its mouth, along with expressiveness around the mouth for singing. Its output has also been increased compared to Alter 2, improving the immediacy of its body expression and enabling more dynamic movement. Portability is another evolution of Alter 3: anyone can disassemble, reassemble, and transport it by air.

[ Scary Beauty ] via [ RobotStart ]

Carnegie Mellon University’s Henny Admoni studies human behavior in order to program robots to better anticipate people’s needs. Admoni’s research focuses on using assistive robots to address different impairments and aid people in living more fulfilling lives.

[ HARP Lab ]

Olympia was produced as part of a two-year project exploring the growth of social and humanoid robotics in the UK and beyond. Olympia was shot on location at Bristol Robotics Labs, one of the largest of its kind in Britain.

Humanoid robotics - one of the most complex and often provocative areas of artificial intelligence - forms the central subject of this short film. At what point are we willing to believe that we might form a real bond with a machine?

[ Olympia ] via [ Bristol Robotics Lab ]

In this work, we explore user preferences for different modes of autonomy for robot-assisted feeding given perceived error risks and also analyze the effect of input modalities on technology acceptance.

[ Personal Robotics Lab ]

This video presents work on a multi-agent system of aerial robots that form mid-air structures by docking, using position-based visual servoing of the aerial robots. For the demonstration, the commercially available DJI Tello drone has been modified to suit the task and is commanded using the DJI Tello Python SDK.
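For context on how a Tello is typically commanded from Python, here is a minimal sketch that sends the plain-text commands documented in the official Tello SDK ("command", "takeoff", "rc a b c d", "land") over UDP and streams velocity setpoints the way a position-based visual-servoing loop would; the gain and the get_position_error() stub are placeholders, not the project's code.

```python
# Minimal Tello command sketch over UDP using the official plain-text SDK
# commands. The servoing gain and get_position_error() are placeholders.
import socket
import time

TELLO_ADDR = ("192.168.10.1", 8889)   # Tello's documented command address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9000))                 # local port for the Tello's replies

def send(cmd):
    sock.sendto(cmd.encode("ascii"), TELLO_ADDR)

def get_position_error():
    """Placeholder: in the real system this would come from the vision pipeline."""
    return 0.2, -0.1, 0.0             # x, y, z error in meters (made up)

send("command")                       # enter SDK mode
time.sleep(1)
send("takeoff")
time.sleep(5)

K = 100                               # proportional gain mapping meters -> rc units
for _ in range(100):                  # ~10 s of servoing at 10 Hz
    ex, ey, ez = get_position_error()
    clamp = lambda v: max(-100, min(100, int(K * v)))
    # rc <left/right> <forward/back> <up/down> <yaw>, each in [-100, 100]
    send("rc {} {} {} 0".format(clamp(ey), clamp(ex), clamp(ez)))
    time.sleep(0.1)

send("rc 0 0 0 0")
send("land")
```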

[ YouTube ]

The video presents DLR CLASH (Compliant Low-cost Antagonistic Servo Hand), developed within the EU project Soma (grant number H2020-ICT-645599), and shows the hand's resilience tests and its capability to grasp objects under different motor and sensor failures.

[ DLR ]

Squishy Robotics is celebrating our birthday! Here is a short montage of the places we’ve been and the things we’ve done over the last three years.

[ Squishy Robotics ]

The 2020 DJI RoboMaster Challenge takes place in Shenzhen in early August 2020.

[ RoboMaster ]

With support from the National Science Foundation, electrical engineer Yan Wan and a team at the University of Texas at Arlington are developing a new generation of "networked" unmanned aerial vehicles (UAVs) to bring long distance, broadband communications capability to first responders in the field.

[ NSF ]

Drones and UAVs are vulnerable to hackers that might try to take control of the craft or access data stored on-board. Researchers at the University of Michigan are part of a team building a suite of software to keep drones secure.

The suite is called Trusted and Resilient Mission Operations (TRMO). The U-M team, led by Wes Weimer, professor of electrical engineering and computer science, is focused on integrating the different applications into a holistic system that can prevent and combat attacks in real time.

[ UMich ]

A mobile robot that revs up industrial production: SOTO enables efficient automated line feeding, for example in the automotive industry. The supply chain robot SOTO brings materials to the assembly line, just in time and completely autonomously.

[ Magazino ]

MIT’s Lex Fridman gets us caught up with the state of the art in deep learning.

[ MIT ]

Just in case you couldn’t make it out to Australia in 2018, here are a couple of the keynotes from ICRA in Brisbane.

[ ICRA 2018 ]
