When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

Yang is impressed by the different robotic systems being deployed as part of the COVID-19 response. There are robots checking patients for fever, robots disinfecting hospitals, and robots delivering medicine and food. But he thinks robotics can do even more.

Photo: Shanghai Jiao Tong University. Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.

“Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak,” he says. While the robots currently being used rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.

“What I fear is that there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”

Yang calls for a global effort to tackle the problem. “In terms of the way to move forward, I think we need to be more coordinated globally,” he says. “Because many of the challenges require that we work collectively to deal with them.”

Our full conversation, edited for clarity and length, is below.

IEEE Spectrum: How is the situation in Shanghai?

Guang-Zhong Yang: I came back to Shanghai about 10 days ago, via Hong Kong, so I’m now under self-imposed isolation in a hotel room for two weeks, just to be cautious. The general feeling in Shanghai is that it’s really calm and orderly. Everything seems well under control. And as you probably know, in recent days the number of new cases has been steadily dropping. So the main priority for the government is to restore normal routines, and also for companies to go back to work. Of course, people are still very cautious, and there are systematic checks in place. In my hotel, for instance, my temperature is checked twice a day to make sure that all the people in the hotel are well.

Are most people staying inside, are the streets empty?

No, the streets are not empty. In fact, in Minhang, next to Shanghai Jiao Tong University, things are going back to normal. Not at full capacity, but stores and restaurants are gradually opening. And people are thinking about which trips are essential and what they can do remotely. As you know, in China we have very good online ordering and delivery services, so people are using them a lot more. I was really impressed by how the whole thing got under control.

Has Shanghai Jiao Tong University switched to online classes?

Yes. Since last week, the students have been attending online lectures. The university has 1,449 courses for undergrads and 657 for graduate students. I participated in some of them. It’s really well run. You can have the typical format with a presenter teaching the class, but you can also have part of the lecture with the students divided into groups having discussions. Of course, what’s really affected is laboratory-based work. So we’ll need to wait a while longer to get back into action.

What do you think of the robots being used to help fight the outbreak?

I’ve seen reports showing a variety of robots being deployed. Disinfection robots that use UV light in hospitals. Drones being used for transporting samples. There’s a prototype robot, developed by the Chinese Academy of Sciences, to remotely collect oropharyngeal swabs from patients for testing, so a medical worker doesn’t have to directly swab the patient. In my hotel, there’s a robot that brings my meals to my door. This little robot can manage to get into the lift, go to your room, and call you to open the door. I’m a roboticist myself and I find it striking how well this robot works every time! [Laughs.]

Photo: UVD Robots. UVD Robots has shipped hundreds of ultraviolet-C disinfection robots like the one above to Chinese hospitals.

After Japan’s Fukushima nuclear emergency, the robotics community realized that it needed to be better prepared. It seems that we’ve made progress with disaster-response robots, but what about dealing with pandemics?

I think that when events involving infectious diseases, like this coronavirus outbreak, happen, everybody realizes the importance of robots. The challenge is that at most research institutions, people are more concerned with specific research topics, and that’s indeed the work of a scientist—to dig deep into the scientific issues and solve those specific problems. But we also need to have a global view to deal with big challenges like this pandemic.

So I think what we need to do, starting now, is mount a more systematic effort to make sure those robots can be deployed when we need them. We need to recompose ourselves and identify which technologies are ready to be deployed and which key directions we need to pursue. There’s a lot we can do. It’s not too late. Because this is not going to disappear. We may have to see the worst before things get better.

So what should we do to be better prepared?

After a major crisis, when everything is under control, people’s priority is to go back to their normal routines. The last thing on people’s minds is, What should we do to prepare for the next crisis? And the thing is, you can’t predict when the next crisis will happen. So I think we need three levels of action, and it really has to be a global effort. One is at the government level, in particular funding agencies: making sure we plan ahead and prepare for the worst.

Another level is the robotics community, including organizations like the IEEE: we need leadership to advocate for these issues and promote activities like robotics challenges. We see challenges for disasters, logistics, drones—how about a robotics challenge for infectious diseases? I was surprised, and a bit disappointed in myself, that we didn’t think about this before. So for the editorial board of Science Robotics, for instance, this will become an important topic to rethink.

And the third level is our interaction with front-line clinicians, which needs to be stronger. We need to understand their requirements and not be obsessed with pure technology, so we can ensure that our systems are effective, safe, and rapidly deployable. I think that if we can mobilize and coordinate our efforts at all three levels, that would be transformative. And we’ll be better prepared for the next crisis.

Are there projects taking place at the Institute of Medical Robotics that could help with this pandemic?

The institute has been in full operation for just over a year now. We have three main areas of research: The first is surgical robotics, which is my own area. The second is rehabilitation and assistive robots. The third is hospital and laboratory automation. One important lesson we learned from the coronavirus is that if we can detect and intervene early, we have a better chance of containing it. And for other diseases, it’s the same. For cancer, early detection based on imaging and other sensing technologies is critical. So that’s something we want to explore—how robotics, including technologies like laboratory automation, can help with early detection and intervention.

“One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks.”

One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks. Some tasks could be performed remotely by medical personnel, while other tasks could be fully automated. A lot of the technologies that we already use in surgical robotics can be translated into this area. We’re hoping to work with other institutions and share our expertise to continue developing this further. Indeed, this technology is not just for emergency situations. It will also be useful for routine management of infectious disease patients. We really need to rethink how hospitals are organized in the future to avoid unnecessary exposure and cross-infection.

Photo: Shanghai Jiao Tong University. Shanghai Jiao Tong University’s Institute of Medical Robotics is researching areas like micro/nano systems, surgical and rehabilitation robotics, and human-robot interaction.

I’ve seen some recent headlines—“China’s tech fights back,” “Coronavirus is the first big test for futuristic tech”—many people expect technology to save the day.

When there’s a major crisis like this pandemic, the general public wants to find a magic cure that will solve all the problems. I completely understand that expectation. But technology can’t always do that, of course. What technology can do is help us be better prepared. For example, it’s clear that in the last few years self-navigating robots with localization and mapping have become a mature technology, so we should see more of them used in situations like this. I’d also like to see more technologies developed for front-line management of patients, like the robotic ICU I mentioned earlier. Another area is public transportation systems—can they have an element of disease prevention, using technology to minimize the spread of diseases so that lockdowns are only imposed as a last resort?

And then there’s the problem of people being isolated. You probably saw that Italy has imposed a total lockdown. That could have a major psychological impact, particularly for people who are vulnerable and living alone. There is one area of robotics, called social robotics, that could play a part in this as well. I’ve been in this hotel room by myself for days now—I’m really starting to feel the isolation…

We should have done a Zoom call.

Yes, we should. [Laughs.] I guess this isolation, or quarantine for various people, also provides the opportunity for us to reflect on our lives, our work, our daily routines. That’s the silver lining that we may see from this crisis.

Photo: Unity Drive Innovation. Unity Drive, a startup spun out of the Hong Kong University of Science and Technology, is deploying self-driving vehicles to carry out contactless deliveries in three Chinese cities.

While some people say we need more technology during emergencies like this, others worry that companies and governments will use things like cameras and facial recognition to increase surveillance of individuals.

A while ago we published an article in Science Robotics listing the 10 grand challenges for robotics. One of the grand challenges concerns legal and ethical issues, which include what you mentioned in your question. Respecting privacy, and being sensitive about individual and citizens’ rights—these are very, very important. Because we must operate within this legal and ethical boundary. We should not use technologies that intrude on people’s lives. You mentioned that some people say we don’t have enough technology, and that others say we have too much. And I think both have a point. What we need to do is develop technologies that are appropriate to deploy in the right situation and for the right tasks.

Many researchers seem eager to help. What would you say to roboticists interested in helping fight this outbreak or prepare for the next one?

For medical robotics research, my experience is that for your technology to be effective, it has to be application oriented. You need to ensure that end users, such as the clinicians who will use your robot or, in the case of assistive robots, the patients, are deeply involved in the development of the technology. And the second thing is really to think outside the box and develop radically different new technologies. Because robotics research is very hands on, there’s a tendency to adapt what’s readily available out there. For your technology to have a major impact, you need to fundamentally rethink your research and innovation, not just follow the waves.

For example, at our institute we’re investing a lot of effort in the development of micro and nano systems, and also new materials that could one day be used in robots. Because for microrobotic systems, we can’t rely on the traditional approach of motors and gears that we use in larger systems. So my suggestion is to work on technologies that not only have a deep science element but can also become part of a real-world application. Only then can we be sure to have strong technologies to deal with future crises.


Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?

Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.

The company’s first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos’ predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.

Strateos CEO Mark Fischer-Colbrie explains the company’s process:

“It starts with an intake kit,” he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos’ graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds, biochemical processes including how compounds react to enzymes or where compounds bind to molecules, and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.

Photo: Strateos. A robot in one of Strateos’ cloud labs manages inventory.

“Our approach is fully automated and programmable,” Fischer-Colbrie says. “That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers, are accessible.

“The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow.”
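To make “programmable” concrete, here’s a rough sketch of what defining such an experiment in code might look like. This is purely illustrative: the step names, parameters, and submit() function are invented for this article, not Strateos’ actual API.

```python
# Hypothetical sketch of a remotely submitted experiment. The step names,
# parameters, and submit() function are invented, not Strateos' actual API.

protocol = {
    "name": "enzyme_binding_assay",
    "container": "96-well-plate-A1",   # matched to a barcode from the intake kit
    "steps": [
        {"op": "dispense", "reagent": "compound_X", "volume_ul": 25},
        {"op": "incubate", "duration_min": 30, "temp_c": 37, "shaking": "slow"},
        {"op": "read_absorbance", "wavelength_nm": 450},
    ],
}

def submit(protocol: dict) -> str:
    """Pretend to queue each step on a remote robotic workcell."""
    for step in protocol["steps"]:
        print(f"queued {step['op']}: {step}")
    return "run-0001"   # a run ID the scientist could poll for results and metadata

run_id = submit(protocol)
```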

To develop the system, Strateos’ engineers had to “connect the dots, that is, connect the lab automation to the web,” rather than dramatically push technology’s envelope, Fischer-Colbrie explains, “bringing the concepts of web services and the sharing economy to the life sciences.”

Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long that it didn’t seem like there could be a real substitute for physically being in the lab.

“It’s frictionless science, giving scientists the ability to concentrate on their ideas and hypotheses.”

Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego, and by July it plans to integrate these with eight chemistry robots that will, according to a press release, “physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform. The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform.”

Some of the capacity will be used exclusively by Lilly scientists, but, Fischer-Colbrie says, Strateos has capped that usage and will sell lab capacity beyond the cap to others. It currently prices biological assays per plate and will price chemical reactions per compound.

The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales.

It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.

Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary, or even possible, during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. “Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner,” he said.

And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don’t have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.

Says Alok Gupta, Strateos senior vice president of engineering, “This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work.”

“It’s frictionless science,” says CEO Fischer-Colbrie, “giving scientists the ability to concentrate on their ideas and hypotheses.”


We’ve been writing about the musical robots from Georgia Tech’s Center for Music Technology for many, many years. Over that time, Gil Weinberg’s robots have progressed from being able to dance along to music that they hear, to being able to improvise along with it, to now being able to compose, play, and sing completely original songs.

Shimon, the marimba-playing robot that has performed in places like the Kennedy Center, will be going on a new tour to promote an album that will be released on Spotify next month, featuring songs written (and sung) entirely by the robot.

Deep learning is famous for producing results that seem like they sort of make sense, but actually don’t at all. Key to Shimon’s composing ability is its semantic knowledge—the ability to make thematic connections between things, which is a step beyond just throwing some deep learning at a huge database of music composed by humans (although that’s Shimon’s starting point, a dataset of 50,000 lyrics from jazz, prog rock, and hip-hop). So rather than just training a neural network that relates specific words that tend to be found together in lyrics, Shimon can recognize more general themes and build on them to create a coherent piece of music.

Fans of Shimon may have noticed that the robot has had its head almost completely replaced. It may be tempting to say “upgraded,” since the robot now has eyes, eyebrows, and a mouth, but I’ll always have a liking for Shimon’s older design, which had just one sort of abstract eye thing (that functions as a mouth on the current design). Personally, I very much appreciate robots that are able to be highly expressive without resorting to anthropomorphism, but in its new career as a pop sensation, I guess having eyes and a mouth are, like, important, or something?

To find out more about Shimon’s new talents (and new face), we spoke with Georgia Tech professor Gil Weinberg and his PhD student Richard Savery.

IEEE Spectrum: What makes Shimon’s music fundamentally different from music that could have been written by a human? 

Richard Savery: Shimon’s musical knowledge is drawn from training on huge datasets of lyrics, around 20,000 prog rock songs and another 20,000 jazz songs. With this level of data, Shimon is able to draw on far more sources of inspiration than a human would ever be able to. At a fundamental level, Shimon is able to take in huge amounts of new material very rapidly, so within a day it can change from focusing on jazz lyrics to hip-hop to prog rock, or a hybrid combination of them all.

How much human adjustment is involved in developing coherent melodies and lyrics with Shimon?

Savery: Just like working with a human collaborator, there are many different ways Shimon can interact. Shimon can perform a range of musical tasks, from composing a full song by itself to just playing a part composed by a human. For the new album we focused on human-robot collaboration, so every song has some elements that were created by a human and some by Shimon. Rather than adjusting Shimon’s output, we try to have a musical dialogue, where we get inspired by and build on Shimon’s creations. Like any band, each of us has our own strengths and weaknesses; in our case, no one else writes lyrics, so it was natural for Shimon to take responsibility for them. As a lyricist, there are a few ways Shimon can work. First, Shimon can be given some keywords or ideas, like “earth” and “humanity,” and then generate a full song of lyrics around those words. In addition to keywords, Shimon can also take a melody and write lyrics that fit over it.

The press release mentions that Shimon is able to “decide what’s good.” What does that mean?

Savery: When Shimon writes lyrics, the first step is generating thousands of phrases. So for those keywords, Shimon will generate lots of material about “earth,” and then also generate related synonyms and antonyms like “world” and “ocean.” Like a human composer, Shimon has to parse through lots of ideas to choose what’s good from the creations. Shimon has preferences toward maintaining the same sentiment, or gradually shifting sentiment, as well as trying to keep rhymes going between lines. For Shimon, good lyrics should rhyme, keep some core thematic ideas going, maintain a similar sentiment, and have some similarity to existing lyrics.
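To picture that generate-then-rank process, here’s a minimal sketch of how it might look in code. Everything in it, from the candidate generator to the crude rhyme check and the placeholder sentiment scorer, is invented for illustration; Shimon’s actual system relies on deep networks trained on tens of thousands of songs.

```python
import random

# Related words a real system would pull from a trained model; hardcoded here.
THEMES = {"earth": ["earth", "world", "ocean", "ground"]}

def generate_candidates(keyword: str, n: int = 1000) -> list[str]:
    """Generate lots of rough candidate lines around a keyword and its relatives."""
    words = THEMES.get(keyword, [keyword])
    templates = ["the {w} keeps turning in my mind", "we carry the {w} as we go"]
    return [random.choice(templates).format(w=random.choice(words)) for _ in range(n)]

def rhymes(a: str, b: str) -> bool:
    """Crude rhyme check: compare the endings of the final words."""
    return a.split()[-1][-2:] == b.split()[-1][-2:]

def sentiment(line: str) -> float:
    """Placeholder: a real system would score sentiment with a trained model."""
    return 0.5

def score(line: str, previous: str) -> float:
    s = 1.0 if rhymes(line, previous) else 0.0        # keep rhymes going between lines
    s -= abs(sentiment(line) - sentiment(previous))   # keep the sentiment steady
    return s

previous = "we watch the world unwind"
best = max(generate_candidates("earth"), key=lambda line: score(line, previous))
print(best)
```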

I would guess that Shimon’s voice could have been almost anything—why choose this particular voice?

Gil Weinberg: Since we did not have singing-voice-synthesis expertise in our Robotic Musicianship group at Georgia Tech, we looked to collaborate with other groups. The Music Technology Group at Pompeu Fabra University developed a remarkable deep-learning-based singing voice synthesizer and was excited to collaborate. As part of the process, we sent them audio files of songs recorded by one of our students to be used as a dataset to train their neural network. In the end, we decided to use another voice that was trained on a different dataset, since we felt it better represented Shimon’s genderless personality and was a better fit for the melodic register of our songs.

“We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us” —Gil Weinberg, Georgia Tech

Can you tell us about the changes made to Shimon’s face?

Weinberg: We are big fans of avoiding exaggerated anthropomorphism and using too many degrees of freedom in our robots. We feel that this might push robots into the uncanny valley. But after much deliberation, we decided that a singing robot should have a mouth to represent the embodiment of singing and to look believable. It was important to us, though, not to add DoFs for this purpose, but rather to replace the old eye DoF with a mouth to minimize complexity. Originally, we thought to repurpose both DoFs of the old eye (bottom eyelid and top eyelid) to represent a top lip and a bottom lip. But we felt this might be too anthropomorphic, and that it would be more challenging and interesting to use only one DoF to automatically control mouth size based on the lyrics’ phonemes. For this purpose, we looked at examples as varied as parrot vocalization and Muppets animation to learn how animals and animators go about mouth actuation. Once we were happy with what we developed, we decided to use the old top-eyelid DoF as an eyebrow, to add more emotion to Shimon’s expression.
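One way to picture that single-DoF mouth control is as a lookup from phonemes to mouth openings, as in the hypothetical sketch below. The opening values and the set_servo() function are invented; per Weinberg, the real mapping was informed by parrot vocalization and Muppets animation.

```python
# Rough mouth openings per phoneme, from 0.0 (closed) to 1.0 (wide open).
# All values are invented for illustration.
MOUTH_OPENING = {
    "AA": 1.0, "AE": 0.8, "IY": 0.3, "UW": 0.4,   # vowels open the mouth
    "M": 0.0, "B": 0.0, "P": 0.0,                 # bilabial consonants close it
    "S": 0.2, "T": 0.2, "REST": 0.0,
}

def set_servo(position: float) -> None:
    """Stand-in for the command that drives the single mouth motor."""
    print(f"mouth DoF -> {position:.2f}")

def animate(phonemes: list[str]) -> None:
    for ph in phonemes:
        set_servo(MOUTH_OPENING.get(ph, 0.5))     # default to half-open for unknowns

animate(["M", "AA", "M", "IY", "REST"])           # e.g., mouthing "ma-mee"
```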

Are you able to take advantage of any inherently robotic capabilities of Shimon?

Weinberg: One of the most important new features of the new Shimon, in addition to its singing and songwriting capabilities, is a total redesign of its striking arms. As part of the process we replaced the old solenoid-based actuators with new brushless DC motors that support much faster striking (up to 30 hits per second) as well as a wider and more linear dynamic range—from very soft pianissimo to much louder fortissimo. This not only allows for much richer musical expression but also supports the ability to create new, humanly impossible timbres and sonorities using the eight new virtuosic actuators. We hope and believe that these new abilities will push human collaborators in new, uncharted directions that could not be achieved in human-to-human collaboration.

How do you hope audiences will react to Shimon?

Weinberg: We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us to listen to, play, and think about music in new ways.

What are you working on next?

Weinberg: We are currently working on new capabilities that would allow Shimon to listen to, understand, and respond to lyrics in real time. The first genre we are exploring for this functionality is rap battles. We plan to release a new album on Spotify on April 10th, featuring songs where Shimon not only sings but also raps in real time.

[ Georgia Tech ]

As much as we love soft robots (and we really love soft robots), the vast majority of them operate pneumatically (or hydraulically) at larger scales, especially when they need to exert significant amounts of force. This causes complications, because pneumatics and hydraulics generally require a pump somewhere to move fluid around, so you often see soft robots tethered to external and decidedly non-soft power sources. There’s nothing wrong with this, really, because there are plenty of challenges that you can still tackle that way, and there are some up-and-coming technologies that might result in soft pumps or gas generators.

Researchers at Stanford have developed a new kind of (mostly) soft robot based around a series of compliant, air-filled tubes. It’s human scale, moves around, doesn’t require a pump or tether, is more or less as safe as large robots get, and even manages to play a little bit of basketball.

Image: Stanford/Science Robotics

Stanford’s soft robot consists of a set of identical robotic roller modules mounted onto inflated fabric tubes (A). Each module pinches the fabric tube between its rollers, creating an effective joint (B) that can be relocated by driving the rollers. The roller modules actuate the robot by driving along the tube, simultaneously lengthening one edge while shortening another (C). The roller modules connect to each other at nodes using three-degree-of-freedom universal joints, each composed of a clevis joint coupling two rods that are free to spin about their axes (D). The robot moves untethered outdoors using a rolling gait (E).

This thing looks a heck of a lot like the tensegrity robots that NASA Ames has been working on forever, and which are now being commercialized (hopefully?) by Squishy Robotics. Stanford’s model is not technically a tensegrity robot, though, because it doesn’t use structural components that are under tension (like cables). The researchers refer to this kind of robot as “isoperimetric,” which means while discrete parts of the structure may change length, the overall length of all the parts put together stays the same. This means it’s got a similar sort of inherent compliance across the structure to tensegrity robots, which is one of the things that makes them so appealing. 

While the compliance of Stanford’s robot comes from a truss-like structure made of air-filled tubes, its motion relies on powered movable modules. These modules pinch the tube they’re located on between two cylindrical rollers (without creating a seal), and driving the rollers moves the module back and forth along the tube, effectively making one section of the tube longer and the other shorter. Although this is just one degree of freedom, having a whole bunch of tubes, each with an independently controlled roller module, means that the robot as a whole can exhibit complex behaviors, like drastic shape changes, movement, and even manipulation.
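The isoperimetric constraint itself is simple enough to capture in a toy example: a roller module pinches a tube of fixed total length into two edges, and driving the rollers moves the pinch point, lengthening one edge by exactly as much as it shortens the other. The numbers below are made up for illustration.

```python
# Toy model of one tube of Stanford's robot. The tube's total length L is
# fixed (no air is pumped in or out); a roller module at pinch_position
# splits it into two edges whose lengths always sum to L.

L = 2.0  # total tube length in meters (made-up value)

def edge_lengths(pinch_position: float) -> tuple[float, float]:
    assert 0.0 <= pinch_position <= L, "module can't drive off the tube"
    return pinch_position, L - pinch_position

# Driving the roller module along the tube relocates the effective joint:
for p in (0.5, 1.0, 1.5):
    a, b = edge_lengths(p)
    print(f"edges: {a:.1f} m + {b:.1f} m = {a + b:.1f} m (total unchanged)")
```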

There are numerous advantages to a design like this. You get all the advantages of pneumatic robots (compliance, flexibility, collapsibility, durability, a high strength-to-weight ratio) without requiring some way of constantly moving air around, since the volume of air inside the robot stays constant. Each individual triangular module is self-contained (with one tube, two active roller modules, and one passive anchor module) and easy to combine with similar modules—the video shows an octahedron, but you can easily add or subtract modules to make a variety of differently shaped robots with different capabilities.

Since the robot is inherently so modular, there are all kinds of potential applications for this thing, as the researchers speculate in a paper published today in Science Robotics:

The compliance and shape change of the robot could make it suitable for several tasks involving humans. For example, the robot could work alongside workers, holding parts in place as the worker bolts them in place. In the classroom, the modularity and soft nature of the robotic system make it a potentially valuable educational tool. Students could create many different robots with a single collection of hardware and then physically interact with the robot. By including a much larger number of roller modules in a robot, the robot could function as a shape display, dynamically changing shape as a sort of high–refresh rate 3D printer. Incorporating touch-sensitive fabric into the structure could allow users to directly interact with the displayed shapes. More broadly, the modularity allows the same hardware to build a diverse family of robots—the same roller modules can be used with new tube routings to create new robots. If the user needed a robot to reach through a long, narrow passageway, they could assemble a chain-like robot; then, for a locomoting robot, they could reassemble into a spherical shape.

Image: Farrin Abbott

I’m having trouble picturing some of that stuff, but the rest of it sounds like fun.

We’re obligated to point out that because of the motorized roller modules, this soft robot is really only semi-soft, and you could argue that it’s not fundamentally all that much better than hydraulic or pneumatic soft robots with embedded rigid components like batteries and pumps. Calling this robot “inherently human-safe,” as the researchers do, might be overselling it slightly, in that it has hard edges, pokey bits, and what look to be some serious finger-munchers. It does sound like there might be some potential to replace the roller modules with something softer and more flexible, which will be a focus of future work.

“An untethered isoperimetric soft robot,” by Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, and Sean Follmer from Stanford University and UCSB, was published in Science Robotics.

Editor’s Note: When we asked Rodney Brooks if he’d write an article for IEEE Spectrum on his definition of robot, he wrote back right away. “I recently learned that Warren McCulloch”—one of the pioneers of computational neuroscience—“wrote sonnets,” Brooks told us. “He, and your request, inspired me. Here is my article—a little shorter than you might have desired.” Included in his reply were 14 lines composed in iambic pentameter. Brooks titled it “What Is a Robot?” Later, after a few tweaks to improve the metric structure of some of the lines, he added, “I am no William Shakespeare, but I think it is now a real sonnet, if a little clunky in places.”

What Is a Robot?*
By Rodney Brooks

Shall I compare thee to creatures of God?
Thou art more simple and yet more remote.
You move about, but still today, a clod,
You sense and act but don’t see or emote.

You make fast maps with laser light all spread,
Then compare shapes to object libraries,
And quickly plan a path, to move ahead,
Then roll and touch and grasp so clumsily.

You learn just the tiniest little bit,
And start to show some low intelligence,
But we, your makers, Gods not, we admit,
All pledge to quest for genuine sentience.

    So long as mortals breathe, or eyes can see,
    We shall endeavor to give life to thee.

* With thanks to William Shakespeare

Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K. [CANCELED]
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Having robots learn dexterous tasks requiring real-time hand-eye coordination is hard. Many tasks that we would consider simple, like hanging up a baseball cap on a rack, would be very challenging for most robot software. What’s more, for a robot to learn each new task, it typically takes significant amounts of engineering time to program the robot. Pete Florence and Lucas Manuelli, of MIT’s Robot Locomotion Group, took a step closer to solving this problem with their work.

[ Paper ]

Octo-Bouncer is not a robot that bounces an octopus. But it’s almost as good. Almost.

[ Electron Dust ]

D’Kitty (pronounced as “The Kitty”) is a 12-degree-of-freedom platform for exploring learning-based techniques in locomotion and it’s adooorable!

[ D’Kitty ]

Knightscope Autonomous Security Robot meets Tesla Model 3 in Summon Mode! See, nothing to fear, Elon. :-)

The robots also have a message for us:

[ Knightscope ]

If you missed the robots vs. humans match at RoboCup 2019, here are the highlights.

[ Tech United ]

Fraunhofer developed this cute little demo of autonomously navigating, cooperating mobile robots executing a miniaturized logistics scenario involving chocolate for the LogiMAT trade show. Which was canceled. But enjoy the video!

[ Fraunhofer ]

Thanks Thilo!

Drones can potentially be used for taking soil samples in awkward areas by dropping darts equipped with accelerometers. But the really clever bit is how the drone can retrieve the dart on its own.

[ UH ]

Rope manipulation is one of those human-easy robot-hard things that’s really, really robot-hard.

[ UC Berkeley ]

Autonomous landing on a moving platform presents unique challenges for multirotor vehicles, including the need to accurately localize the platform, fast trajectory planning, and precise/robust control. This work presents a fully autonomous vision-based system that addresses these limitations by tightly coupling the localization, planning, and control, thereby enabling fast and accurate landing on a moving platform. The platform’s position, orientation, and velocity are estimated by an extended Kalman filter using simulated GPS measurements when the quadrotor-platform distance is large, and by a visual fiducial system when the platform is nearby. To improve the performance, the characteristics of the turbulent conditions are accounted for in the controller. The landing trajectory is fast, direct, and does not require hovering over the platform, as is typical of most state-of-the-art approaches. Simulations and hardware experiments are presented to validate the robustness of the approach.
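The measurement handover is the part that’s easiest to sketch in code. Here’s a toy, scalar version of the idea (our illustration, not the MIT code): a Kalman filter fuses whichever fix is available, using the noisy GPS-like source when the platform is far away and the much cleaner fiducial fix up close. The noise levels and switchover distance below are invented for the example.

```python
import numpy as np

def kf_update(x, P, z, R):
    """Scalar Kalman measurement update with a unit measurement model."""
    K = P / (P + R)                     # Kalman gain
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 10.0                        # platform position estimate, variance
R_GPS, R_FIDUCIAL = 4.0, 0.01           # measurement noise variances (assumed)
SWITCH_DIST = 5.0                       # assumed handover range, in meters

rng = np.random.default_rng(1)
for true_pos, dist in [(2.0, 20.0), (2.1, 8.0), (2.2, 3.0), (2.3, 1.0)]:
    if dist > SWITCH_DIST:              # far away: coarse GPS-like fix
        z, R = true_pos + rng.normal(0, 2.0), R_GPS
    else:                               # close in: visual fiducial fix
        z, R = true_pos + rng.normal(0, 0.1), R_FIDUCIAL
    x, P = kf_update(x, P, z, R)
    print(f"dist={dist:4.1f} m  estimate={x:5.2f}  variance={P:.3f}")
```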

[ MIT ACL ]

And now, this.

[ Soft Robotics ]

The EPRI (Electric Power Research Institute) recently worked with Exyn Technologies, a pioneer in autonomous aerial robot systems, for a safety and data collection demonstration at Exelon’s Peach Bottom Atomic Power Station in Pennsylvania. Exyn’s drone was able to autonomously inspect components in elevated, hard-to-access areas, search for temperature anomalies, and collect dose rate surveys in radiological areas—without the need for a human operator.

[ Exyn ]

Thanks Zach!

Relax: Pepper is here to help with all of your medical problems.

[ Softbank ]

Amir Shapiro at BGU, along with Yoav Golan (whose work on haptic control of dogs we covered last year), has developed an interesting new kind of robotic finger with passively adjustable friction.

[ Paper ] via [ BGU ]

Thanks Andy!

UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is expected to offer healthcare services to children at Sint Maartenskliniek in the Netherlands. Before deployment, three of the robots were trained in exercise, empathy, and cognition capabilities.

[ UBTECH ]

Get ready for CYBATHLON, postponed to September 2020!

[ Cybathlon ]

In partnership with the World Mosquito Program (WMP), WeRobotics has led the development and deployment of a drone-based release mechanism that has been shown to help reduce the incidence of dengue fever.

[ WeRobotics ]

Sadly, koalas today face a dire outlook across Australia due to human development, droughts, and forest fires. Events like these and a declining population make conservation and research more important than ever. Drones offer a more efficient way to count koalas from above, covering more ground than was possible in the past. Dr. Hamilton and his team at the Queensland University of Technology use DJI drones to count koalas, using the data obtained to better help these furry friends from down under.

[ DJI ]

Fostering the Next Generation of Robotics Startups | TC Sessions: Robotics

Robotics and AI are the future of many or most industries, but the barrier to entry remains difficult to surmount for many startups. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.

[ TechCrunch ]

The absolute best way of dealing with the coronavirus pandemic is to just not get coronavirus in the first place. By now, you’ve (hopefully) had all of the strategies for doing this drilled into your skull—wash your hands, keep away from large groups of people, wash your hands, stay home when sick, wash your hands, avoid travel when possible, and please, please wash your hands.

At the top of the list of the places to avoid right now are hospitals, because that’s where all the really sick people go. But for healthcare workers, and the sick people themselves, there’s really no other option. To prevent the spread of coronavirus (and everything else) through hospitals, keeping surfaces disinfected is incredibly important, but it’s also dirty, dull, and (considering what you can get infected with) dangerous. And that’s why it’s an ideal task for autonomous robots.

Photo: UVD Robots The robots can travel through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge.

UVD Robots is a Danish company making robots that are able to disinfect patient rooms and operating theaters in hospitals. They’re able to disinfect pretty much anything you point them at—each robot is a mobile array of powerful short wavelength ultraviolet-C (UVC) lights that emit enough energy to literally shred the DNA or RNA of any microorganisms that have the misfortune of being exposed to them.

The company’s robots have been operating in China for the past two or three weeks, and UVD Robots CEO Per Juul Nielsen says they are sending more to China as fast as they can. “The initial volume is in the hundreds of robots; the first ones went to Wuhan where the situation is the most severe,” Nielsen told IEEE Spectrum. “We’re shipping every week—they’re going air freight into China because they’re so desperately needed.” The goal is to supply the robots to over 2,000 hospitals and medical facilities in China.

UV disinfecting technology has been around for something like a century, and it’s commonly used to disinfect drinking water. You don’t see it much outside of fixed infrastructure because you have to point a UV lamp directly at a surface for a couple of minutes in order to be effective, and since it can cause damage to skin and eyes, humans have to be careful around it. Mobile UVC disinfection systems are a bit more common—UV lamps on a cart that a human can move from place to place to disinfect specific areas, like airplanes. For large environments like a hospital with dozens of rooms, operating UV systems manually can be costly and have mixed results—humans can inadvertently miss certain areas, or not expose them long enough.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis.” —Per Juul Nielsen, UVD Robots

UVD Robots spent four years developing a robotic UV disinfection system, which it started selling in 2018. The robot consists of a mobile base equipped with multiple lidar sensors and an array of UV lamps mounted on top. To deploy a robot, you drive it around once using a computer. The robot scans the environment using its lidars and creates a digital map. You then annotate the map indicating all the rooms and points the robot should stop to perform disinfecting tasks.

After that, the robot relies on simultaneous localization and mapping (SLAM) to navigate, and it operates completely on its own. It’ll travel from its charging station, through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge. For safety, the robot operates when people are not around, using its sensors to detect motion and shutting the UV lights off if a person enters the area.
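Schematically, that mission loop is simple. Here’s a rough sketch of the visit-dwell-interlock cycle described above—our illustration, not UVD Robots’ software, with a stand-in Robot class for the real platform’s navigation and lamp interfaces.

```python
import time

class Robot:  # stand-in for the real platform's navigation and lamp APIs
    def navigate_to(self, pose): print("moving to", pose)
    def lamps_on(self): print("UV lamps on")
    def lamps_off(self): print("UV lamps off")
    def motion_detected(self): return False  # pretend nobody walks in

def disinfect_room(robot, poses, dwell_s=90):
    """Visit each annotated pose, dwell with the lamps on, and kill the
    lamps immediately if motion (a person) is detected."""
    for pose in poses:
        robot.navigate_to(pose)              # SLAM-based point-to-point move
        robot.lamps_on()
        start = time.monotonic()
        while time.monotonic() - start < dwell_s:
            if robot.motion_detected():      # safety interlock
                robot.lamps_off()
                return False                 # abort; room not finished
            time.sleep(0.1)
        robot.lamps_off()
    return True

print(disinfect_room(Robot(), [(1.0, 2.0), (3.0, 0.5)], dwell_s=0.3))
```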

It takes between 10 and 15 minutes to disinfect a typical room, with the robot spending 1 or 2 minutes in five or six different positions around the room to maximize the number of surfaces that it disinfects. The robot’s UV array emits 20 joules per square meter per second (at 1 meter distance) of 254-nanometer light—an irradiance of 20 watts per square meter—which will utterly wreck 99.99 percent of germs in just a few minutes without the robot having to do anything more complicated than just sit there. The process is more consistent than human cleaning, since the robot follows the same path each time, and its autonomy means that human staff can be freed up for more interesting tasks, like interacting with patients.
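Those numbers are enough for a sanity check. Published UVC doses for a 4-log (99.99 percent) reduction of many common pathogens are on the order of tens to a couple hundred joules per square meter; the exact value is organism-specific, so the 200 J/m² used below is just an assumed round number for illustration.

```python
# Back-of-the-envelope exposure time for a 4-log kill at 1 meter.
irradiance = 20.0      # W/m^2 at 1 m, per the article (20 J/m^2 per second)
required_dose = 200.0  # J/m^2, an assumed round 4-log dose for illustration

print(f"{required_dose / irradiance:.0f} s per surface at 1 m")  # 10 s

# A 1-2 minute dwell per position therefore leaves generous margin for
# surfaces well beyond 1 m, where the delivered irradiance falls off.
```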

Originally, the robots were developed to address hospital-acquired infections, which are a significant problem globally. According to Nielsen, between 5 and 10 percent of hospital patients worldwide will acquire a new infection while in the hospital, and tens of thousands of people die from these infections every year. The goal of the UVD robots was to help hospitals prevent these infections in the first place.

Photo: UVD Robots A shipment of robots from UVD Robots arrives at a hospital in Wuhan, where the first coronavirus cases were reported in December.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis,” Nielsen says. “That’s why there’s a big need for our robots all over the world now, because they can be used in fighting coronavirus, and for fighting all of the other infections that are still there.”

The robots, which cost between US $80,000 and $90,000, are relatively affordable for medical equipment, and as you might expect, recent interest in them has been substantial. “Once [hospitals] see it, it’s a no-brainer,” Nielsen says. “If they want this type of disinfection solution, then the robot is much smarter and more cost-effective than what’s available in the market today.” Hundreds of these robots are at work in more than 40 countries, and they’ve recently completed hospital trials in Florida. Over the next few weeks, they’ll be tested at other medical facilities around the United States, and Nielsen points out that they could be useful in schools, cruise ships, or any other relatively structured spaces. I’ll take one for my apartment, please.

[ UVD Robots ]

Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.

This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.

The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant. In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.
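The hot-spot idea is easy to sketch. Below is a toy version (ours, not WeBank’s pipeline; the band values, threshold, and plant footprints are all invented) that flags plants whose short-wave-infrared pixels run hot and reports the active fraction as a crude capacity index.

```python
import numpy as np

def activity_index(swir_scenes, plant_masks, hot_threshold=0.6):
    """Fraction of known plant footprints showing a heat signature."""
    active = 0
    for scene, mask in zip(swir_scenes, plant_masks):
        if scene[mask].mean() > hot_threshold:  # furnace heat visible
            active += 1
    return active / len(plant_masks)

rng = np.random.default_rng(0)
scenes, masks = [], []
for i in range(10):
    scene = rng.random((64, 64)) * 0.5          # cool background pixels
    mask = np.zeros((64, 64), dtype=bool)
    mask[20:30, 20:30] = True                   # the plant's footprint
    if i < 3:                                   # 3 of 10 plants running
        scene[mask] += 0.5                      # hot pixels over the plant
    scenes.append(scene)
    masks.append(mask)

print(f"estimated capacity in use: {activity_index(scenes, masks):.0%}")
# ~30 percent -- the same ballpark as the early-outbreak figure above
```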

The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, were still shut down.

Images: WeBank

Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the number of commuters on a given day in 2019 and its corresponding date in 2020, starting with Chinese New Year. In both cases, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
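The commuter analysis boils down to counting unique devices classified as commuters and taking a year-over-year ratio on holiday-aligned dates. Here’s a minimal pandas sketch of that comparison (our illustration; the column names and data are invented):

```python
import pandas as pd

def recovery_curve(df):
    """Fraction of 2019's commuters seen commuting in 2020, per day,
    with day 0 aligned to Chinese New Year in both years."""
    counts = df.groupby(["year", "days_after_cny"]).phone_id.nunique()
    c2019, c2020 = counts.loc[2019], counts.loc[2020]
    return (c2020 / c2019).rename("fraction_back_at_work")

# One row per phone classified as a commuter on a given day (toy data).
df = pd.DataFrame({
    "year":           [2019]*4 + [2020]*4,
    "days_after_cny": [7, 7, 14, 14, 7, 7, 14, 14],
    "phone_id":       ["a", "b", "a", "b", "a", "a", "a", "b"],
})
print(recovery_curve(df))  # day 7: 0.5, day 14: 1.0
```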

Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of those in Wuhan, will be back to work by the end of March. Economic growth in the first quarter, their study indicated, will take a 36 percent hit.

Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile, e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. In 2017, and right through early 2019, GM Cruise talked about launching in 2019. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it used just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach drivers and, in the worst cases, to save them from their own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of the Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merged into the roof. Inside are two lidars from Luminar, a stereocamera, a mono-camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grill or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology. 

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”

A new sensor for robots is designed to make our physical interactions with these machines a little smoother—and safer. The sensor, which is now being commercialized, allows robots to measure the distance and angle of approach of a human or object in close proximity.

Industrial robots often work autonomously to complete tasks. But increasingly, collaborative robots are working alongside humans. To avoid collisions in these circumstances, collaborative robots need highly accurate sensors to detect when someone (or something) is getting a little too close.

Many sensors have been developed for this purpose, each with its own advantages and disadvantages. Those that rely on sound and light (for example, infrared or ultrasonic time-of-flight sensors) measure the reflections of those signals and must therefore be closely aligned with the approaching object, which limits their field of detection.

Photos: Aidin Robotics

To circumvent this problem, a group of researchers in South Korea created a new proximity sensor that measures impedance. It works by inducing electric and magnetic fields with a wide angle. When a human approaches the sensor, their body causes changes in resistance within those fields. The sensor measures the changes and uses that data to inform the robot of the person’s distance and angle of approach. The researchers describe their design in a study published 26 February in IEEE Transactions on Industrial Electronics. It has since been commercialized by Aidin Robotics.
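The paper describes the sensing principle rather than a public API, but the geometry of the estimate can be sketched. In the hypothetical example below (ours, not Aidin Robotics’ algorithm), each of three electrodes reports an impedance shift; a direction-weighted sum of electrode axes gives the approach angle, and an assumed inverse calibration maps the total shift to distance.

```python
import numpy as np

ELECTRODE_ANGLES = np.deg2rad([0, 120, 240])  # assumed 3-electrode layout

def estimate_approach(impedance_shifts, calib_gain=30.0):
    """Return (distance_cm, angle_deg) from per-electrode impedance shifts.
    Both mappings are invented calibrations for illustration only."""
    s = np.asarray(impedance_shifts, dtype=float)
    # Angle: direction of the shift-weighted average of electrode axes.
    x = np.sum(s * np.cos(ELECTRODE_ANGLES))
    y = np.sum(s * np.sin(ELECTRODE_ANGLES))
    angle = np.degrees(np.arctan2(y, x))
    # Distance: assumed inverse relation between total shift and range.
    distance = calib_gain / max(s.sum(), 1e-6)
    return distance, angle

print(estimate_approach([1.5, 0.3, 0.2]))  # mostly toward electrode 0, ~15 cm
```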

Read this article for free on IEEE Xplore until 08 April 2020.

The sensor is made of electrodes with a flexible, coil-like design. “Since the sensor is highly flexible, it can be manufactured in various shapes tailored to the geometries of the robot,” explains Yoon Haeng Lee, CEO of Aidin Robotics. “Moreover, it is able to classify the materials of the approaching objects such as human, metals, and plastics.”

Tests show that the sensor can detect humans from up to 30 centimeters away. It has an accuracy of 90 percent when on a flat surface. However, the electric and magnetic fields become weaker and more dispersed when the sensor is laid over a curved surface. Therefore, the sensor’s accuracy decreases as the underlying surface becomes increasingly curved.

Every robot is different, and the sensor’s performance may change based on a specific robot’s characteristics. The latest version of the integrated sensor module, when installed on a curved surface, can detect objects from up to 20 centimeters away with an accuracy of 94 percent.

Lee says the device is already being used in some collaborative robot models, including the UR10 (by Universal Robots) and Indy7 (by Neuromeka Inc.). “In the future, the sensor module will be mass-produced and applied to the other service robots, as well as collaborative and industrial robots, to contribute to the truly safe work and coexistence of robots and humans,” he says.

This article appears in the May 2020 print issue as “A Proximity Sensor for Robots.”

Dr. Arthur Kreitenberg and his son Elliot got some strange looks when they began the design work for the GermFalcon, a new machine that uses ultraviolet light to wipe out coronavirus and other germs inside an airplane. The father-son founders of Dimer UVC took tape measures with them on flights to unobtrusively record the distances that would form the key design constraints for their system.

“We definitely got lots of looks from passengers and lots of inquiries from flight attendants,” Dr. Kreitenberg recalls. “You can imagine that would cause some attention: taking out a tape measure midflight and measuring armrests. The truth is that when we explained to the flight attendants what we were doing and what we were designing, they [were] really excited about it.”

Perhaps that shouldn’t be surprising. In these days of coronavirus concerns, airline attendants work in what must seem like an aluminum-encased biohazard site.

Image: Dimer UVC

GermFalcon uses a set of mercury lamps to bathe the airline cabin, bathrooms, and galley in ultraviolet-C light. Unlike UV-A and UV-B, that 200 to 280 nanometer wavelength doesn’t reach the surface of the Earth from the sun, because it’s strongly absorbed by oxygen and ozone in the atmosphere. And that’s a good thing, because it’s like kryptonite to DNA. Drawing 100 amps from a lithium-iron-phosphate battery pack, GermFalcon’s mercury lamps’ output is so strong that the company claims the system can wipe out flu viruses from an entire narrow-body plane in about three minutes: one pass up the aisle, one pass down the aisle, and a minute for the bathrooms and galley.
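A quick power budget falls out of those figures (our arithmetic; the pack voltage is an assumption, since the company quotes only the current): at 100 amps, a typical LiFePO4 pack voltage puts the lamps in the low-kilowatt range, and a three-minute cabin costs only a bit over a tenth of a kilowatt-hour.

```python
# Rough power budget implied by the figures above. The 25.6 V nominal
# pack voltage is an assumed eight-cell LiFePO4 value, not a Dimer spec.
current_a = 100.0
pack_voltage_v = 25.6
lamp_power_kw = current_a * pack_voltage_v / 1000      # ~2.6 kW
cabin_minutes = 3.0                                    # per narrow-body

energy_kwh = lamp_power_kw * cabin_minutes / 60
print(f"~{lamp_power_kw:.1f} kW draw, ~{energy_kwh:.2f} kWh per cabin")
```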

Flu prevention was the original inspiration for GermFalcon. Dr. Arthur Kreitenberg, an orthopaedic surgeon with a background in mechanical engineering, was already familiar with UV-C sterilization, because of its use in operating rooms. “Our motivation was to take it outside of the hospital into other areas where people are concerned about germs,” he says. With SARS and MERS and annual influenza, it seemed clear that airplanes are a major mode of transmission. It was also clear that nobody was effectively disinfecting aircraft.

Many of the chemicals you’d use in a hospital are not approved for use on an aircraft, Kreitenberg points out. And some of the ones that are, aren’t nearly as effective or practical as assumed. (Stop for a minute and look at the actual directions for disinfecting a surface with a Lysol Wipe, then try to imagine doing that on a plane. Go ahead. I’ll wait.)

Photo: Dimer UVC

The key design constraints for bringing UV-C sterilization into air travel were geometry, time, and power. The Kreitenbergs needed to know how much room their system had to move up and down the aisles without bashing into seats, armrests, restroom doors, and overhead bins. They also needed to know what surfaces were the most germ-ridden (the top of the seat back, as you might expect), something they discovered by swabbing surfaces on about a dozen flights. And from those data points, they had to figure out the proper power and position of the UV lamps that would allow them to sterilize an aircraft in a matter of minutes. “Time is a big constraint as well. The airlines want us on and off the airplane as quick as possible,” he says.

“I wish I could tell you we solved it all mathematically,” says Kreitenberg. “But the truth is we went out to the airplane graveyard in Mojave, California and bought a couple rows of airplane seats and overhead bins, put [UV] meters on them, smeared them with bacteria, and did cultures.”

It took four or five iterations to get it right. “It turns out there are a lot of different airplane configurations,” he says.

Initially, the pair envisioned GermFalcon as a robot, but that made the design challenges multiply. “Robotics are easier said than done, even just going up and down an airplane,” he says. Sensors weren’t hardy enough and needed frequent recalibration, and the motor drives were heavy and energy consuming. The robotics consumed about a year of their development time before they decided to abandon that path in favor of a human protected by shielding.

Photo: Dimer UVC

Lacking a suitable lab for such a dangerous germ, Dimer UVC hasn’t tested the system on the virus that causes COVID-19. But Kreitenberg expects it will be as susceptible to UV-C as influenza and other germs are. The dose can be easily adjusted by slowing GermFalcon’s roll down the aisle. The company has offered GermFalcon’s services free of charge to airlines operating from a handful of U.S. airports.

While Dimer UVC waits for airlines to take up its offer, it’s gotten involved in another attempt to robotize aerospace interiors. The company is part of a team building a UV-C sterilization robot for the International Space Station. “It’ll basically work like a Roomba and skim the surface of the space station,” says Kreitenberg, a former finalist astronaut candidate.

Because it can get so close to the station’s surfaces, the zero-G death-ray Roomba the team is working on can use UV-C LEDs instead of the power-hungry mercury lamps of GermFalcon. Kreitenberg says he would be much happier using LEDs, if they could reach the needed power. “All of our power constraints and a lot of other constraints will be solved when there is an effective UV-C LED,” he says. Looking at the progress companies have made in that area over the last five years, he’s “optimistic” that GermFalcon will be able to switch to using only LEDs.

Swarms of small, inexpensive robots are a compelling research area in robotics. With a swarm, you can often accomplish tasks that would be impractical (or impossible) for larger robots to do, in a way that’s much more resilient and cost effective than larger robots could ever be.

The tricky thing is getting a swarm of robots to work together to do what you want them to do, especially if what you want them to do is a task that’s complicated or highly structured. It’s not too bad if you have some kind of controller that can see all the robots at once and tell them where to go, but that’s a luxury that you’re not likely to find outside of a robotics lab.

Researchers at Northwestern University, in Evanston, have been working on a way to provide decentralized control for a swarm of 100 identically programmed small robots, which allows them to collectively work out a way to transition from one shape to another without running into each other even a little bit.

The process that the robots use to figure out where to go seems like it should be mostly straightforward: They’re given a shape to form, so each robot picks its goal location (where it wants to end up as part of the shape), and then plans a path to get from where it is to where it needs to go, following a grid pattern to make things a little easier. But using this method, you immediately run into two problems: First, since there’s no central control, you may end up with two (or more) robots with the same goal; and second, there’s no way for any single robot to path plan all the way to its goal in a way that it can be certain won’t run into another robot.

To solve these problems, the robots all talk to each other as they move, not just to avoid colliding with their friends, but also to figure out where their friends are going and whether it might be worth swapping destinations. Since the robots are all the same, they don’t really care where exactly they end up, as long as all of the goal positions are filled up. And if one robot talks to another robot and they agree that a goal swap would result in both of them having to move less, they go ahead and swap. The algorithm makes sure that all goal positions are filled eventually, and also helps robots avoid running into each other through judicious use of a “wait” command.
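To make the swap rule concrete, here’s a toy, centralized re-creation of it (the real algorithm is fully distributed and message-based; this sketch only shows the criterion): two robots trade goals whenever the trade shortens their combined Manhattan travel, and since every swap strictly decreases total distance, the process must terminate.

```python
def manhattan(p, q):
    """Grid distance, matching the robots' grid-constrained motion."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def swap_goals(positions, goals):
    """Let pairs of robots trade goals whenever the trade shortens their
    combined remaining travel; terminates because the total distance
    strictly decreases with every swap."""
    improved = True
    while improved:
        improved = False
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                old = manhattan(positions[i], goals[i]) + manhattan(positions[j], goals[j])
                new = manhattan(positions[i], goals[j]) + manhattan(positions[j], goals[i])
                if new < old:
                    goals[i], goals[j] = goals[j], goals[i]
                    improved = True
    return goals

robots = [(0, 0), (5, 0)]
goals = [(5, 1), (0, 1)]          # crossed assignment: 12 grid steps total
print(swap_goals(robots, goals))  # [(0, 1), (5, 1)] -- one swap untangles it
```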

What’s really novel about this approach is that despite the fully distributed nature of the algorithm, it’s also provably correct, and will result in the guaranteed formation of an entire shape without collisions or deadlocks. As far as the researchers know, it’s the first algorithm to do this. And it means that since it’s effective with no centralized control at all, you can think of “the swarm” as a sort of Borg-like collective entity of its own, which is pretty cool.

The Northwestern researchers behind this are Michael Rubenstein, assistant professor of electrical engineering and computer science, and his PhD student Hanlin Wang. You might remember Mike from his work on Kilobots at Harvard, which we wrote about in 2011, 2013, and again in 2014, when Mike and his fellow researchers managed to put together a thousand (!) of them. As awesome as it is to have a thousand robots, once you start thinking about what it takes to charge, fix, and modify them all, it makes sense why they’ve updated the platform a bit (now called Coachbot) and reduced the swarm size to 100 physical robots, making up the rest in simulation.

These robots, we’re told, are “much better behaved.”

Image: Northwestern University

The hardware used by the researchers in their experiments. 1. The Coachbot V2.0 mobile robots (height of 12 cm and a diameter of 10 cm) are equipped with a localization system based on the HTC Vive (a), Raspberry Pi b+ computer (b), electronics motherboard (c), and rechargeable battery (d). The robot arena used in experiments has an overhead camera only used for recording videos (e) and an overhead HTC Vive base station (f). The experiments relied on a swarm of 100 robots (g). 2. The Coachbot V2.0 swarm communication network consists of an ethernet connection between the base station and a Wi-Fi router (green link), TCP/IP connections (blue links), and layer 2 broadcasting connections (black links). 3. A swarm of 100 robots. 4. The robots recharge their batteries by connecting to two metal strips attached to the wall.

For more details on this work, we spoke with Mike Rubenstein via email.

IEEE Spectrum: Why switch to the new hardware platform instead of Kilobots?

Mike Rubenstein: We wanted to make a platform more capable and extendable than Kilobot, and improve on lessons learned with Kilobot. These robots have far better locomotion capabilities than Kilobot, and include absolute position sensing, which makes operating the robots easier. They have truly “hands free” operation. For example, with Kilobot, to start an experiment you had to place the robots in their starting positions by hand (sometimes taking an hour or two), while with these robots, a user just specifies a set of positions for all the robots and presses the “go” button. With Kilobot it was also hard to see what the state of all the robots was; for example, it was difficult to see if 999 robots were powered on or 1,000 robots were powered on. These new robots send state information back to a user display, making it easy to understand the full state of the swarm.
 
How much of a constraint is grid-ifying the goal points and motion planning?

The grid constraint obviously makes motion less efficient, as the robots must move in Manhattan-type paths rather than straight-line paths, so most of the time they move a bit farther. The reason we constrain the motions to a discrete grid is that it makes the robot algorithm less computationally complex, and reasoning about collisions and deadlock becomes a lot easier, which allowed us to provide guarantees that the shape will form successfully.

Image: Northwestern University

Still images of a 100 robot shape formation experiment. The robots start in a random configuration, and move to form the desired “N” shape. Once this shape is formed, they then form the shape “U.” The entire sequence is fully autonomous. (a) T = 0 s; (b) T = 20 s; (c) T = 64 s; (d) T = 72 s; (e)  T = 80 s; (f) T = 112 s.

Can you tell us about those couple of lonely wandering robots at the end of the simulated “N” formation in the video?

In our algorithm, we don’t assign goal locations to all the robots at the start, they have to figure out on their own which robot goes where. The last few robots you pointed out happened to be far away from the goal location the swarm figured they should have. Instead of having that robot move around the whole shape to its goal, you see a subset of robots all shift over by one to make room for the robot in the shape closer to its current position.
 
What are some examples of ways in which this research could be applied to real-world useful swarms of robots?

One example could be the shape formation in modular self-reconfigurable robots. The hope is that this shape formation algorithm could allow these self-reconfigurable systems to automatically change their shape in a simple and reliable way. Another example could be warehouse robots, where robots need to move to assigned goals to pick up items. This algorithm would help them move quickly and reliably.
 
What are you working on next?

I’m looking at trying to understand how to enable large groups of simple individuals to behave in a controlled and reliable way as a group. I’ve started looking at this question in a wide range of settings; from swarms of ground robots, to reconfigurable robots that attach together by melting conductive plastic, to swarms of flying vehicles, to satellite swarms. 

Shape Formation in Homogeneous Swarms Using Local Task Swapping,” by Hanlin Wang and Michael Rubenstein from Northwestern, is published in IEEE Transactions on Robotics. < Back to IEEE Journal Watch
aside.inlay.xlrg.XploreFree { font-family: "Georgia", serif; border-width: 4px 0; border-top: solid #888; border-bottom: solid #888; padding: 10px 0; font-size: 19px; font-weight: bold; text-align: center; } span.FreeRed { color: red; text-transform: uppercase; font-family: "Theinhardt-Medium", sans-serif; } span.XploreBlue { color: #03a6e3; font-family: "Theinhardt-Medium", sans-serif; }

Swarms of small, inexpensive robots are a compelling research area in robotics. With a swarm, you can often accomplish tasks that would be impractical (or impossible) for larger robots to do, in a way that’s much more resilient and cost effective than larger robots could ever be.

The tricky thing is getting a swarm of robots to work together to do what you want them to do, especially if what you want them to do is a task that’s complicated or highly structured. It’s not too bad if you have some kind of controller that can see all the robots at once and tell them where to go, but that’s a luxury that you’re not likely to find outside of a robotics lab.

Researchers at Northwestern University, in Evanston, have been working on a way to provide decentralized control for a swarm of 100 identically programmed small robots, which allows them to collectively work out a way to transition from one shape to another without running into each other even a little bit.

The process that the robots use to figure out where to go seems like it should be mostly straightforward: They’re given a shape to form, so each robot picks its goal location (where it wants to end up as part of the shape), and then plans a path to get from where it is to where it needs to go, following a grid pattern to make things a little easier. But using this method, you immediately run into two problems: First, since there’s no central control, you may end up with two (or more) robots with the same goal; and second, there’s no way for any single robot to path plan all the way to its goal in a way that it can be certain won’t run into another robot.

To solve these problems, the robots all talk to each other as they move, not just to avoid colliding with their friends, but also to figure out where their friends are going and whether it might be worth swapping destinations. Since the robots are all the same, they don’t really care where exactly they end up, as long as all of the goal positions are filled up. And if one robot talks to another robot and they agree that a goal swap would result in both of them having to move less, they go ahead and swap. The algorithm makes sure that all goal positions are filled eventually, and also helps robots avoid running into each other through judicious use of a “wait” command.
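To make that concrete, here is a toy Python sketch (ours, not the researchers’ code) of the two core moves described above: greedy goal swapping between robots in communication range, and a “wait” rule when the next grid cell is occupied. It is centrally simulated, every name and parameter in it is invented, and it has none of the machinery that makes the real algorithm provably deadlock-free.

```python
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def step_toward(pos, goal):
    """One grid step along a Manhattan path (x first, then y)."""
    x, y = pos
    if x != goal[0]:
        return (x + (1 if goal[0] > x else -1), y)
    if y != goal[1]:
        return (x, y + (1 if goal[1] > y else -1))
    return pos  # already at goal

def simulate(starts, goals, comm_range=2, max_ticks=500):
    pos, goal = list(starts), list(goals)  # goal[i] is robot i's current claim
    for tick in range(max_ticks):
        # 1. Local task swapping: nearby robots trade goal claims when
        #    the trade strictly reduces their combined travel distance.
        for i in range(len(pos)):
            for j in range(i + 1, len(pos)):
                if manhattan(pos[i], pos[j]) <= comm_range:
                    before = manhattan(pos[i], goal[i]) + manhattan(pos[j], goal[j])
                    after = manhattan(pos[i], goal[j]) + manhattan(pos[j], goal[i])
                    if after < before:
                        goal[i], goal[j] = goal[j], goal[i]
        # 2. Motion: take one grid step, or "wait" if the cell is occupied.
        occupied = set(pos)
        for i in range(len(pos)):
            nxt = step_toward(pos[i], goal[i])
            if nxt == pos[i] or nxt not in occupied:
                occupied.discard(pos[i])
                occupied.add(nxt)
                pos[i] = nxt
        if all(p == g for p, g in zip(pos, goal)):
            return tick + 1, pos
    return None, pos  # this toy version has no convergence guarantee

if __name__ == "__main__":
    starts = [(6, 6), (0, 4), (8, 7), (6, 4), (7, 5)]  # five distinct cells
    goals = [(x, 0) for x in range(5)]                 # form a line along y = 0
    ticks, final = simulate(starts, goals)
    print("ticks:", ticks, "final:", final)
```

Even this stripped-down version shows why swapping is safe: the robots are interchangeable, so trading goal claims never breaks the shape, and a trade only happens when it strictly reduces total travel.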

What’s really novel about this approach is that despite the fully distributed nature of the algorithm, it’s also provably correct, and will result in the guaranteed formation of an entire shape without collisions or deadlocks. As far as the researchers know, it’s the first algorithm to do this. And it means that since it’s effective with no centralized control at all, you can think of “the swarm” as a sort of Borg-like collective entity of its own, which is pretty cool.

The Northwestern researchers behind this are Michael Rubenstein, assistant professor of electrical engineering and computer science, and his PhD student Hanlin Wang. You might remember Mike from his work on Kilobots at Harvard, which we wrote about in 2011, 2013, and again in 2014, when Mike and his fellow researchers managed to put together a thousand (!) of them. As awesome as it is to have a thousand robots, once you start thinking about what it takes to charge, fix, and modify a thousand robots (a thousand robots!), it makes sense why they’ve updated the platform a bit (now called Coachbot) and reduced the swarm size to 100 physical robots, making up the rest in simulation.

These robots, we’re told, are “much better behaved.”

Image: Northwestern University

The hardware used by the researchers in their experiments. 1. The Coachbot V2.0 mobile robots (height of 12 cm and a diameter of 10 cm) are equipped with a localization system based on the HTC Vive (a), Raspberry Pi b+ computer (b), electronics motherboard (c), and rechargeable battery (d). The robot arena used in experiments has an overhead camera only used for recording videos (e) and an overhead HTC Vive base station (f). The experiments relied on a swarm of 100 robots (g). 2. The Coachbot V2.0 swarm communication network consists of an ethernet connection between the base station and a Wi-Fi router (green link), TCP/IP connections (blue links), and layer 2 broadcasting connections (black links). 3. A swarm of 100 robots. 4. The robots recharge their batteries by connecting to two metal strips attached to the wall.

For more details on this work, we spoke with Mike Rubenstein via email.

IEEE Spectrum: Why switch to the new hardware platform instead of Kilobots?

Mike Rubenstein: We wanted to make a platform more capable and extendable than Kilobot, and improve on lessons learned with Kilobot. These robots have far better locomotion capabilities than Kilobot, and include absolute position sensing, which makes operating the robots easier. They have truly “hands-free” operation. For example, with Kilobot, to start an experiment you had to place the robots in their starting positions by hand (sometimes taking an hour or two), while with these robots a user just specifies a set of positions for all the robots and presses the “go” button. With Kilobot it was also hard to see what the state of all the robots was; for example, it was difficult to tell whether 999 robots or 1,000 robots were powered on. These new robots send state information back to a user display, making it easy to understand the full state of the swarm.
 
How much of a constraint is grid-ifying the goal points and motion planning?

The grid constraint obviously makes motion less efficient, as the robots must move along Manhattan-type paths rather than straight-line paths, so most of the time they move a bit farther. The reason we constrain the motion to a discrete grid is that it makes the robot algorithm less computationally complex, and reasoning about collisions and deadlock becomes a lot easier, which allowed us to provide guarantees that the shape will form successfully.
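For a sense of how much farther: a grid move of (dx, dy) takes |dx| + |dy| steps, while the straight-line distance is √(dx² + dy²), so a Manhattan path is at most √2 ≈ 1.41 times longer, with pure diagonals being the worst case. A quick illustrative check (our numbers, not from the paper):

```python
import math

def overhead(dx, dy):
    """Ratio of Manhattan path length to straight-line distance."""
    return (abs(dx) + abs(dy)) / math.hypot(dx, dy)

print(overhead(10, 0))   # 1.0    -- axis-aligned moves lose nothing
print(overhead(10, 10))  # ~1.414 -- pure diagonals are the worst case
print(overhead(10, 3))   # ~1.245 -- typical moves fall in between
```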

Image: Northwestern University

Still images of a 100 robot shape formation experiment. The robots start in a random configuration, and move to form the desired “N” shape. Once this shape is formed, they then form the shape “U.” The entire sequence is fully autonomous. (a) T = 0 s; (b) T = 20 s; (c) T = 64 s; (d) T = 72 s; (e)  T = 80 s; (f) T = 112 s.

Can you tell us about those couple of lonely wandering robots at the end of the simulated “N” formation in the video?

In our algorithm, we don’t assign goal locations to all the robots at the start; they have to figure out on their own which robot goes where. The last few robots you pointed out happened to be far away from the goal locations the swarm figured they should have. Instead of having one of those robots travel around the whole shape to its goal, a subset of robots all shift over by one to make room for it at a spot in the shape closer to its current position.
 
What are some examples of ways in which this research could be applied to real-world useful swarms of robots?

One example could be the shape formation in modular self-reconfigurable robots. The hope is that this shape formation algorithm could allow these self-reconfigurable systems to automatically change their shape in a simple and reliable way. Another example could be warehouse robots, where robots need to move to assigned goals to pick up items. This algorithm would help them move quickly and reliably.
 
What are you working on next?

I’m looking at trying to understand how to enable large groups of simple individuals to behave in a controlled and reliable way as a group. I’ve started looking at this question in a wide range of settings: from swarms of ground robots, to reconfigurable robots that attach together by melting conductive plastic, to swarms of flying vehicles, to satellite swarms.

“Shape Formation in Homogeneous Swarms Using Local Task Swapping,” by Hanlin Wang and Michael Rubenstein from Northwestern, is published in IEEE Transactions on Robotics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

NASA Curiosity Project Scientist Ashwin Vasavada guides this tour of the rover’s view of the Martian surface. Composed of more than 1,000 images and carefully assembled over the ensuing months, the larger version of this composite contains nearly 1.8 billion pixels of Martian landscape.

This panorama showcases "Glen Torridon," a region on the side of Mount Sharp that Curiosity is exploring. The panorama was taken between Nov. 24 and Dec. 1, 2019, when the Curiosity team was out for the Thanksgiving holiday. Since the rover would be sitting still with few other tasks to do while it waited for the team to return and provide its next commands, the rover had a rare chance to image its surroundings several days in a row without moving.

[ MSL ]

Sarcos has been making progress with its Guardian XO powered exoskeleton, which we got to see late last year in prototype stage:

The Sarcos Guardian XO full-body, powered exoskeleton is a first-of-its-kind wearable robot that enhances human productivity while keeping workers safe from strain or injury. Set to transform the way work gets done, the Guardian XO exoskeleton augments operator strength without restricting freedom of movement to boost productivity while dramatically reducing injuries.

[ Sarcos ]

Professor Hooman Samani, director of the Artificial Intelligence and Robotics Technology Laboratory (AIART Lab) at National Taipei University, Taiwan, writes in to share some ideas on how robots could be used to fight the coronavirus outbreak. 

Time is a critical issue when dealing with people affected by the coronavirus. Due to the current emergency, doctors may also be far away from patients, and avoiding direct contact with an infected person is a medical priority. Immediate monitoring and treatment using specific kits must be administered to the victim. We have designed and developed the Ambulance Robot (AmbuBot), which could be a solution to address those issues. AmbuBot could be placed in various locations, especially in busy, remote, or quarantined areas, to assist in the above-mentioned scenarios. The AmbuBot also brings along an AED for the sudden event of cardiac arrest, and facilitates various modes of operation, from manual to semi-autonomous to fully autonomous functioning.

[ AIART Lab ]

IEEE Spectrum is interested in exploring how robotics and related technologies can help to fight the coronavirus (COVID-19) outbreak. If you are involved with actual deployments of robots to hospitals and high-risk areas, or have experience working with robots, drones, or other autonomous systems designed for this kind of emergency, please contact IEEE Spectrum senior editor Erico Guizzo (e.guizzo@ieee.org).

Digit is launching later this month alongside a brand new sim that’s a 1:1 match to both the API and physics of the actual robot. Here, we show off the ability to train a learned policy against the validated physics of the robot. We have a LOT more to say about RL with real hardware... stay tuned.

Staying tuned!

[ Agility Robotics ]

This video presents simulations and experiments highlighting the functioning of the proposed Trapezium Line Theta* planner, as well as its improvements over our previous work namely the Obstacle Negotiating A* planner. First, we briefly present a comparison of our previous and new planners. We then show two simulations. The first shows the robot traversing an inclined corridor to reach a goal near the low-lying obstacle. This demonstrates the omnidirectional and any-angle motion planning improvement achieved by the new planner, as well as the independent planning for the front and back wheel pairs. The second simulation further demonstrates the key improvements mentioned above by having the robot traverse tight right-angled corridors. Finally, we present two real experiments on the CENTAURO robot. In the first experiment, the robot has to traverse into a narrow passage and then expand over a low lying obstacle. The second experiment has the robot first expand over a wide obstacle and then move into a narrow passage.

To be presented at ICRA 2020.

[ Dimitrios Kanoulas ]

We’re contractually obligated to post any video with “adverse events” in the title.

[ JHU ]

Waymo advertises their self-driving system in this animated video that features a robot car making a right turn without indicating. Also pretty sure that it ends up in the wrong lane for a little bit after a super wide turn and blocks a crosswalk to pick up a passenger. Oops!

I’d still ride in one, though.

[ Waymo ]

Exyn is building the world’s most advanced autonomous aerial robots. Today, we launched our latest capability, Scoutonomy. Our pilotless robot can now ‘scout’ freely within a desired volume, such as a tunnel, or this parking garage. The robot sees the white boxes as ‘unknown’ space, and flies to explore them. The orange boxes are mapped obstacles. It also intelligently avoids obstacles in its path and identifies objects, such as people or cars. Scoutonomy can be used to safely and quickly find survivors in natural or man-made disasters.

[ Exyn ]

I don’t know what soma blocks are, but this robot is better with them than I am.

This work presents a planner that can automatically find an optimal assembly sequence for a dual-arm robot to assemble soma blocks. The planner uses the mesh models of the objects and the final state of the assembly to generate all possible assembly sequences, and evaluates the optimal assembly sequence by considering stability, graspability, and assemblability, as well as the need for a second arm. In particular, the need for a second arm is considered when support from worktables and other workpieces is not enough to produce a stable assembly.

[ Harada Lab ]
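The enumerate-then-score structure of the planner above is easy to picture in miniature. Here is a toy sketch in that spirit; it is not the Harada Lab implementation, and the criteria functions, penalty weight, and block names are all invented for illustration:

```python
# Toy multi-criteria assembly-sequence scoring: enumerate every ordering
# of the pieces, score each on placeholder criteria, keep the best.
from itertools import permutations

BLOCKS = ["A", "B", "C"]

def stability(seq):        # placeholder: prefer placing base block "A" first
    return 1.0 if seq[0] == "A" else 0.3

def graspability(seq):     # placeholder: pretend "C" is hard to grasp late
    return 0.5 if seq[-1] == "C" else 1.0

def needs_second_arm(seq): # placeholder: one particular ending needs support
    return tuple(seq[-2:]) == ("B", "C")

def score(seq):
    s = stability(seq) * graspability(seq)
    if needs_second_arm(seq):
        s *= 0.8           # penalize sequences that tie up both arms
    return s

best = max(permutations(BLOCKS), key=score)
print(best, score(best))
```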

Semantic grasping is the problem of selecting stable grasps that are functionally suitable for specific object manipulation tasks. In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for. We introduce the Context-Aware Grasping Engine, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns. We quantitatively validate our approach against three prior methods on a novel dataset consisting of 14,000 semantic grasps for 44 objects, 7 tasks, and 6 different object states. Our approach outperformed all baselines by statistically significant margins, producing new insights into the importance of balancing memorization and generalization of contexts for semantic grasping. We further demonstrate the effectiveness of our approach on robot experiments in which the presented model successfully achieved 31 of 32 suitable grasps.

[ RAIL Lab ]
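For readers unfamiliar with the Wide & Deep structure the abstract mentions: it pairs a linear “wide” path over sparse cross-features, which memorizes specific context combinations, with a “deep” embedding MLP that generalizes to unseen ones. Here is a minimal PyTorch sketch of that shape; the feature sizes and context fields are invented for illustration, and this is not the RAIL Lab implementation:

```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Minimal Wide & Deep: a linear path over sparse cross-features
    (memorization) plus an embedding MLP over categorical context
    (generalization). All sizes below are made up for illustration."""

    def __init__(self, n_cross_features=500, n_categories=60, embed_dim=16):
        super().__init__()
        self.wide = nn.Linear(n_cross_features, 1)          # memorize
        self.embed = nn.Embedding(n_categories, embed_dim)  # generalize
        self.deep = nn.Sequential(
            nn.Linear(embed_dim * 3, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, cross_onehot, context_ids):
        # context_ids: (batch, 3), e.g. [object, task, object_state]
        deep_in = self.embed(context_ids).flatten(start_dim=1)
        score = self.wide(cross_onehot) + self.deep(deep_in)
        return torch.sigmoid(score)  # suitability of a candidate grasp

model = WideAndDeep()
cross = torch.zeros(1, 500); cross[0, 42] = 1.0  # one active cross-feature
ctx = torch.tensor([[3, 1, 5]])                  # object=3, task=1, state=5
print(model(cross, ctx))                         # suitability in (0, 1)
```

The design tension the paper highlights, balancing memorization against generalization, shows up directly in this structure: the wide path can latch onto exact object-task-state combinations seen in training, while the deep path interpolates between them.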

I’m not totally convinced that bathroom cleaning is an ideal job for autonomous robots at this point, just because of the unstructured nature of a messy bathroom (if not of the bathroom itself). But this startup is giving it a shot anyway.

The cost target is $1,000 per month.

[ Somatic ] via [ TechCrunch ]

IHMC is designing, building, and testing a mobility assistance research device named Quix. The main function of Quix is to restore mobility to those stricken with lower limb paralysis. In order to achieve this the device has motors at the pelvis, hips, knees, and ankles and an onboard computer controlling the motors and various sensors incorporated into the system.

[ IHMC ]

In this major advance for mind-controlled prosthetics, U-M research led by Paul Cederna and Cindy Chestek demonstrates an ultra-precise prosthetic interface technology that taps faint latent signals from nerves in the arm and amplifies them to enable real-time, intuitive, finger-level control of a robotic hand.

[ University of Michigan ]

Coral reefs represent only 1% of the seafloor, but are home to more than 25% of all marine life. Reefs are declining worldwide. Yet, critical information remains unknown about basic biological, ecological, and chemical processes that sustain coral reefs because of the challenges to access their narrow crevices and passageways. A robot that grows through its environment would be well suited to this challenge as there is no relative motion between the exterior of the robot and its surroundings. We design and develop a soft growing robot that operates underwater and take a step towards navigating the complex terrain of a coral reef.

[ UCSD ]

What goes on inside those package lockers, apparently.

[ Dorabot ]

In the future robots could track the progress of construction projects. As part of the MEMMO H2020 project, we recently carried out an autonomous inspection of the Costain High Speed Rail site in London with our ANYmal robot, in collaboration with Edinburgh Robotics.

[ ORI ]

Soft Robotics technology enables seafood handling at high speed even with amorphous products like mussels, crab legs, and lobster tails.

[ Soft Robotics ]

Pepper and Nao had a busy 2019:

[ SoftBank Robotics ]

Chris Atkeson, a professor at the Robotics Institute at Carnegie Mellon University, watches a variety of scenes featuring robots from movies and television and breaks down how accurate their depictions really are. Would the Terminator actually have dialogue options? Are the "three laws" from I, Robot a real thing? Is it actually hard to erase a robot’s memory (a la Westworld)?

[ Chris Atkeson ] via [ Wired ]

This week’s CMU RI Seminar comes from Anca Dragan at UC Berkeley, on “Optimizing for Coordination With People.”

From autonomous cars to quadrotors to mobile manipulators, robots need to co-exist and even collaborate with humans. In this talk, we will explore how our formalism for decision making needs to change to account for this interaction, and dig our heels into the subtleties of modeling human behavior — sometimes strategic, often irrational, and nearly always influenceable. Towards the end, I’ll try to convince you that every robotics task is actually a human-robot interaction task (its specification lies with a human!) and how this view has shaped our more recent work.

[ CMU RI ]
