Feed aggregator

When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

Yang is impressed by the different robotic systems being deployed as part of the COVID-19 response. There are robots checking patients for fever, robots disinfecting hospitals, and robots delivering medicine and food. But he thinks robotics can do even more.

Photo: Shanghai Jiao Tong University Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.

“Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak,” he says. While the robots currently being used rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.

“What I fear is that, there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”

Yang calls for a global effort to tackle the problem. “In terms of the way to move forward, I think we need to be more coordinated globally,” he says. “Because many of the challenges require that we work collectively to deal with them.”

Our full conversation, edited for clarity and length, is below.

IEEE Spectrum: How is the situation in Shanghai?

Guang-Zhong Yang: I came back to Shanghai about 10 days ago, via Hong Kong, so I’m now under self-imposed isolation in a hotel room just to be cautious, for two weeks. The general feeling in Shanghai is that it’s really calm and orderly. Everything seems well under control. And as you probably know, in recent days the number of new cases is steadily dropping. So the main priority for the government is to restore normal routines, and also for companies to go back to work. Of course, people are still very cautious, and there are systematic checks in place. In my hotel, for instance, I get checked twice a day for my temperature to make sure that all the people in the hotel are well.

Are most people staying inside, are the streets empty?

No, the streets are not empty. In fact, in Minhang, next to Shanghai Jiao Tong University, things are going back to normal. Not at full capacity, but stores and restaurants are gradually opening. And people are thinking about the essential travel they need to do and what they can do remotely. As you know, in China we have very good online ordering and delivery services, so people use them a lot more. I was really impressed by how the whole thing got under control.

Has Shanghai Jiao Tong University switched to online classes?

Yes. Since last week, the students are attending online lectures. The university has 1449 courses for undergrads and 657 for graduate students. I participated in some of them. It’s really well run. You can have the typical format with a presenter teaching the class, but you can also have part of the lecture with the students divided into groups and having discussions. Of course what’s really affected is laboratory-based work. So we’ll need to wait for some more time to get back into action.

What do you think of the robots being used to help fight the outbreak?

I’ve seen reports showing a variety of robots being deployed. Disinfection robots that use UV light in hospitals. Drones being used for transporting samples. There’s a prototype robot, developed by the Chinese Academy of Sciences, to remotely collect oropharyngeal swabs from patients for testing, so a medical worker doesn’t have to directly swab the patient. In my hotel, there’s a robot that brings my meals to my door. This little robot can manage to get into the lift, go to your room, and call you to open the door. I’m a roboticist myself and I find it striking how well this robot works every time! [Laughs.]

Photo: UVD Robots UVD Robots has shipped hundreds of ultraviolet-C disinfection robots like the one above to Chinese hospitals. 

After Japan’s Fukushima nuclear emergency, the robotics community realized that it needed to be better prepared. It seems that we’ve made progress with disaster-response robots, but what about dealing with pandemics?

I think that for events involving infectious diseases, like this coronavirus outbreak, when they happen, everybody realizes the importance of robots. The challenge is that at most research institutions, people are more concerned with specific research topics, and that’s indeed the work of a scientist—to dig deep into the scientific issues and solve those specific problems. But we also need to have a global view to deal with big challenges like this pandemic.

So I think what we need to do, starting now, is to have a more systematic effort to make sure those robots can be deployed when we need them. We just need to recompose ourselves and work to identify the technologies that are ready to be deployed, and what are the key directions we need to pursue. There’s a lot we can do. It’s not too late. Because this is not going to disappear. We have to see the worst before it gets better.

So what should we do to be better prepared?

After a major crisis, when everything is under control, people’s priority is to go back to their normal routines. The last thing on people’s minds is, What should we do to prepare for the next crisis? And the thing is, you can’t predict when the next crisis will happen. So I think we need three levels of action, and it really has to be a global effort. One is at the government level, in particular funding agencies: how to make sure we can plan ahead and prepare for the worst.

Another level is the robotics community, including organizations like the IEEE: we need leadership to advocate for these issues and promote activities like robotics challenges. We see challenges for disasters, logistics, drones—how about a robotics challenge for infectious diseases? I was surprised, and a bit disappointed in myself, that we didn’t think about this before. So for the editorial board of Science Robotics, for instance, this will become an important topic for us to rethink.

And the third level is our interaction with front-line clinicians—our interaction with them needs to be stronger. We need to understand the requirements and not be obsessed with pure technologies, so we can ensure that our systems are effective, safe, and can be rapidly deployed. I think that if we can mobilize and coordinate our effort at all these three levels, that would be transformative. And we’ll be better prepared for the next crisis.

Are there projects taking place at the Institute of Medical Robotics that could help with this pandemic?

The institute has been in full operation for just over a year now. We have three main areas of research: The first is surgical robotics, which is my main area of research. The second is rehabilitation and assistive robots. The third is hospital and laboratory automation. One important lesson that we learned from the coronavirus is that, if we can detect and intervene early, we have a better chance of containing it. And for other diseases, it’s the same. For cancer, early detection based on imaging and other sensing technologies is critical. So that’s something we want to explore—how robotics, including technologies like laboratory automation, can help with early detection and intervention.

“One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks”

One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks. Some tasks could be performed remotely by medical personnel, while other tasks could be fully automated. A lot of the technologies that we already use in surgical robotics can be translated into this area. We’re hoping to work with other institutions and share our expertise to continue developing this further. Indeed, this technology is not just for emergency situations. It will also be useful for routine management of infectious disease patients. We really need to rethink how hospitals are organized in the future to avoid unnecessary exposure and cross-infection.

Photo: Shanghai Jiao Tong University Shanghai Jiao Tong University’s Institute of Medical Robotics is researching areas like micro/nano systems, surgical and rehabilitation robotics, and human-robot interaction.

I’ve seen some recent headlines—“China’s tech fights back,” “Coronavirus is the first big test for futuristic tech”—many people expect technology to save the day.

When there’s a major crisis like this pandemic, in the general public’s mind, people want to find a magic cure that will solve all the problems. I completely understand that expectation. But technology can’t always do that, of course. What technology can do is to help us to be better prepared. For example, it’s clear that in the last few years self-navigating robots with localization and mapping are becoming a mature technology, so we should see more of those used for situations like this. I’d also like to see more technologies developed for front-line management of patients, like the robotic ICU I mentioned earlier. Another area is public transportation systems—can they have an element of disease prevention, using technology to minimize the spread of diseases so that lockdowns are only imposed as a last resort?

And then there’s the problem of people being isolated. You probably saw that Italy has imposed a total lockdown. That could have a major psychological impact, particularly for people who are vulnerable and living alone. There is one area of robotics, called social robotics, that could play a part in this as well. I’ve been in this hotel room by myself for days now—I’m really starting to feel the isolation…

We should have done a Zoom call.

Yes, we should. [Laughs.] I guess this isolation, or quarantine for various people, also provides the opportunity for us to reflect on our lives, our work, our daily routines. That’s the silver lining that we may see from this crisis.

Photo: Unity Drive Innovation Unity Drive, a startup spun out of Hong Kong University of Science and Technology, is deploying self-driving vehicles to carry out contactless deliveries in three Chinese cities.

While some people say we need more technology during emergencies like this, others worry that companies and governments will use things like cameras and facial recognition to increase surveillance of individuals.

A while ago we published an article in Science Robotics listing the 10 grand challenges for robotics. One of the grand challenges is concerned with legal and ethical issues, which include what you mentioned in your question. Respecting privacy, and also being sensitive about individual and citizens’ rights—these are very, very important. Because we must operate within this legal and ethical boundary. We should not use technologies that will intrude in people’s lives. You mentioned that some people say that we don’t have enough technology, and that others say we have too much. And I think both have a point. What we need to do is to develop technologies that are appropriate to be deployed in the right situation and for the right tasks.

Many researchers seem eager to help. What would you say to roboticists interested in helping fight this outbreak or prepare for the next one?

For medical robotics research, my experience is that for your technology to be effective, it has to be application oriented. You need to ensure that end users, like the clinicians who will use your robot or, in the case of assistive robots, the patients, are deeply involved in the development of the technology. And the second thing is really to think outside the box—how to develop radically different new technologies. Because robotics research is very hands-on, there’s a tendency to adapt what’s readily available out there. For your technology to have a major impact, you need to fundamentally rethink your research and innovation, not just follow the waves.

For example, at our institute we’re investing a lot of effort in the development of micro and nano systems and also new materials that could one day be used in robots. Because for micro robotic systems, we can’t rely on the more traditional approach of using motors and gears that we use in larger systems. So my suggestion is to work on technologies that not only have a deep science element but can also become part of a real-world application. Only then can we be sure to have strong technologies to deal with future crises.

We present a reinforcement learning (RL)-based control scheme for trajectory tracking of fully actuated surface vessels. The proposed method learns online both a model-based feedforward controller and an optimizing feedback policy in order to follow a desired trajectory under the influence of environmental forces. The method’s efficiency is evaluated via simulations and sea trials, with the unmanned surface vehicle (USV) ReVolt performing three different tracking tasks: the four-corner DP test, straight-path tracking, and curved-path tracking. The results demonstrate the method’s ability to accomplish the control objectives and show good agreement between the performance achieved with the ReVolt digital twin and in the sea trials. Finally, we include a section with considerations about assurance for RL-based methods and where our approach stands in terms of the main challenges.
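
For readers who want a concrete picture of the control structure described above (a model-based feedforward term combined with a learned feedback correction), here is a minimal Python sketch. The vessel model, gains, and stand-in policy are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np

# Sketch only: feedforward from a nominal vessel model plus a feedback policy
# (learned online in the paper; a fixed linear stand-in here).

class TrackingController:
    def __init__(self, mass_matrix, damping, feedback_policy):
        self.M = mass_matrix          # 3x3 inertia matrix (surge, sway, yaw)
        self.D = damping              # 3x3 linear damping matrix
        self.policy = feedback_policy # maps tracking error -> corrective force

    def feedforward(self, vel_ref, acc_ref):
        # Inverse-dynamics-style feedforward from the nominal model
        return self.M @ acc_ref + self.D @ vel_ref

    def control(self, pose_err, vel_err, vel_ref, acc_ref):
        tau_ff = self.feedforward(vel_ref, acc_ref)
        tau_fb = self.policy(np.concatenate([pose_err, vel_err]))
        return tau_ff + tau_fb

# Example usage with a trivial placeholder policy.
controller = TrackingController(
    mass_matrix=np.diag([120.0, 170.0, 50.0]),
    damping=np.diag([40.0, 60.0, 20.0]),
    feedback_policy=lambda err: -2.0 * err[:3] - 1.0 * err[3:],
)
tau = controller.control(
    pose_err=np.array([0.5, -0.2, 0.05]),
    vel_err=np.array([0.1, 0.0, 0.01]),
    vel_ref=np.array([1.0, 0.0, 0.0]),
    acc_ref=np.zeros(3),
)
print(tau)  # commanded forces/moment in surge, sway, yaw
```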

Motor skill learning for dental implantation surgery is difficult for novices because it involves fine manipulation of different dental tools to carry out a strictly predefined procedure. Haptics-enabled virtual reality training systems provide a promising tool for surgical skill learning. In this paper, we introduce a haptic rendering algorithm for simulating diverse tool-tissue contact constraints during dental implantation. The motion of an implant tool can be summarized as high-degree-of-freedom (H-DoF) motion and low-degree-of-freedom (L-DoF) motion. In the H-DoF state, the tool can move freely on the bone surface and in free space with 6 DoF, while in the L-DoF state, the motion degrees are restrained by the constraints imposed by the implant bed. We propose a state-switching framework that simplifies the simulation workload by rendering the H-DoF and L-DoF motion states separately and switching seamlessly between the two states, using an implant criterion as the switching judgment. We also propose a virtual constraint method to render the L-DoF motion, which differs from ordinary drilling procedures because the tool must obey different axial constraint forms, including sliding, drilling, screwing, and perforating. The virtual constraint method adapts efficiently and accurately to these different constraint forms and consists of three core steps: defining the movement axis, projecting the configuration difference, and deriving the movement control ratio. The H-DoF motion on the bone surface and in free space is simulated through the previously proposed virtual coupling method. Experimental results show that the proposed method can simulate the 16 phases of the complete implant procedure for the Straumann® Bone Level (BL) Implant Φ4.8–L12 mm. According to the output force curves, different contact constraints could be rendered with steady and continuous output force during the operation.
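
As a rough illustration of the state-switching idea in this abstract, the sketch below switches from free (H-DoF) motion to constrained (L-DoF) motion once a hypothetical implant criterion is met, then projects tool motion onto the implant-bed axis. The geometry, capture threshold, and stiffness values are invented for illustration and are not the authors’ algorithm.

```python
import numpy as np

H_DOF, L_DOF = "free", "constrained"

class ImplantToolSim:
    def __init__(self, bed_entry, bed_axis, capture_radius=0.002, k=800.0):
        self.state = H_DOF
        self.entry = np.asarray(bed_entry, float)   # implant bed entry point (m)
        self.axis = np.asarray(bed_axis, float)     # unit axis of the implant bed
        self.capture_radius = capture_radius        # switching criterion (m)
        self.k = k                                  # constraint stiffness (N/m)

    def step(self, tool_tip):
        tool_tip = np.asarray(tool_tip, float)
        offset = tool_tip - self.entry
        if self.state == H_DOF and np.linalg.norm(offset) < self.capture_radius:
            self.state = L_DOF                      # implant criterion met: switch
        if self.state == L_DOF:
            # Project motion onto the bed axis and penalize lateral deviation.
            depth = float(offset @ self.axis)
            lateral = offset - depth * self.axis
            force = -self.k * lateral               # virtual constraint force
        else:
            force = np.zeros(3)                     # free space: no constraint force
        return self.state, force

sim = ImplantToolSim(bed_entry=[0, 0, 0], bed_axis=[0, 0, -1])
print(sim.step([0.01, 0.0, 0.0]))   # far from the bed: still H-DoF, zero force
print(sim.step([0.001, 0.0, 0.0]))  # within capture radius: switches to L-DoF
```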

Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?

Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.

The company’s first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos’ predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.

Strateos CEO Mark Fischer-Colbrie explains Strateos’ process:

“It starts with an intake kit,” he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos’ graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds, biochemical processes including how compounds react to enzymes or where compounds bind to molecules, and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.

Photo: Strateos A robot in one of Strateos’ cloud labs manages inventory

“Our approach is fully automated and programmable,” Fischer-Colbrie says. “That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers are accessible.

“The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow.”
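
To make the idea of a remotely defined, parameterized experiment concrete, here is a purely hypothetical sketch of what such a protocol description might look like. This is not Strateos’ actual interface or API, only an illustration of steps and parameters such as incubation time and shaking speed.

```python
# Hypothetical protocol structure (illustrative only, not Strateos' API).
protocol = {
    "name": "enzyme_inhibition_assay",
    "container": "96-flat",            # standard labware matched in the intake kit
    "steps": [
        {"op": "dispense", "reagent": "substrate", "volume_ul": 50},
        {"op": "dispense", "reagent": "compound",  "volume_ul": 10},
        {"op": "incubate", "duration_min": 30, "temperature_c": 37,
         "shaking": "slow"},            # how long and how fast to incubate
        {"op": "read_absorbance", "wavelength_nm": 405},
    ],
}

def validate(protocol):
    """Basic sanity checks a remote lab service might run before scheduling."""
    allowed = {"dispense", "incubate", "read_absorbance"}
    assert all(step["op"] in allowed for step in protocol["steps"])
    return True

print(validate(protocol))
```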

To develop the system, Strateos’ engineers had to “connect the dots, that is, connect the lab automation to the web,” rather than dramatically push technology’s envelope, Fischer-Colbrie explains, “bringing the concepts of web services and the sharing economy to the life sciences.”

Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long, it didn’t seem like there could be a real substitute to physically being in the lab.

“It’s frictionless science, giving scientists the ability to concentrate on their ideas and hypotheses.”

Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego and by July plans to integrate these with eight chemistry robots that will, according to a press release, “physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform. The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform.”

Some of the capacity will be used exclusively by Lilly scientists, but Fischer-Colbrie says, Strateos capped that usage and will be selling lab capacity beyond the cap to others. It currently prices biological assays on a per plate basis and will price chemical reactions per compound.

The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales.

It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.

Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary or even possible during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. “Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner,” he says.

And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don’t have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.

Says Alok Gupta, Strateos senior vice president of engineering, “This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work.”

“It’s frictionless science,” says CEO Fischer-Colbrie, “giving scientists the ability to concentrate on their ideas and hypotheses.”

We’ve been writing about the musical robots from Georgia Tech’s Center for Music Technology for many, many years. Over that time, Gil Weinberg’s robots have progressed from being able to dance along to music that they hear, to being able to improvise along with it, to now being able to compose, play, and sing completely original songs.

Shimon, the marimba-playing robot that has performed in places like the Kennedy Center, will be going on a new tour to promote an album that will be released on Spotify next month, featuring songs written (and sung) entirely by the robot.

Deep learning is famous for producing results that seem like they sort of make sense, but actually don’t at all. Key to Shimon’s composing ability is its semantic knowledge—the ability to make thematic connections between things, which is a step beyond just throwing some deep learning at a huge database of music composed by humans (although that’s Shimon’s starting point, a dataset of 50,000 lyrics from jazz, prog rock, and hip-hop). So rather than just training a neural network that relates specific words that tend to be found together in lyrics, Shimon can recognize more general themes and build on them to create a coherent piece of music.
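
As a toy illustration of the difference between word co-occurrence and theme-level association (and not Shimon’s actual pipeline), the sketch below expands a seed word into thematic neighbors using made-up embedding vectors and cosine similarity.

```python
import numpy as np

# Tiny hand-made "embedding" vocabulary; the vectors are invented for illustration.
embeddings = {
    "earth":    np.array([0.9, 0.1, 0.0]),
    "world":    np.array([0.8, 0.2, 0.1]),
    "ocean":    np.array([0.7, 0.0, 0.3]),
    "humanity": np.array([0.5, 0.8, 0.0]),
    "machine":  np.array([0.0, 0.3, 0.9]),
}

def related(seed, k=3):
    """Rank the vocabulary by cosine similarity to the seed word."""
    v = embeddings[seed]
    scores = {
        w: float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))
        for w, u in embeddings.items() if w != seed
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(related("earth"))  # thematic neighbors that could seed the next lyric line
```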

Fans of Shimon may have noticed that the robot has had its head almost completely replaced. It may be tempting to say “upgraded,” since the robot now has eyes, eyebrows, and a mouth, but I’ll always have a liking for Shimon’s older design, which had just one sort of abstract eye thing (that functions as a mouth on the current design). Personally, I very much appreciate robots that are able to be highly expressive without resorting to anthropomorphism, but in its new career as a pop sensation, I guess having eyes and a mouth are, like, important, or something?

To find out more about Shimon’s new talents (and new face), we spoke with Georgia Tech professor Gil Weinberg and his PhD student Richard Savery.

IEEE Spectrum: What makes Shimon’s music fundamentally different from music that could have been written by a human? 

Richard Savery: Shimon’s musical knowledge is drawn from training on huge datasets of lyrics, around 20,000 prog rock songs and another 20,000 jazz songs. With this level of data, Shimon is able to draw on far more sources of inspiration than a human would ever be able to. At a fundamental level, Shimon is able to take in huge amounts of new material very rapidly, so within a day it can change from focusing on jazz lyrics, to hip-hop, to prog rock, or a hybrid combination of them all.

How much human adjustment is involved in developing coherent melodies and lyrics with Shimon?

Savery: Just like working with a human collaborator, there are many different ways Shimon can interact. Shimon can perform a range of musical tasks, from composing a full song by itself to just playing a part composed by a human. For the new album we focused on human-robot collaboration, so every song has some elements that were created by a human and some by Shimon. Rather than simply adjusting what Shimon generates, we try to have a musical dialogue where we get inspired and build on Shimon’s creations. Like any band, each of us has our own strengths and weaknesses; in our case no one else writes lyrics, so it was natural for Shimon to take responsibility for them. As a lyricist there are a few ways Shimon can work. First, Shimon can be given some keywords or ideas, like “earth” and “humanity,” and then generate a full song of lyrics around those words. In addition to keywords, Shimon can also take a melody and write lyrics that fit over it.

The press release mentions that Shimon is able to “decide what’s good.” What does that mean?

Savery: When Shimon writes lyrics, the first step is generating thousands of phrases. So for those keywords, Shimon will generate lots of material about “earth,” and then also generate related synonyms and antonyms like “world” and “ocean.” Like a human composer, Shimon has to parse through lots of ideas to choose what’s good from the creations. Shimon has preferences toward maintaining the same sentiment, or gradually shifting sentiment, as well as trying to keep rhymes going between lines. For Shimon, good lyrics should rhyme, keep some core thematic ideas going, maintain a similar sentiment, and have some similarity to existing lyrics.
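
Here is a minimal sketch of that generate-then-filter idea, with invented scoring heuristics (a crude suffix rhyme check, keyword overlap, and sentiment consistency). Shimon’s real selection model is more sophisticated; this only illustrates the shape of the approach.

```python
def rhymes(a, b):
    return a.split()[-1][-2:] == b.split()[-1][-2:]   # crude suffix "rhyme"

def theme_overlap(line, keywords):
    return len(set(line.lower().split()) & keywords) / max(len(keywords), 1)

def score(line, prev_line, keywords, sentiment, prev_sentiment):
    s = 1.0 if rhymes(line, prev_line) else 0.0       # reward rhyming lines
    s += theme_overlap(line, keywords)                # reward on-theme words
    s -= abs(sentiment - prev_sentiment)              # prefer steady sentiment
    return s

keywords = {"earth", "ocean", "world"}
candidates = [                                        # (line, sentiment score)
    ("the ocean keeps the secrets of the earth", 0.2),
    ("my circuits hum a quiet tune", 0.1),
    ("the world will turn and find rebirth", 0.3),
]
prev = ("we sang the sorrows of the earth", 0.2)
best = max(candidates, key=lambda c: score(c[0], prev[0], keywords, c[1], prev[1]))
print(best[0])
```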

I would guess that Shimon’s voice could have been almost anything—why choose this particular voice?

Gil Weinberg: Since we did not have singing voice synthesis expertise in our Robotic Musicianship group at Georgia Tech, we looked to collaborate with other groups. The Music Technology Group at Pompeu Fabra University developed a remarkable deep learning-based singing voice synthesizer and was excited to collaborate. As part of the process, we sent them audio files of songs recorded by one of our students to be used as a dataset to train their neural network. At the end, we decided to use another voice that was trained on a different dataset, since we felt it better represented Shimon’s genderless personality and was a better fit to the melodic register of our songs. 

“We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us” —Gil Weinberg, Georgia Tech

Can you tell us about the changes made to Shimon’s face?

Weinberg: We are big fans of avoiding exaggerated anthropomorphism and using too many degrees of freedom in our robots. We feel that this might push robots into the uncanny valley. But after much deliberation, we decided that a singing robot should have a mouth to represent the embodiment of singing and to look believable. It was important to us, though, not to add DoFs for this purpose, but rather to replace the old eye DoF with a mouth to minimize complexity. Originally, we thought to repurpose both DoFs of the old eye (bottom eyelid and top eyelid) to represent a top lip and bottom lip. But we felt this might be too anthropomorphic, and that it would be more challenging and interesting to use only one DoF to automatically control mouth size based on the lyrics’ phonemes. For this purpose, we looked at examples as varied as parrot vocalization and Muppets animation, to learn how animals and animators go about mouth actuation. Once we were happy with what we developed, we decided to use the old top eyelid DoF as an eyebrow, to add more emotion to Shimon’s expression.
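
As a hedged sketch of how a single mouth DoF might track phonemes (an assumption for illustration, not the Georgia Tech code), the snippet below maps phoneme classes to target openings and smooths between them so one actuator can follow the sung lyric.

```python
# Normalized mouth opening per phoneme class; values are invented for illustration.
OPENING = {
    "AA": 1.0, "AE": 0.9, "AO": 0.8, "IY": 0.4, "UW": 0.3,
    "M": 0.0, "B": 0.0, "P": 0.0, "S": 0.2, "T": 0.15,
}

def mouth_trajectory(phonemes, smoothing=0.5, rest=0.1):
    """Exponentially smooth per-phoneme targets into one-DoF servo commands.

    Unknown phonemes fall back to the rest opening.
    """
    value, out = rest, []
    for p in phonemes:
        target = OPENING.get(p, rest)
        value = smoothing * value + (1 - smoothing) * target
        out.append(round(value, 2))
    return out

print(mouth_trajectory(["HH", "AE", "P", "IY"]))  # roughly the word "happy"
```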

Are you able to take advantage of any inherently robotic capabilities of Shimon?

Weinberg: One of the most important new features of the new Shimon, in addition to its singing song-writing capabilities, is a total redesign of its striking arms. As part of the process we replaced the old solenoid-based actuators with new brushless DC motors that can support a much faster striking (up to 30 hits per second) as well as a wider and more linear dynamic range—from very soft pianissimo to much louder fortissimo. This not only allows for a much richer musical expression, but also supports the ability to create new humanly impossible timbres and sonorities by using 8 novel virtuosic actuators. We hope and believe that these new abilities would push human collaborators to new uncharted directions that could not be achieved in human-to-human collaboration.

How do you hope audiences will react to Shimon?

Weinberg: We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us to listen to, play, and think about music in new ways.

What are you working on next?

Gil Weinberg: We are currently working on new capabilities that would allow Shimon to listen to, understand, and respond to lyrics in real time. The first genre we are exploring for this functionality is rap battles. We plan to release a new album on Spotify April 10th featuring songs where Shimon not only sings but raps in real time as well.

[ Georgia Tech ]

Illusory ownership can be induced in a virtual body by visuo-motor synchrony. Our aim was to test the possibility of a re-association of the right thumb with a virtual left arm and to express illusory body ownership of the re-associated arm through synchronous or asynchronous movement of the body parts via action and vision. Participants felt that their right thumb was the virtual left arm more strongly in the synchronous condition than in the asynchronous one, and the feeling of ownership of the virtual arm was also stronger in the synchronous condition. We did not find a significant difference between the two synchrony conditions in the startle responses to a knife suddenly appearing at the virtual arm, as there was no proprioceptive drift of the thumb. These results suggest that a re-association of the right thumb with the virtual left arm can be induced by visuo-motor synchronization; however, it may be weaker than the natural association.

As much as we love soft robots (and we really love soft robots), the vast majority of them operate pneumatically (or hydraulically) at larger scales, especially when they need to exert significant amounts of force. This causes complications, because pneumatics and hydraulics generally require a pump somewhere to move fluid around, so you often see soft robots tethered to external and decidedly non-soft power sources. There’s nothing wrong with this, really, because there are plenty of challenges that you can still tackle that way, and there are some up-and-coming technologies that might result in soft pumps or gas generators.

Researchers at Stanford have developed a new kind of (mostly) soft robot based around a series of compliant, air-filled tubes. It’s human scale, moves around, doesn’t require a pump or tether, is more or less as safe as large robots get, and even manages to play a little bit of basketball.

Image: Stanford/Science Robotics

Stanford’s soft robot consists of a set of identical robotic roller modules mounted onto inflated fabric tubes (A). The rollers pinch the fabric tube between rollers, creating an effective joint (B) that can be relocated by driving the rollers. The roller modules actuate the robot by driving along the tube, simultaneously lengthening one edge while shortening another (C). The roller modules connect to each other at nodes using three-degree-of-freedom universal joints that are composed of a clevis joint that couples two rods, each free to spin about its axis (D). The robot moves untethered outdoors using a rolling gait (E).

This thing looks a heck of a lot like the tensegrity robots that NASA Ames has been working on forever, and which are now being commercialized (hopefully?) by Squishy Robotics. Stanford’s model is not technically a tensegrity robot, though, because it doesn’t use structural components that are under tension (like cables). The researchers refer to this kind of robot as “isoperimetric,” which means while discrete parts of the structure may change length, the overall length of all the parts put together stays the same. This means it’s got a similar sort of inherent compliance across the structure to tensegrity robots, which is one of the things that makes them so appealing. 

While the compliance of Stanford’s robot comes from a truss-like structure made of air-filled tubes, its motion relies on powered movable modules. These modules pinch the tube they’re located on between two cylindrical rollers (without creating a seal), and driving the rollers moves the module back and forth along the tube, effectively making one section of the tube longer and the other one shorter. Although this is just one degree of freedom, having a whole bunch of tubes, each with an independently controlled roller module, means that the robot as a whole can exhibit complex behaviors, like drastic shape changes, movement, and even manipulation.
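
Here is a tiny numeric illustration of that isoperimetric constraint, under the simplifying assumption of a single pinched tube: driving the roller by a distance d lengthens one edge and shortens the other while the total tube length stays fixed. The numbers are arbitrary.

```python
def drive_roller(edge_a, edge_b, d):
    """Move the pinch point by d along the tube (positive d grows edge_a)."""
    return edge_a + d, edge_b - d

tube_total = 2.0                       # meters of tube shared by this edge pair
edge_a, edge_b = 1.2, 0.8
for d in (0.1, -0.3, 0.25):
    edge_a, edge_b = drive_roller(edge_a, edge_b, d)
    assert abs((edge_a + edge_b) - tube_total) < 1e-9   # perimeter conserved
    print(f"edge_a={edge_a:.2f} m, edge_b={edge_b:.2f} m, total={edge_a + edge_b:.2f} m")
```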

There are numerous advantages to a design like this. You get all the advantages of pneumatic robots (compliance, flexibility, collapsibility, durability, high strength to weight ratio) without requiring some way of constantly moving air around, since the volume of air inside the robot stays constant. Each individual triangular module is self-contained (with one tube, two active roller modules, and one passive anchor module) and easy to combine with similar modules—the video shows an octahedron, but you can easily add or subtract modules to make a variety of differently shaped robots with different capabilities.

Since the robot is inherently so modular, there are all kinds of potential applications for this thing, as the researchers speculate in a paper published today in Science Robotics:

The compliance and shape change of the robot could make it suitable for several tasks involving humans. For example, the robot could work alongside workers, holding parts in place as the worker bolts them in place. In the classroom, the modularity and soft nature of the robotic system make it a potentially valuable educational tool. Students could create many different robots with a single collection of hardware and then physically interact with the robot. By including a much larger number of roller modules in a robot, the robot could function as a shape display, dynamically changing shape as a sort of high–refresh rate 3D printer. Incorporating touch-sensitive fabric into the structure could allow users to directly interact with the displayed shapes. More broadly, the modularity allows the same hardware to build a diverse family of robots—the same roller modules can be used with new tube routings to create new robots. If the user needed a robot to reach through a long, narrow passageway, they could assemble a chain-like robot; then, for a locomoting robot, they could reassemble into a spherical shape.

Image: Farrin Abbott

I’m having trouble picturing some of that stuff, but the rest of it sounds like fun.

We’re obligated to point out that because of the motorized roller modules, this soft robot is really only semi-soft, and you could argue that it’s not fundamentally all that much better than hydraulic or pneumatic soft robots with embedded rigid components like batteries and pumps. Calling this robot “inherently human-safe,” as the researchers do, might be overselling it slightly, in that it has hard edges, pokey bits, and what look to be some serious finger-munchers. It does sound like there might be some potential to replace the roller modules with something softer and more flexible, which will be a focus of future work.

“An untethered isoperimetric soft robot,” by Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, and Sean Follmer from Stanford University and UCSB, was published in Science Robotics.

In this experiment, we aimed to measure the conscious internal representation of one’s body appearance and to allow the participants to compare this to their ideal body appearance and to their real body appearance. We created a virtual representation of the internal image participants had of their own body shape. We also created a virtual body corresponding to the internal representation they had of their ideal body shape, and we built another virtual body based on their real body measurements. Participants saw the three different virtual bodies from an embodied first-person perspective and from a third-person perspective and had to evaluate the appearance of those virtual bodies. We observed that female participants evaluated their real body as more attractive when they saw it from a third-person perspective, and that their level of body dissatisfaction was lower after the experimental procedure. We believe that the third-person perspective allowed female participants to perceive their real body shape without applying the negative prior beliefs usually associated with the “self,” and that this resulted in a more positive evaluation of their body shape. We speculate that this method could be applied with patients suffering from eating disorders, by making their body perception more realistic and thereby improving their body satisfaction.

Editor’s Note: When we asked Rodney Brooks if he’d write an article for IEEE Spectrum on his definition of robot, he wrote back right away. “I recently learned that Warren McCulloch”—one of the pioneers of computational neuroscience—“wrote sonnets,” Brooks told us. “He, and your request, inspired me. Here is my article—a little shorter than you might have desired.” Included in his reply were 14 lines composed in iambic pentameter. Brooks titled it “What Is a Robot?” Later, after a few tweaks to improve the metric structure of some of the lines, he added, “I am no William Shakespeare, but I think it is now a real sonnet, if a little clunky in places.”

What Is a Robot?*
By Rodney Brooks

Shall I compare thee to creatures of God?
Thou art more simple and yet more remote.
You move about, but still today, a clod,
You sense and act but don’t see or emote.

You make fast maps with laser light all spread,
Then compare shapes to object libraries,
And quickly plan a path, to move ahead,
Then roll and touch and grasp so clumsily.

You learn just the tiniest little bit,
And start to show some low intelligence,
But we, your makers, Gods not, we admit,
All pledge to quest for genuine sentience.

    So long as mortals breathe, or eyes can see,
    We shall endeavor to give life to thee.

* With thanks to William Shakespeare

Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.

The real world is highly variable and unpredictable, and so fine-tuned robot controllers that successfully result in group-level “emergence” of swarm capabilities indoors may quickly become inadequate outside. One response to unpredictability could be greater robot complexity and cost, but this seems counter to the “swarm philosophy” of deploying (very) large numbers of simple agents. Instead, here I argue that bioinspiration in swarm robotics has considerable untapped potential in relation to the phenomenon of phenotypic plasticity: the ability of a genotype to produce a range of distinctive changes in organismal behavior, physiology, and morphology in response to different environments. This commonly arises following a natural history of variable conditions, implying the need for more diverse and hazardous simulated environments in offline, pre-deployment optimization of swarms. Such environments will both generate, and reveal the need for, plasticity. Biological plasticity is sometimes irreversible, yet this characteristic remains relevant in the context of minimal swarms, where robots may become mass-producible. Plasticity can be introduced through the greater use of adaptive threshold-based behaviors; more fundamentally, it can link to emerging technologies such as smart materials, which can adapt form and function to environmental conditions. Moreover, in social animals, individual heterogeneity is increasingly recognized as functional for the group. Phenotypic plasticity can provide meaningful diversity “for free” based on early, local sensory experience, contributing toward better collective decision-making and resistance against adversarial agents, for example. Nature has already solved the challenge of resilient self-organization in the physical realm through phenotypic plasticity: swarm engineers can follow this lead.
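
The “adaptive threshold-based behaviors” mentioned above can be pictured with the classic response-threshold rule from social-insect research. The short Python sketch below shows one way such a rule might look: each robot engages in a task with a probability that depends on the local stimulus and on its own threshold, and the threshold adapts with experience, so robots exposed to different local conditions end up specialized differently. This is an illustrative simplification under assumed parameter values, not the author's model.

import random

class ThresholdRobot:
    """Minimal response-threshold agent (illustrative; not the paper's model)."""

    def __init__(self, theta=0.5, learn=0.05, forget=0.02):
        self.theta = theta    # response threshold for the task
        self.learn = learn    # threshold drop per step spent on the task
        self.forget = forget  # threshold rise per idle step

    def step(self, stimulus):
        # Classic sigmoid response: engage with probability s^2 / (s^2 + theta^2).
        p_engage = stimulus ** 2 / (stimulus ** 2 + self.theta ** 2)
        engaged = random.random() < p_engage
        if engaged:
            self.theta = max(0.01, self.theta - self.learn)
        else:
            self.theta = min(1.0, self.theta + self.forget)
        return engaged

# Two robots exposed to different local stimulus levels diverge in specialization,
# giving the kind of experience-driven heterogeneity the abstract describes.
hot, cold = ThresholdRobot(), ThresholdRobot()
for _ in range(200):
    hot.step(stimulus=0.8)   # frequently encounters the task
    cold.step(stimulus=0.1)  # rarely encounters the task
print(f"adapted thresholds: hot={hot.theta:.2f}, cold={cold.theta:.2f}")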

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K. [CANCELED]
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Having robots learn dexterous tasks requiring real-time hand-eye coordination is hard. Many tasks that we would consider simple, like hanging up a baseball cap on a rack, would be very challenging for most robot software. What’s more, for a robot to learn each new task, it typically takes significant amounts of engineering time to program the robot. Pete Florence and Lucas Manuelli in the Robot Locomotion Group took a step toward cutting that per-task engineering effort with their work.

[ Paper ]

Octo-Bouncer is not a robot that bounces an octopus. But it’s almost as good. Almost.

[ Electron Dust ]

D’Kitty (pronounced as “The Kitty”) is a 12-degree-of-freedom platform for exploring learning-based techniques in locomotion and it’s adooorable!

[ D’Kitty ]

Knightscope Autonomous Security Robot meets Tesla Model 3 in Summon Mode!  See, nothing to fear, Elon. :-)

The robots also have a message for us:

[ Knightscope ]

If you missed the robots vs. humans match at RoboCup 2019, here are the highlights.

[ Tech United ]

Fraunhofer developed this cute little demo of autonomously navigating, cooperating mobile robots executing a miniaturized logistics scenario involving chocolate for the LogiMAT trade show. Which was canceled. But enjoy the video!

[ Fraunhofer ]

Thanks Thilo!

Drones can potentially be used for taking soil samples in awkward areas by dropping darts equipped with accelerometers. But the really clever bit is how the drone can retrieve the dart on its own.

[ UH ]

Rope manipulation is one of those human-easy robot-hard things that’s really, really robot-hard.

[ UC Berkeley ]

Autonomous landing on a moving platform presents unique challenges for multirotor vehicles, including the need to accurately localize the platform, fast trajectory planning, and precise/robust control. This work presents a fully autonomous vision-based system that addresses these limitations by tightly coupling the localization, planning, and control, thereby enabling fast and accurate landing on a moving platform. The platform’s position, orientation, and velocity are estimated by an extended Kalman filter using simulated GPS measurements when the quadrotor-platform distance is large, and by a visual fiducial system when the platform is nearby. To improve the performance, the characteristics of the turbulent conditions are accounted for in the controller. The landing trajectory is fast, direct, and does not require hovering over the platform, as is typical of most state-of-the-art approaches. Simulations and hardware experiments are presented to validate the robustness of the approach.
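
The estimator described above can be pictured with a much smaller example. The sketch below is a one-dimensional constant-velocity Kalman filter that switches between a coarse, GPS-like position fix when the platform is far away and a precise fiducial-based fix when it is close, which is the sensor-fusion idea the abstract describes; the noise values, switching threshold, and trajectory are illustrative assumptions, not the MIT ACL implementation.

import numpy as np

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # we only measure position
Q = np.diag([1e-4, 1e-3])               # process noise (assumed)
R_GPS, R_FIDUCIAL = 1.0, 0.01           # coarse vs. precise measurement noise (assumed)

x = np.array([0.0, 0.0])                # state estimate
P = np.eye(2)                           # estimate covariance

def kf_step(x, P, z, r):
    # Predict with the constant-velocity model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with whichever position measurement is available this step.
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
for t in range(200):
    true_pos = 2.0 + 0.5 * t * dt                 # platform drifting at 0.5 m/s
    distance = max(0.0, 10.0 - 0.1 * t)           # quadrotor closing on the platform
    if distance > 3.0:                            # far away: coarse GPS-like fix
        z, r = true_pos + rng.normal(0, 1.0), R_GPS
    else:                                         # close: precise fiducial fix
        z, r = true_pos + rng.normal(0, 0.1), R_FIDUCIAL
    x, P = kf_step(x, P, z, r)

print("estimated [position, velocity]:", np.round(x, 2))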

[ MIT ACL ]

And now, this.

[ Soft Robotics ]

The EPRI (Electric Power Research Institute) recently worked with Exyn Technologies, a pioneer in autonomous aerial robot systems, on a safety and data collection demonstration at Exelon’s Peach Bottom Atomic Power Station in Pennsylvania. Exyn’s drone was able to autonomously inspect components in elevated, hard-to-access areas, search for temperature anomalies, and collect dose rate surveys in radiological areas, without the need for a human operator.

[ Exyn ]

Thanks Zach!

Relax: Pepper is here to help with all of your medical problems.

[ Softbank ]

Amir Shapiro at BGU, along with Yoav Golan (whose work on haptic control of dogs we covered last year), has developed an interesting new kind of robotic finger with passively adjustable friction.

[ Paper ] via [ BGU ]

Thanks Andy!

UBTECH’s Alpha Mini Robot, running Smart Robot’s “Maatje” software, is expected to offer healthcare services to children at Sint Maartenskliniek in the Netherlands. Before deployment, three of the robots were trained with exercise, empathy, and cognition capabilities.

[ UBTECH ]

Get ready for CYBATHLON, postponed to September 2020!

[ Cybathlon ]

In partnership with the World Mosquito Program (WMP), WeRobotics has led the development and deployment of a drone-based release mechanism that has been shown to help reduce the incidence of dengue fever.

[ WeRobotics ]

Sadly, koalas today face a dire outlook across Australia due to human development, droughts, and forest fires. Events like these and a declining population make conservation and research more important than ever. Drones offer a more efficient way to count koalas from above, covering more ground than was possible in the past. Dr. Hamilton and his team at the Queensland University of Technology use DJI drones to count koalas, using the data obtained to better help these furry friends from down under.

[ DJI ]

Fostering the Next Generation of Robotics Startups | TC Sessions: Robotics

Robotics and AI are the future of many or most industries, but the barrier to entry is still difficult to surmount for many startups. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large-scale enterprises.

[ TechCrunch ]

Predictions and predictive knowledge have seen recent success in improving not only robot control but also other applications ranging from industrial process control to rehabilitation. A property that makes these predictive approaches well-suited for robotics is that they can be learned online and incrementally through interaction with the environment. However, a remaining challenge for many prediction-learning approaches is an appropriate choice of prediction-learning parameters, especially parameters that control the magnitude of a learning machine's updates to its predictions (the learning rates or step sizes). Typically, these parameters are chosen based on an extensive parameter search—an approach that neither scales well nor is well-suited for tasks that require changing step sizes due to non-stationarity. To begin to address this challenge, we examine the use of online step-size adaptation using the Modular Prosthetic Limb: a sensor-rich robotic arm intended for use by persons with amputations. Our method of choice, Temporal-Difference Incremental Delta-Bar-Delta (TIDBD), learns and adapts step sizes on a feature level; importantly, TIDBD allows step-size tuning and representation learning to occur at the same time. As a first contribution, we show through an extensive parameter search that TIDBD is a practical alternative to classic Temporal-Difference (TD) learning. Both approaches perform comparably in terms of predicting future aspects of a robotic data stream, but TD only achieves comparable performance with a carefully hand-tuned learning rate, while TIDBD uses a robust meta-parameter and tunes its own learning rates. Secondly, our results show that for this particular application TIDBD allows the system to automatically detect patterns characteristic of sensor failures common to a number of robotic applications. As a third contribution, we investigate the sensitivity of classic TD and TIDBD with respect to the initial step-size values on our robotic data set, reaffirming the robustness of TIDBD as shown in previous papers. Together, these results promise to improve the ability of robotic devices to learn from interactions with their environments in a robust way, providing key capabilities for autonomous agents and robots.
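
To make the step-size adaptation concrete, here is a minimal sketch of TD(0) with IDBD-style per-feature step sizes, in the spirit of TIDBD but simplified (no eligibility traces, no robot data); the feature stream, meta step size, and initial step size are placeholder assumptions rather than the settings used in the paper.

import numpy as np

def td_idbd(features, rewards, gamma=0.9, meta=0.01, init_alpha=0.05):
    # TD(0) with IDBD-style per-feature step-size adaptation -- a simplified
    # sketch in the spirit of TIDBD. `features` is a (T, n) array of observed
    # feature vectors and `rewards` has length T-1; both stand in for a real
    # sensor stream from the robotic arm.
    n = features.shape[1]
    w = np.zeros(n)                          # value-function weights
    beta = np.full(n, np.log(init_alpha))    # log step size for each feature
    h = np.zeros(n)                          # trace of recent weight updates

    for t in range(len(rewards)):
        x, x_next = features[t], features[t + 1]
        delta = rewards[t] + gamma * (w @ x_next) - (w @ x)   # TD error

        beta += meta * delta * x * h         # meta-gradient step on log step sizes
        alpha = np.exp(beta)                 # per-feature step sizes
        w += alpha * delta * x               # TD(0) weight update
        h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, np.exp(beta)

# Toy usage with random features standing in for prosthetic-arm sensor data.
rng = np.random.default_rng(0)
feats = rng.random((500, 8))
rews = rng.random(499)
weights, step_sizes = td_idbd(feats, rews)
print("adapted per-feature step sizes:", np.round(step_sizes, 4))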

This paper presents the design of an assessment process and its outcomes to investigate the impact of Educational Robotics activities on students' learning. Through data analytics techniques, the authors will explore the activities' output from a pedagogical and quantitative point of view. Sensors are utilized in the context of an Educational Robotics activity to obtain a more effective robot–environment interaction. Pupils work on specific exercises to make their robot smarter and to carry out more complex and inspirational projects: the integration of sensors on a robotic prototype is crucial, and learners have to comprehend how to use them. In the presented study, the potential of Educational Data Mining is used to investigate how a group of primary and secondary school students, using visual programming (Lego Mindstorms EV3 Education software), design programming sequences while they are solving an exercise related to an ultrasonic sensor mounted on their robotic artifact. For this purpose, a tracking system has been designed so that every programming attempt performed by students' teams is registered on a log file and stored in an SD card installed in the Lego Mindstorms EV3 brick. These log files are then analyzed using machine learning techniques (k-means clustering) in order to extract different patterns in the creation of the sequences and extract various problem-solving pathways performed by students. The difference between problem-solving pathways with respect to an indicator of early achievement is studied.
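
As a rough picture of the log-file analysis described above, the sketch below groups programming attempts with k-means, where each attempt is summarized as a small numeric feature vector; the feature choice and the numbers here are hypothetical stand-ins, not the study's actual encoding of the EV3 log files.

import numpy as np
from sklearn.cluster import KMeans

# Each row summarizes one logged programming attempt by a team.
# Columns (hypothetical): [blocks_in_program, ultrasonic_blocks_used, attempt_number]
attempts = np.array([
    [3, 0, 1],
    [5, 1, 2],
    [6, 1, 3],
    [12, 2, 4],
    [11, 3, 5],
    [4, 1, 6],
])

# Group attempts into two candidate problem-solving pathways.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(attempts)
print("cluster of each attempt:", kmeans.labels_)
print("cluster centres:\n", kmeans.cluster_centers_)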

This article reports on two studies that aimed to evaluate the effective impact of educational robotics in learning concepts related to Physics and Geography. The reported studies involved two courses from an upper secondary school and two courses from a lower secondary school. Upper secondary school classes studied topics of motion physics, and lower secondary school classes explored issues related to geography. In each grade, there was an “experimental group” that carried out their study using robotics and cooperative learning and a “control group” that studied the same concepts without robots. Students in both groups took tests before and after the robotics laboratory to check their knowledge of the topics covered. Our initial hypothesis was that classes involving educational robotics and cooperative learning are more effective in improving learning and stimulating the interest and motivation of students. As expected, the results showed that students in the experimental groups had a far better understanding of concepts and higher participation in the activities than students in the control groups.
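
A minimal sketch of the kind of pre/post comparison such studies rely on, assuming learning gains for an experimental and a control group are compared with an independent t-test; the scores below are placeholders for illustration only, not data from the reported studies, and the authors' actual analysis may differ.

import numpy as np
from scipy import stats

# Placeholder pre/post test scores for a robotics (experimental) group
# and a control group; not data from the reported studies.
pre_exp  = np.array([12, 14, 11, 13, 15, 10])
post_exp = np.array([18, 19, 16, 17, 20, 15])
pre_ctl  = np.array([13, 12, 14, 11, 15, 12])
post_ctl = np.array([15, 14, 16, 13, 17, 13])

# Compare learning gains between the two groups.
gain_exp = post_exp - pre_exp
gain_ctl = post_ctl - pre_ctl
t, p = stats.ttest_ind(gain_exp, gain_ctl)
print(f"mean gain (robotics) = {gain_exp.mean():.1f}, "
      f"mean gain (control) = {gain_ctl.mean():.1f}, p = {p:.3f}")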

The absolute best way of dealing with the coronavirus pandemic is to just not get coronavirus in the first place. By now, you’ve (hopefully) had all of the strategies for doing this drilled into your skull—wash your hands, keep away from large groups of people, wash your hands, stay home when sick, wash your hands, avoid travel when possible, and please, please wash your hands.

At the top of the list of the places to avoid right now are hospitals, because that’s where all the really sick people go. But for healthcare workers, and the sick people themselves, there’s really no other option. To prevent the spread of coronavirus (and everything else) through hospitals, keeping surfaces disinfected is incredibly important, but it’s also dirty, dull, and (considering what you can get infected with) dangerous. And that’s why it’s an ideal task for autonomous robots.

  Photo: UVD Robots The robots can travel through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge.

UVD Robots is a Danish company making robots that are able to disinfect patient rooms and operating theaters in hospitals. They’re able to disinfect pretty much anything you point them at—each robot is a mobile array of powerful short wavelength ultraviolet-C (UVC) lights that emit enough energy to literally shred the DNA or RNA of any microorganisms that have the misfortune of being exposed to them. 

The company’s robots have been operating in China for the past two or three weeks, and UVD Robots CEO Per Juul Nielsen says they are sending more to China as fast as they can. “The initial volume is in the hundreds of robots; the first ones went to Wuhan where the situation is the most severe,” Nielsen told IEEE Spectrum. “We’re shipping every week—they’re going air freight into China because they’re so desperately needed.” The goal is to supply the robots to over 2,000 hospitals and medical facilities in China.

UV disinfecting technology has been around for something like a century, and it’s commonly used to disinfect drinking water. You don’t see it much outside of fixed infrastructure because you have to point a UV lamp directly at a surface for a couple of minutes in order to be effective, and since it can cause damage to skin and eyes, humans have to be careful around it. Mobile UVC disinfection systems are a bit more common—UV lamps on a cart that a human can move from place to place to disinfect specific areas, like airplanes. For large environments like a hospital with dozens of rooms, operating UV systems manually can be costly and have mixed results—humans can inadvertently miss certain areas, or not expose them long enough.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis” —Per Juul Nielsen, UVD Robots

UVD Robots spent four years developing a robotic UV disinfection system, which it started selling in 2018. The robot consists of a mobile base equipped with multiple lidar sensors and an array of UV lamps mounted on top. To deploy a robot, you drive it around once using a computer. The robot scans the environment using its lidars and creates a digital map. You then annotate the map indicating all the rooms and points the robot should stop to perform disinfecting tasks. 

After that, the robot relies on simultaneous localization and mapping (SLAM) to navigate, and it operates completely on its own. It’ll travel from its charging station, through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge. For safety, the robot operates when people are not around, using its sensors to detect motion and shutting the UV lights off if a person enters the area.

It takes between 10 and 15 minutes to disinfect a typical room, with the robot spending 1 or 2 minutes in five or six different positions around the room to maximize the number of surfaces that it disinfects. The robot’s UV array emits 20 joules per square meter per second (at 1 meter distance) of 254-nanometer light, which will utterly wreck 99.99 percent of germs in just a few minutes without the robot having to do anything more complicated than just sit there. The process is more consistent than a human cleaning since the robot follows the same path each time, and its autonomy means that human staff can be freed up to do more interesting tasks, like interacting with patients. 
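
For a sense of the numbers, the dose arithmetic implied by those figures is simple: dose is irradiance multiplied by exposure time. The short sketch below works through it using the article's 20 J/m² per second at 1 meter; the target dose is an assumed placeholder, and the dose a real surface receives also depends on distance, angle, and shadowing.

# Back-of-the-envelope UV-C dose arithmetic based on the figures in the article:
# 20 J/m^2 per second at 1 m, and roughly 1-2 minutes per position.
IRRADIANCE_AT_1M = 20.0   # J/m^2 per second, from the article
TARGET_DOSE = 1000.0      # J/m^2 -- assumed placeholder, not from the article

seconds_needed = TARGET_DOSE / IRRADIANCE_AT_1M
print(f"time to reach {TARGET_DOSE:.0f} J/m^2 at 1 m: {seconds_needed:.0f} s")

# Dose accumulated over a typical 90-second stop, and total time for six stops.
per_stop = IRRADIANCE_AT_1M * 90
print(f"dose per 90 s stop at 1 m: {per_stop:.0f} J/m^2; "
      f"room time for 6 stops: {6 * 90 / 60:.0f} min")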

Originally, the robots were developed to address hospital-acquired infections, which are a significant problem globally. According to Nielsen, between 5 and 10 percent of hospital patients worldwide will acquire a new infection while in the hospital, and tens of thousands of people die from these infections every year. The goal of the UVD robots was to help hospitals prevent these infections in the first place.

Photo: UVD Robots A shipment of robots from UVD Robots arrives at a hospital in Wuhan, where the first coronavirus cases were reported in December.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis,” Nielsen says. “That’s why there’s a big need for our robots all over the world now, because they can be used in fighting coronavirus, and for fighting all of the other infections that are still there.”

The robots, which cost between US $80,000 and $90,000, are relatively affordable for medical equipment, and as you might expect, recent interest in them has been substantial. “Once [hospitals] see it, it’s a no-brainer,” Nielsen says. “If they want this type of disinfection solution, then the robot is much smarter and more cost-effective than what’s available in the market today.” Hundreds of these robots are at work in more than 40 countries, and they’ve recently completed hospital trials in Florida. Over the next few weeks, they’ll be tested at other medical facilities around the United States, and Nielsen points out that they could be useful in schools, cruise ships, or any other relatively structured spaces. I’ll take one for my apartment, please.

[ UVD Robots ]
