IEEE Spectrum Robotics


A year ago, we visited Rwanda to see how Zipline’s autonomous, fixed-wing delivery drones were providing blood to hospitals and clinics across the country. We were impressed with both Zipline’s system design (involving dramatic catapult launches, parachute drops, and mid-air drone catching) and its model of operations, which minimizes waste while making critical supplies available in minutes almost anywhere in the country.

Since then, Zipline has expanded into Ghana and has plans to start flying in India as well, but the COVID-19 pandemic is changing everything. Africa is preparing for the worst. In the United States, meanwhile, Zipline is working with the Federal Aviation Administration to expedite safety and regulatory approvals for an emergency humanitarian mission: a medical supply delivery network that could help people maintain social distancing or quarantine when necessary by delivering urgent medication nearly to their doorsteps.

In addition to its existing role delivering blood products and medication, Zipline is acting as a centralized distribution network for COVID-19 supplies in Ghana and Rwanda. Things like personal protective equipment (PPE) will be delivered as needed by drone, ensuring that demand is met across the entire healthcare network. This has been a problem in the United States—getting existing supplies where they’re needed takes a lot of organization and coordination, which the US government is finding to be a challenge.

Photo: Zipline

Zipline says that their drones are able to reduce human involvement in the supply chain (a vector for infection), while reducing hospital overcrowding by making it more practical for non-urgent patients to receive care in local clinics closer to home. COVID-19 is also having indirect effects on healthcare, with social distancing and community lockdowns straining blood supplies. With its centralized distribution model, Zipline has helped Rwanda to essentially eliminate wasted (expired) blood products. “We probably waste more blood [in the United States] than is used in all of Rwanda,” Zipline CEO Keller Rinaudo told us. But it’s going to take more than blood supply to fight COVID-19, and it may hit Africa particularly hard.


“Things are earlier in Africa, you don’t see infections at the scale that we’re seeing in the U.S.,” says Rinaudo. “I also think Africa is responding much faster. Part of that is the benefit of seeing what’s happening in countries that didn’t take it seriously in the first few months where community spreading gets completely out of control. But it’s quite possible that COVID is going to be much more severe in countries that are less capable of locking down, where you have densely populated areas with people who can’t just stay in their house for 45 days.” 

In an attempt to prepare for things getting worse, Rinaudo says that Zipline is stocking as many COVID-related products as possible, and they’re also looking at whether they’ll be able to deliver to neighborhood drop-off points, or perhaps directly to homes. “That’s something that Zipline has been on track to do for quite some time, and we’re considering ways of accelerating that. When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way.” This kind of system, Rinaudo points out, would also benefit people with non-COVID healthcare needs, who need to do their best to avoid hospitals. If a combination of telemedicine and home or neighborhood delivery of medical supplies means they can stay home, it would be a benefit for everyone. “This is a transformation of the healthcare system that’s already happening and needs to happen anyway. COVID is just accelerating it.”

“When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way” —Keller Rinaudo, Zipline

For the past year, Zipline, working closely with the FAA, has been planning on a localized commercial trial of a medical drone delivery service that was scheduled to begin in North Carolina this fall. While COVID is more urgent, the work that’s already been done towards this trial puts Zipline in a good position to move quickly, says Rinaudo.

“All of the work that we did with the IPP [UAS Integration Pilot Program] is even more important, given this crisis. It means that we’ve already been working with the FAA in detail, and that’s made it possible for us to have a foundation to build on to help with the COVID-19 response.” Assuming that Zipline and the FAA can find a regulatory path forward, the company could begin setting up distribution centers that can support hospital networks for both interfacility delivery as well as contactless delivery to (eventually) neighborhood points and perhaps even homes. “It’s exactly the use case and value proposition that I was describing for Africa,” Rinaudo says.

Leveraging rapid deployment experience that it has from work with the U.S. Department of Defense, Zipline would launch one distribution center within just a few months of a go-ahead from the FAA. This single distribution center could cover an area representing up to 10 million people. “We definitely want to move quickly here,” Rinaudo tells us. Within 18 months, Zipline could theoretically cover the entire US, although he admits “that would be an insanely fast roll-out.”

The question, at this point, is how fast the FAA can take action to make innovative projects like this happen. Zipline, as far as we can tell, is ready to go. We also asked Rinaudo whether he thought that hospitals specifically, and the medical system in general, have the bandwidth to adopt a system like Zipline’s in the middle of a pandemic that’s already stretching people and resources to the limit.

“In the U.S. there’s this sense that this technology is impossible, whereas it’s already operating at multi-national scale, serving thousands of hospitals and health facilities, and it’s completely boring to the people who are benefiting from it,” Rinaudo says. “People in the U.S. have really not caught on that this is something that’s reliable and can dramatically improve our response to crises like this.”

[ Zipline ]


For the last several years, Diligent Robotics has been testing out its robot, Moxi, in hospitals in Texas. Diligent isn’t the only company working on hospital robots, but Moxi is unique in that it’s doing commercial mobile manipulation, picking supplies out of supply closets and delivering them to patient rooms, all completely autonomously.

A few weeks ago, Diligent announced US $10 million in new funding, which comes at a critical time, as the company noted in its press release:

Now more than ever hospitals are under enormous stress, and the people bearing the most risk in this pandemic are the nurses and clinicians at the frontlines of patient care. Our mission with Moxi has always been focused on relieving tasks from nurses, giving them more time to focus on patients, and today that mission has a newfound meaning and purpose. Time and again, we hear from our hospital partners that Moxi not only returns time back to their day but also brings a smile to their face.  

We checked in with Diligent CEO Andrea Thomaz last week to get a better sense of how Moxi is being used at hospitals. “As our hospital customers are implementing new protocols to respond to the [COVID-19] crisis, we are working with them to identify the best ways for Moxi to be deployed as a resource,” Thomaz told us. “The same kinds of delivery tasks we have been doing are still just as needed as ever, but we are also working with them to identify use cases where having Moxi do a delivery task also reduces infection risk to people in the environment.”


Since this is still something that Diligent and their hospital customers are actively working on, it’s a little early for them to share details. But in general, robots making deliveries means that people aren’t making deliveries, which has several immediate benefits. First, it means that overworked hospital staff can spend their time doing other things (like interacting with patients), and second, the robot is less likely to infect other people. It’s not just that the robot can’t get a virus (not that kind of virus, at any rate), but it’s also much easier to keep robots clean in ways that aren’t an option for humans. Besides wiping them down with chemicals, without too much trouble you could also have them autonomously disinfect themselves with UV, which is both efficient and effective.

While COVID-19 only emphasizes the importance of robots in healthcare, Diligent is tackling a particularly difficult set of problems with Moxi, involving full autonomy, manipulation, and human-robot interaction. Earlier this year, we spoke with Thomaz about how Moxi is starting to make a difference to hospital staff.

IEEE Spectrum: Last time we talked, Moxi was in beta testing. What’s different about Moxi now that it’s ready for full-time deployment?

Andrea Thomaz: During our beta trial, Moxi was deployed for over 120 days total, in four different hospitals (one of them was a children’s hospital, the other three were adult acute-care units), working alongside more than 125 nurses and clinicians. The people we were working with were so excited to be part of this kind of innovative research and to see how this new technology could actually impact their workloads. Our focus in the beta trials was to try any idea that a customer had of how Moxi could provide value—if it seemed at all reasonable, then we would quickly try to mock something up and try it.

I think it validates our human-robot interaction approach to building the company, of getting the technology out there in front of customers to make sure that we’re building the product that they really need. We started to see common workflows across hospitals—there are different kinds of patient care that’s happening, but the kinds of support and supplies and things that are moving around the hospital are similar—and so then we felt that we had learned what we needed to learn from the beta trial and we were ready to launch with our first customers.

Photo: Diligent Robotics

The primary function that Moxi has right now, of restocking and delivery, was that there from the beginning? Or was that something that people asked for and you realized, oh, well, this is how a robot can actually be the most useful?

We knew from the beginning that our goal was to provide the kind of operational support that an end-to-end mobile manipulation platform can do, where you can go somewhere autonomously, pick something up, and bring it to another location and put it down. With each of our beta customers, we were very focused on opportunities where that was the case, where nurses were wasting time.

We did a lot of that kind of discovery, and then you just start seeing that it’s not rocket science—there are central supply places where things are kept around the hospital, and nurses are running back and forth to these places multiple times a day. We’d look at some particular task like admission buckets, or something else that nurses have to do every day, and then we’d say, where are the places that automation can really fit in? Some of that support is just navigation tasks, like going from one place to another; some actually involves manipulation, like you need to press this button or you need to pick up this thing. But with Moxi, we have a mobility and a manipulation component that we can put to work, to redefine workflows to include automation.

You mentioned that as part of the beta program that you were mocking the robot up to try all kinds of customer ideas. Was there something that hospitals really wanted the robot to do, that you mocked up and tried but just didn’t work at all?

We were pretty good at not setting ourselves up for failure. I think the biggest thing would be, if there was something that was going to be too heavy for the Kinova arm, or the Robotiq gripper, that’s something we just can’t do right now. But honestly, it was a pretty small percentage of things that we were kind of asked to manipulate that we had to say, oh no, sorry, we can’t lift that much or we can’t grip that wide. The other reason that things that we tried in the beta didn’t make it into our roadmap is if there was an idea that came up with only one of the beta sites. One example is delivering water: One of the beta sites was super excited about having water delivered to the patients every day, ahead of medication deliveries, which makes a lot of sense, but when we start talking to hospital leadership or other people, in other hospitals, it’s definitely just a “nice to have.” So for us, from a technical standpoint, it doesn’t make as much sense to devote a lot of resources into making water delivery a real task if it’s just going to be kind of a “nice to have” for a small percentage of our hospitals. That’s more how that R&D went—if we heard it from one hospital we’d ask, is this something that everybody wants, or just an idea that one person had. 

Let’s talk about how Moxi does what it does. How does the picking process work?

We’re focused on very structured manipulation; we’re not doing general purpose manipulation, and so we have a process for teaching Moxi a particular supply room. There are visual cues that are used to orient the robot to that supply room, and then once you are oriented you know where a bin is. Things don’t really move around a great deal in the supply room; the bigger variability is just how full each of the bins is.

The things that the robot is picking out of the bins are very well known, and we make sure that hospitals have a drop off location outside the patient’s room. In about half the hospitals we were in, they already had a drawer where the robot could bring supplies, but sometimes they didn’t have anything, and then we would install something like a mailbox on the wall. That’s something that we’re still working out exactly—it was definitely a prototype for the beta trials, and we’re working out how much that’s going to be needed in our future roll out.

“A robot needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around” —Andrea Thomaz, Diligent Robotics

These aren’t supply rooms that are dedicated to the robot—they’re also used by humans who may move things around unpredictably. How does Moxi deal with the added uncertainty?

That’s really the entire focus of our human-guided learning approach—having the robot build manipulation skills with perceptual cues that are telling it about different anchor points to do that manipulation skill with respect to, and learning particular grasp strategies for a particular category of objects. Those kinds of strategies are going to make that grasp into that bin more successful, and then also learning the sensory feedback that’s expected on a successful grasp versus an unsuccessful one, so that you have the ability to retry until you get the expected sensory feedback.
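Diligent hasn’t published this pipeline, but the retry-until-expected-feedback loop Thomaz describes can be sketched roughly as follows. All of the helper calls here (detect_anchor, grasp_strategy_for, read_gripper_feedback, and so on) are hypothetical stand-ins for Moxi’s perception, learned grasp strategies, and gripper sensing, not Diligent’s actual API.

```python
# Hypothetical sketch of a retry-until-expected-feedback grasp loop, in the
# spirit of the human-guided learning approach described above. None of these
# helpers are Diligent's APIs; they stand in for perception, a learned grasp
# strategy per object category, and gripper/force sensing.

MAX_ATTEMPTS = 3

def pick_from_bin(robot, bin_id, item_class):
    # Locate the bin relative to a visual anchor cue in the supply room.
    anchor_pose = robot.detect_anchor(bin_id)
    # Look up the grasp strategy learned for this category of object.
    grasp = robot.grasp_strategy_for(item_class)

    for attempt in range(MAX_ATTEMPTS):
        robot.execute_grasp(anchor_pose, grasp)
        feedback = robot.read_gripper_feedback()   # force / width / tactile signal

        # Compare against the sensory signature learned for a successful grasp.
        if grasp.matches_expected_feedback(feedback):
            return True                            # item secured, continue the delivery
        robot.release_and_retreat()                # unexpected feedback: try again

    return False                                   # give up and flag for a human
```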

There must also be plenty of uncertainty when Moxi is navigating around the hospital, which is probably full of people who’ve never seen it before and want to interact with it. To what extent is Moxi designed for those kinds of interactions? And if Moxi needs to be somewhere because it has a job to do, how do you mitigate or avoid them?

One of the things that we liked about hospitals as a semi-structured environment is that even the human interaction that you’re going to run into is structured as well, more so than somewhere like a shopping mall. In a hospital you have a kind of idea of the kind of people that are going to be interacting with the robot, and you can have some expectations about who they are and why they’re there and things, so that’s nice.

We had gone into the beta trial thinking, okay, we’re not doing any patient care, we’re not going into patients’ rooms, we’re bringing things to right outside the patient rooms, we’re mostly going to be interacting with nurses and staff and doctors. We had developed a lot of the social capabilities, little things that Moxi would do with the eyes or little sounds that would be made occasionally, really thinking about nurses and doctors that were going to be in the hallways interacting with Moxi. Within the first couple weeks at the first beta site, the patients and general public in the hospital were having so many more interactions with the robot than we expected. There were people who were, like, grandma is in the hospital, so the entire family comes over on the weekend, to see the robot that happens to be on grandma’s unit, and stuff like that. It was fascinating.

We always knew that being socially acceptable and fitting into the social fabric of the team was important to focus on. A robot needs to have both sides of that coin—it needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around. But in the first couple weeks in our first beta trial, we quickly had to ramp up and say, okay, what else can Moxi do to be social? We had the robot, instead of just going to the charger in between tasks, taking an extra social lap to see if there’s anybody that wants to take a selfie. We added different kinds of hot word detections, like for when people say “hi Moxi,” “good morning, Moxi,” or “how are you?” Just all these things that people were saying to the robot that we wanted to turn into fun interactions.

I would guess that this could sometimes be a little problematic, especially at a children’s hospital where you’re getting lots of new people coming in who haven’t seen a robot before—people really want to interact with robots and that’s independent of whether or not the robot has something else it’s trying to do. How much of a problem is that for Moxi?

That’s on our technical roadmap. We still have to figure out socially appropriate ways to disengage. But what we did learn in our beta trials is that there are even just different navigation paths that you can take, by understanding where crowds tend to be at different times. Like, maybe don’t take a path right by the cafeteria at noon, instead take the back hallway at noon. There are always different ways to get to where you’re going. Houston was a great example—in that hospital, there was this one skyway where you knew the robot was going to get held up for 10 or 15 minutes taking selfies with people, but there was another hallway two floors down that was always empty. So you can kind of optimize navigation time for the number of selfies expected, things like that.
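As a rough illustration of what “optimize navigation time for the number of selfies expected” could mean in practice, here is a minimal, hypothetical sketch; the route names, travel times, and delay estimates are invented.

```python
# Hypothetical sketch: pick a route by minimizing travel time plus the
# expected delay from people stopping the robot (e.g., for selfies).

def expected_cost(route, hour):
    travel = route["travel_min"]
    delay = route["expected_delay_min"].get(hour, 0)  # crowd-dependent delay
    return travel + delay

routes = [
    {"name": "skyway",       "travel_min": 6, "expected_delay_min": {12: 15}},
    {"name": "back hallway", "travel_min": 9, "expected_delay_min": {}},
]

hour = 12  # noon, when the skyway is busiest
best = min(routes, key=lambda r: expected_cost(r, hour))
print(best["name"])  # "back hallway" at noon; "skyway" at quieter hours
```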

Photo: Diligent Robotics

To what extent is the visual design of Moxi intended to give people a sense of what its capabilities are, or aren’t?

For us, it started with the functional things that Moxi needs. We knew that we’re doing mobile manipulation, so we’d need a base, and we’d need an arm. And we knew we also wanted it to have a social presence, and so from those constraints, we worked with our amazing head of design, Carla Diana, on the look and feel of the robot. For this iteration, we wanted to make sure it didn’t have an overly humanoid look.

Some of the previous platforms that I used in academia, like the Simon robot or the Curie robot, had very realistic eyes. But when you start to talk about taking that to a commercial setting, now you have these eyeballs and eyelids and each of those is a motor that has to work every day all day long, so we realized that you can get a lot out of some simplified LED eyes, and it’s actually endearing to people to have this kind of simplified version of it. The eyes are a big component—that’s always been a big thing for me because of the importance of attention, and being able to communicate to people what the robot is paying attention to. Even if you don’t put eyeballs on a robot, people will find a thing to attribute attention to: They’ll find the camera and say, “oh, those are its eyes!” So I find it’s better to give the robot a socially expressive focus of attention.

I would say speech is the biggest one that we have drawn the line on. We want to make sure people don’t get the sense that Moxi can understand the full English language, because people are getting really used to speech interfaces, and we don’t have an Alexa or anything like that integrated yet. That could happen in the future, but we don’t have a real need for it right now, so it’s not there. We want to make sure people don’t think of the robot as an Alexa or a Google Home or a Siri that you can just talk to, so we make sure that it just does beeps and whistles, and that kind of makes sense to people. They get that you can say stuff like “hi Moxi,” but that’s about it.

Otherwise, I think the design is really meant to be socially acceptable, we want to make sure people are comfortable, because like you’re saying, this is a robot that a lot of people are going to see for the first time, and we have to be really sensitive to the fact that the hospital is a stressful place for a lot of people, you’re already there with a sick family member and you might have a lot going on, and we want to make sure that we aren’t contributing to additional stress in your day.

You mentioned that you have a vision for human-robot teaming. Longer term, how do you feel like people should be partnering more directly with robots?

Right now, we’re really focused on looking at operational processes that hit two or three different departments in the hospital and require a nurse to do this and a patient care technician to do that and a pharmacy or a materials supply person to do something else. We’re working with hospitals to understand how that whole team of people is making some big operational workflow happen and where Moxi could fit in. 

Some places where Moxi fits in, it’s a completely independent task. Other places, it might be a nurse on a unit calling Moxi over to do something, and so there might be a more direct interaction sometimes. Other times it might be that we’re able to connect to the electronic health record and infer automatically that something’s needed and then it really is just happening more in the background. We’re definitely open to both explicit interaction with the team where Moxi’s being called to do something in particular by someone, but I think some of the more powerful examples from our beta trials were ones that really take that cognitive burden off of people—where Moxi could just infer what could happen in the background.

In terms of direct collaboration, like side-by-side working together kind of thing, I do think there’s just such vast differences between—if you’re talking about a human and a robot cooperating on some manipulation task, robots are just—it’s going to be awhile before a robot is going to be as capable. If you already have a person there, doing some kind of manipulation task, it’s going to be hard for a robot to compete, and so I think it’s better to think about places where the person could be used for better things and you could hand something else off entirely to the robot.

So how feasible in the near-term is a nurse saying, “Moxi, could you hold this for me?” How complicated or potentially useful is that?

I think that’s a really interesting example. We did a little bit of that kind of on-demand work in some of our beta trials, you know, “hey Moxi, come over here and do this thing,” just to compare on-demand requests with pre-planned activities. But if you can find things in workflows that can be automated, where you can infer what the robot is going to be doing, we think that’s going to be the biggest bang for your buck in terms of the value the robot is able to deliver.

I think that there may come a day where every clinician’s walking around and there’s always a robot available to respond to “hey, hold this for me,” and I think that would be amazing. But for now, the question is whether the robot being like a third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing, when it could instead be working all night long to get things ready for the next shift.

[ Diligent Robotics ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICARSC 2020 – April 15-17, 2020 – [Online Conference]
ICRA 2020 – May 31-June 4, 2020 – [TBD]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
RSS 2020 – July 12-16, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

You need this dancing robot right now.

By Vanessa Weiß at UPenn.

[ KodLab ]

Remember Qoobo the headless robot cat? There’s a TINY QOOBO NOW!

It’s available now on a Japanese crowdfunding site, but I can’t tell if it’ll ship to other countries.

[ Qoobo ]

Just what we need, more of this thing.

[ Vstone ]

HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

[ HiBot ]

If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator, and nozzle not (yet) included. But we will provide the support to help you build the skill.

[ Misty Robotics ]

After seeing this tweet from Kate Darling that mentions an MIT experiment in which “a group of gerbils inhabited an architectural environment made of modular blocks, which were manipulated by a robotic arm in response to the gerbils’ movements,” I had to find a video of the robot arm gerbil habitat. The best I could do was this 2007 German remake, but it’s pretty good:

[ Lutz Dammbeck ]

We posted about this research almost a year ago when it came out in RA-L, but I’m not tired of watching the video yet.

Today’s autonomous drones have reaction times of tens of milliseconds, which is not enough for navigating fast in complex dynamic environments. To safely avoid fast moving objects, drones need low-latency sensors and algorithms. We depart from state of the art approaches by using event cameras, which are novel bioinspired sensors with reaction times of microseconds. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes at relative speeds up to 10 meters/second, both indoors and outdoors.

[ UZH ]

In this video we present the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The utilized system is a collision-tolerant flying robot capable of multi-modal Localization And Mapping fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled through a Graph-based Exploration Planner implementing a specific mode for vertical exploration. The collision-tolerance of the platform was of paramount importance especially due to the thin features of the involved geometry such as handrails. The whole mission was conducted fully autonomously.

[ CERBERUS ]

At Cognizant’s Inclusion in Tech: Work of Belonging conference, Cognizant VP and Managing Director of the Center for the Future of Work, Ben Pring, sits down with Mary “Missy” Cummings. Missy is currently a Professor at Duke University and the Director of the Duke Robotics Lab. Interestingly, Missy began her career as one of the first female fighter pilots in the U.S. Navy. Working in predominantly male fields – the military, tech, academia – Missy understands the prevalence of sexism, bias and gender discrimination.

Let’s hear more from Missy Cummings on, like, everything.

[ Duke ] via [ Cognizant ]

You don’t need to mountain bike for the Skydio 2 to be worth it, but it helps.

[ Skydio ]

Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

[ Robotika ]

SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotion system consists of four legs with 5 active DoF each. Additionally, a 6 DoF manipulation arm is available. All joints of the legs and the manipulation arm are sealed against water. The arm is pressure compensated, allowing deployment in deep-sea applications.

SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since it requires no extra energy to maintain its posture and position compared to traditional underwater ROVs (remotely operated vehicles), SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

[ DFKI ]

In collaboration with the Army and Marines, 16 active-duty Army soldiers and Marines used Near Earth’s technology to safely execute 64 resupply missions in an operational demonstration at Fort AP Hill, Virginia in Sep 2019. This video shows some of the modes used during the demonstration.

[ NEA ]

For those of us who aren’t either lucky enough or cursed enough to live with our robotic co-workers, HEBI suggests that now might be a great time to try simulation.

[ GitHub ]

DJI Phantom 4 Pro V2.0 is a complete aerial imaging solution, designed for the professional creator. Featuring a 1-inch CMOS sensor that can shoot 4K/60fps videos and 20MP photos, the Phantom 4 Pro V2.0 grants filmmakers absolute creative freedom. The OcuSync 2.0 HD transmission system ensures stable connectivity and reliability, five directions of obstacle sensing ensures additional safety, and a dedicated remote controller with a built-in screen grants even greater precision and control.

US $1600, or $2k with VR goggles.

[ DJI ]

Not sure why now is the right time to introduce the Fetch research robot, but if you forgot it existed, here’s a reminder.

[ Fetch ]

Two keynotes from the MBZIRC Symposium, featuring Oussama Khatib and Ron Arkin.

[ MBZIRC ]

And here are a couple of talks from the 2020 ROS-I Consortium.

Roger Barga, GM of AWS Robotics and Autonomous Services at Amazon, shares some of the latest developments around ROS and advanced robotics in the cloud.

Alex Shikany, VP of Membership and Business Intelligence for A3, shares insights from his organization on the relationship between robotics growth and employment.

[ ROS-I ]

Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

[ Microsoft ]

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work that was posted today to arXiv.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that performance is maximized while power consumption and chip area are minimized. Heightening the challenge is the requirement that all this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
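The paper’s code isn’t reproduced in this article, but the structure Goldie and Mirhoseini describe, placing macros one at a time and scoring the finished layout with a proxy reward, can be sketched as a toy reinforcement-learning loop. Everything below (the environment, the random policy, the proxy metrics, and the weights) is a hypothetical stand-in to illustrate the idea, not Google’s implementation.

```python
# Toy sketch of reinforcement learning for placement: an agent places macros
# one at a time and receives a reward for the finished layout that combines
# proxies for power, performance, and area. All metrics and weights here are
# placeholders, not the real system's cost model.

import random

class PlacementEnv:
    """Stand-in for a chip-placement environment."""
    def __init__(self, num_macros, grid_size):
        self.num_macros = num_macros
        self.grid = grid_size
        self.reset()

    def reset(self):
        self.placements = []              # grid cell chosen for each macro so far
        return self.placements

    def step(self, action):
        self.placements.append(action)
        done = len(self.placements) == self.num_macros
        # Reward arrives only when the layout is complete.
        reward = self._proxy_reward() if done else 0.0
        return self.placements, reward, done

    def _proxy_reward(self):
        # Placeholder proxies; the real system estimates things like wirelength,
        # congestion, and density from the netlist and the layout.
        power_proxy = random.random()
        perf_proxy = random.random()
        area_proxy = random.random()
        return -(0.5 * power_proxy + 0.3 * perf_proxy + 0.2 * area_proxy)

def random_policy(state, grid_size):
    # A trained agent would use a neural network here; we pick cells at random.
    return (random.randrange(grid_size), random.randrange(grid_size))

env = PlacementEnv(num_macros=10, grid_size=32)
for episode in range(3):
    state, done = env.reset(), False
    while not done:
        state, reward, done = env.step(random_policy(state, env.grid))
    # A real agent would update its network parameters from `reward` here,
    # getting better at placement the more designs it does.
    print(f"episode {episode}: final proxy reward {reward:.3f}")
```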

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – [ONLINE EVENT]
ICARSC 2020 – April 15-17, 2020 – [ONLINE EVENT]
ICRA 2020 – May 31-June 4, 2020 – [SEE ATTENDANCE SURVEY]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

UBTECH Robotics’ ATRIS, AIMBOT, and Cruzr robots were deployed at a Shenzhen hospital specialized in treating COVID-19 patients. The company says the robots, which are typically used in retail and hospitality scenarios, were modified to perform tasks that can help keep the hospital safer for everyone, especially front-line healthcare workers. The tasks include providing videoconferencing services between patients and doctors, monitoring the body temperatures of visitors and patients, and disinfecting designated areas.

The Third People’s Hospital of Shenzhen (TPHS), the only designated hospital for treating COVID-19 in Shenzhen, a metropolis with a population of more than 12.5 million, has introduced an intelligent anti-epidemic solution to combat the coronavirus.

AI robots are playing a key role. The UBTECH-developed robot trio, namely ATRIS, AIMBOT, and Cruzr, are giving a helping hand to monitor body temperature, detect people without masks, spray disinfectants and provide medical inquiries.

[ UBTECH ]

Someone has spilled gold all over the place! Probably one of those St. Paddy’s leprechauns... Anyways... It happened near a Robotiq Wrist Camera and Epick setup so it only took a couple of minutes to program and “pick and place” the mess up.

Even in situations like these, it’s important to stay positive and laugh a little, so we had this ready and thought we’d still share. Stay safe!

[ Robotiq ]

HEBI Robotics is helping out with social distancing by controlling a robot arm in Austria from their lab in Pittsburgh.

Can’t be too careful!

[ HEBI Robotics ]

Thanks Dave!

SLIDER, a new robot under development at Imperial College London, reminds us a little bit of what SCHAFT was working on with its straight-legged design.

[ Imperial ]

Imitation learning is an effective and safe technique to train robot policies in the real world because it does not depend on an expensive random exploration process. However, due to the lack of exploration, learning policies that generalize beyond the demonstrated behaviors is still an open challenge. We present a novel imitation learning framework to enable robots to 1) learn complex real world manipulation tasks efficiently from a small number of human demonstrations, and 2) synthesize new behaviors not contained in the collected demonstrations. Our key insight is that multi-task domains often present a latent structure, where demonstrated trajectories for different tasks intersect at common regions of the state space. We present Generalization Through Imitation (GTI), a two-stage offline imitation learning algorithm that exploits this intersecting structure to train goal-directed policies that generalize to unseen start and goal state combinations.

[ GTI ]

Here are two excellent videos from UPenn’s Kod*lab showing the capabilities of their programmable compliant origami spring things.

[ Kod*lab ]

We met Bornlove when we were reporting on drones in Tanzania in 2018, and it’s good to see that he’s still improving on his built-from-scratch drone.

[ ADF ]

Laser. Guided. Sandwich. Stacking.

[ Kawasaki ]

The Self-Driving Car Research Studio is a highly expandable and powerful platform designed specifically for academic research. It includes the tools and components researchers need to start testing and validating their concepts and technologies on the first day, without spending time and resources on building DIY platforms or implementing hobby-level vehicles. The research studio includes a fleet of vehicles; software tools enabling researchers to work in Simulink, C/C++, Python, or ROS, with pre-built libraries, models, and simulated-environment support; and even a set of reconfigurable floor panels with road patterns and a set of traffic signs. The research studio’s feature vehicle, QCar, is a 1/10-scale model vehicle powered by an NVIDIA Jetson TX2 supercomputer and equipped with lidar, 360-degree vision, a depth sensor, an IMU, encoders, and other sensors, as well as user-expandable I/O.

[ Quanser ]

Thanks Zuzana!

The Swarm-Probe Enabling ATEG Reactor, or SPEAR, is a nuclear electric propulsion spacecraft that uses a new, lightweight reactor moderator and advanced thermoelectric generators (ATEGs) to greatly reduce overall core mass. If the total mass of an NEP system could be reduced to levels that could be launched on smaller vehicles, these devices could deliver scientific payloads to anywhere in the solar system.

One major destination of recent importance is Europa, one of the moons of Jupiter, which may contain traces of extraterrestrial life deep beneath the surface of its icy crust. Occasionally, the subsurface water on Europa violently breaks through the icy crust and bursts into the space above, creating a large water plume. One proposed method of searching for evidence of life on Europa is to orbit the moon and scan these plumes for ejected organic material. By deploying a swarm of Cubesats, these plumes can be flown through and analyzed multiple times to find important scientific data.

[ SPEAR ]

This hydraulic cyborg hand costs just $35.

Available next month in Japan.

[ Elekit ]

Microsoft is collaborating with researchers from Carnegie Mellon University and Oregon State University, collectively named Team Explorer, to compete in the DARPA Subterranean (SubT) Challenge. These challenges are designed to test how drones and robots perform in hazardous physical environments that humans can’t access safely. By participating, the teams hope to find solutions that will help emergency first responders find survivors more quickly.

[ Team Explorer ]

Aalborg University Hospital is the largest hospital in the North Jutland region of Denmark. Up to 3,000 blood samples arrive here in the lab every day. They must be tested and sorted – a time-consuming and monotonous process which was done manually until now. The university hospital has now automated the procedure: a robot-based system and intelligent transport boxes ensure the quality of the samples – and show how workflows in hospitals can be simplified by automation.

[ Kuka ]

This video shows human-robot collaboration for assembly of a gearbox mount in a realistic replica of a production line of Volkswagen AG. Knowledge-based robot skills enable autonomous operation of a mobile dual-arm robot side by side with a worker.

[ DFKI ]

A brief overview of what’s going on in Max Likhachev’s lab at CMU.

Always good to see PR2 keeping busy!

[ CMU ]

The Intelligent Autonomous Manipulation (IAM) Lab at the Carnegie Mellon University (CMU) Robotics Institute brings together researchers to address the challenges of creating general purpose robots that are capable of performing manipulation tasks in unstructured and everyday environments. Our research focuses on developing learning methods for robots to model tasks and acquire versatile and robust manipulation skills in a sample-efficient manner.

[ IAM Lab ]

Jesse Hostetler is an Advanced Computer Scientist in the Vision and Learning org at SRI International in Princeton, NJ. In this episode of The Dish TV they explore the different aspects of artificial intelligence, and creating robots that use sleep and dream states to prevent catastrophic forgetting.

[ SRI ]

On the latest episode of the AI Podcast, Lex interviews Anca Dragan from UC Berkeley.

Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

[ AI Podcast ]

aside.inlay.CoronaVirusCoverage.xlrg { font-family: "Helvetica", sans-serif; text-transform: uppercase; text-align: center; border-width: 4px 0; border-top: 2px solid #666; border-bottom: 2px solid #666; padding: 10px 0; font-size: 18px; font-weight: bold; } span.LinkHereRed { color: #cc0000; text-transform: uppercase; font-family: "Theinhardt-Medium", sans-serif; }

When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

Yang is impressed by the different robotic systems being deployed as part of the COVID-19 response. There are robots checking patients for fever, robots disinfecting hospitals, and robots delivering medicine and food. But he thinks robotics can do even more.

Photo: Shanghai Jiao Tong University
Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.

“Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak,” he says. While the robots currently being used rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.

“What I fear is that there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”

Yang calls for a global effort to tackle the problem. “In terms of the way to move forward, I think we need to be more coordinated globally,” he says. “Because many of the challenges require that we work collectively to deal with them.”

Our full conversation, edited for clarity and length, is below.

IEEE Spectrum: How is the situation in Shanghai?

Guang-Zhong Yang: I came back to Shanghai about 10 days ago, via Hong Kong, so I’m now under self-imposed isolation in a hotel room just to be cautious, for two weeks. The general feeling in Shanghai is that it’s really calm and orderly. Everything seems well under control. And as you probably know, in recent days the number of new cases is steadily dropping. So the main priority for the government is to restore normal routines, and also for companies to go back to work. Of course, people are still very cautious, and there are systematic checks in place. In my hotel, for instance, I get checked twice a day for my temperature to make sure that all the people in the hotel are well.

Are most people staying inside, are the streets empty?

No, the streets are not empty. In fact, in Minhang, next to Shanghai Jiao Tong University, things are going back to normal. Not at full capacity, but stores and restaurants are gradually opening. And people are thinking about the essential travels they need to do, what they can do remotely. As you know in China we have very good online order and delivery services, so people use them a lot more. I was really impressed by how the whole thing got under control, really.

Has Shanghai Jiao Tong University switched to online classes?

Yes. Since last week, the students are attending online lectures. The university has 1449 courses for undergrads and 657 for graduate students. I participated in some of them. It’s really well run. You can have the typical format with a presenter teaching the class, but you can also have part of the lecture with the students divided into groups and having discussions. Of course what’s really affected is laboratory-based work. So we’ll need to wait for some more time to get back into action.

What do you think of the robots being used to help fight the outbreak?

I’ve seen reports showing a variety of robots being deployed. Disinfection robots that use UV light in hospitals. Drones being used for transporting samples. There’s a prototype robot, developed by the Chinese Academy of Sciences, to remotely collect oropharyngeal swabs from patients for testing, so a medical worker doesn’t have to directly swab the patient. In my hotel, there’s a robot that brings my meals to my door. This little robot can manage to get into the lift, go to your room, and call you to open the door. I’m a roboticist myself and I find it striking how well this robot works every time! [Laughs.]

Photo: UVD Robots
UVD Robots has shipped hundreds of ultraviolet-C disinfection robots like the one above to Chinese hospitals.

After Japan’s Fukushima nuclear emergency, the robotics community realized that it needed to be better prepared. It seems that we’ve made progress with disaster-response robots, but what about dealing with pandemics?

I think that for events involving infectious diseases, like this coronavirus outbreak, when they happen, everybody realizes the importance of robots. The challenge is that at most research institutions, people are more concerned with specific research topics, and that’s indeed the work of a scientist—to dig deep into the scientific issues and solve those specific problems. But we also need to have a global view to deal with big challenges like this pandemic.

So I think what we need to do, starting now, is to have a more systematic effort to make sure those robots can be deployed when we need them. We just need to recompose ourselves and work to identify the technologies that are ready to be deployed, and what are the key directions we need to pursue. There’s a lot we can do. It’s not too late. Because this is not going to disappear. We have to see the worst before it gets better.


So what should we do to be better prepared?

After a major crisis, when everything is under control, people’s priority is to go back to our normal routines. The last thing in people’s minds is, What should we do to prepare for the next crisis? And the thing is, you can’t predict when the next crisis will happen. So I think we need three levels of action, and it really has to be a global effort. One is at the government level, in particular funding agencies: How to make sure we can plan ahead and to prepare for the worst.

Another level is the robotics community, including organizations like the IEEE: We need leadership to advocate for these issues and promote activities like robotics challenges. We see challenges for disasters, logistics, drones—how about a robotics challenge for infectious diseases? I was surprised, and a bit disappointed in myself, that we didn’t think about this before. So for the editorial board of Science Robotics, for instance, this will become an important topic for us to rethink.

And the third level is our interaction with front-line clinicians—our interaction with them needs to be stronger. We need to understand the requirements and not be obsessed with pure technologies, so we can ensure that our systems are effective, safe, and can be rapidly deployed. I think that if we can mobilize and coordinate our effort at all these three levels, that would be transformative. And we’ll be better prepared for the next crisis.

Are there projects taking place at the Institute of Medical Robotics that could help with this pandemic?

The institute has been in full operation for just over a year now. We have three main areas of research: The first is surgical robotics, which is my main area of research. The second area is in rehabilitation and assistive robots. The third area is hospital and laboratory automation. One important lesson that we learned from the coronavirus is that, if we can detect and intervene early, we have a better chance of containing it. And for other diseases, it’s the same. For cancer, early detection based on imaging and other sensing technologies, is critical. So that’s something we want to explore—how robotics, including technologies like laboratory automation, can help with early detection and intervention.

“One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks”

One area we are working on is automated intensive-care unit wards. The idea is to build negative-pressure ICU wards for infectious diseases equipped with robotic capabilities that can take care of certain critical care tasks. Some tasks could be performed remotely by medical personnel, while other tasks could be fully automated. A lot of the technologies that we already use in surgical robotics can be translated into this area. We’re hoping to work with other institutions and share our expertise to continue developing this further. Indeed, this technology is not just for emergency situations. It will also be useful for routine management of infectious disease patients. We really need to rethink how hospitals are organized in the future to avoid unnecessary exposure and cross-infection.

Photo: Shanghai Jiao Tong University
Shanghai Jiao Tong University’s Institute of Medical Robotics is researching areas like micro/nano systems, surgical and rehabilitation robotics, and human-robot interaction.

I’ve seen some recent headlines—“China’s tech fights back,” “Coronavirus is the first big test for futuristic tech”—many people expect technology to save the day.

When there’s a major crisis like this pandemic, in the general public’s mind, people want to find a magic cure that will solve all the problems. I completely understand that expectation. But technology can’t always do that, of course. What technology can do is to help us to be better prepared. For example, it’s clear that in the last few years self-navigating robots with localization and mapping are becoming a mature technology, so we should see more of those used for situations like this. I’d also like to see more technologies developed for front-line management of patients, like the robotic ICU I mentioned earlier. Another area is public transportation systems—can they have an element of disease prevention, using technology to minimize the spread of diseases so that lockdowns are only imposed as a last resort?

And then there’s the problem of people being isolated. You probably saw that Italy has imposed a total lockdown. That could have a major psychological impact, particularly for people who are vulnerable and living alone. There is one area of robotics, called social robotics, that could play a part in this as well. I’ve been in this hotel room by myself for days now—I’m really starting to feel the isolation…

We should have done a Zoom call.

Yes, we should. [Laughs.] I guess this isolation, or quarantine for various people, also provides the opportunity for us to reflect on our lives, our work, our daily routines. That’s the silver lining that we may see from this crisis.

Photo: Unity Drive Innovation
Unity Drive, a startup spun out of Hong Kong University of Science and Technology, is deploying self-driving vehicles to carry out contactless deliveries in three Chinese cities.

While some people say we need more technology during emergencies like this, others worry that companies and governments will use things like cameras and facial recognition to increase surveillance of individuals.

A while ago we published an article in Science Robotics listing the 10 grand challenges for robotics. One of the grand challenges is concerned with legal and ethical issues, which include what you mentioned in your question. Respecting privacy, and also being sensitive about individual and citizens’ rights—these are very, very important, because we must operate within these legal and ethical boundaries. We should not use technologies that will intrude in people’s lives. You mentioned that some people say that we don’t have enough technology, and that others say we have too much. And I think both have a point. What we need to do is to develop technologies that are appropriate to be deployed in the right situation and for the right tasks.

Many researchers seem eager to help. What would you say to roboticists interested in helping fight this outbreak or prepare for the next one?

For medical robotics research, my experience is that for your technology to be effective, it has to be application oriented. You need to ensure that end users, like the clinicians who will use your robot or, in the case of assistive robots, the patients, are deeply involved in the development of the technology. And the second thing is really to think out of the box, about how to develop radically different new technologies, because robotics research is very hands-on and there’s a tendency to adapt what’s readily available out there. For your technology to have a major impact, you need to fundamentally rethink your research and innovation, not just follow the waves.

For example, at our institute we’re investing a lot of effort in the development of micro and nano systems and also new materials that could one day be used in robots. Because for micro robotic systems, we can’t rely on the more traditional approach of using motors and gears that we use in larger systems. So my suggestion is to work on technologies that not only have a deep science element but can also become part of a real-world application. Only then can we be sure to have strong technologies to deal with future crises.

Working from home is the new normal, at least for those of us whose jobs mostly involve tapping on computer keys. But what about researchers who are synthesizing new chemical compounds or testing them on living tissue or on bacteria in petri dishes? What about those scientists rushing to develop drugs to fight the new coronavirus? Can they work from home?

Silicon Valley-based startup Strateos says its robotic laboratories allow scientists doing biological research and testing to do so right now. Within a few months, the company believes it will have remote robotic labs available for use by chemists synthesizing new compounds. And, the company says, those new chemical synthesis lines will connect with some of its existing robotic biology labs so a remote researcher can seamlessly transfer a new compound from development into testing.

Click here for additional coronavirus coverage

The company’s first robotic labs, up and running in Menlo Park, Calif., since 2012, were developed by one of Strateos’ predecessor companies, Transcriptic. Last year Transcriptic merged with 3Scan, a company that produces digital 3D histological models from scans of tissue samples, to form Strateos. This facility has four robots that run experiments in large, pod-like laboratories for a number of remote clients, including DARPA and the California Pacific Medical Center Research Institute.

Strateos CEO Mark Fischer-Colbrie explains Strateos’ process:

“It starts with an intake kit,” he says, in which the researchers match standard lab containers with a web-based labeling system. Then scientists use Strateos’ graphical user interface to select various tests to run. These can include tests of the chemical properties of compounds, biochemical processes including how compounds react to enzymes or where compounds bind to molecules, and how synthetic yeast organisms respond to stimuli. Soon the company will be adding the capability to do toxicology tests on living cells.

Photo: Strateos A robot in one of Strateos’ cloud labs manages inventory

“Our approach is fully automated and programmable,” Fischer-Colbrie says. “That means that scientists can pick a standard workflow, or decide how a workflow is run. All the pieces of equipment, which include acoustic liquid handlers, spectrophotometers, real-time quantitative polymerase chain reaction instruments, and flow cytometers are accessible.

“The scientists can define every step of the experiment with various parameters, for example, how long the robot incubates a sample and whether it does it fast or slow.”
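To make that workflow concrete, here is a rough sketch of what describing and submitting a run to a remote robotic lab might look like in code. The step names, fields, and endpoint URL are illustrative assumptions for this article, not the actual Strateos interface, which is accessed through the company’s own web tools.

# A hypothetical sketch of how a run might be described and submitted to a
# remote robotic lab. The step names, fields, and endpoint URL below are
# illustrative assumptions, not the actual Strateos interface.
import json
import urllib.request

run_request = {
    "project": "compound-screen-042",
    "containers": [{"label": "plate-1", "type": "96-flat"}],  # matched via the intake kit
    "steps": [
        {"op": "dispense", "to": "plate-1", "reagent": "enzyme-mix", "volume_ul": 50},
        {"op": "incubate", "target": "plate-1", "duration_min": 30,
         "temperature_c": 37, "shaking": "slow"},  # e.g., incubation length and speed
        {"op": "spectrophotometry", "target": "plate-1", "wavelength_nm": 450},
    ],
}

def submit_run(endpoint: str, payload: dict) -> None:
    """POST a run description to a (hypothetical) cloud-lab endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print("server replied:", resp.status)

# Inspect the payload locally; submitting would require a real endpoint:
print(json.dumps(run_request, indent=2))
# submit_run("https://lab.example.com/api/runs", run_request)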

To develop the system, Strateos’ engineers had to “connect the dots, that is, connect the lab automation to the web,” rather than dramatically push technology’s envelope, Fischer-Colbrie explains, “bringing the concepts of web services and the sharing economy to the life sciences.”

Nobody had done it before, he says, simply because researchers in the life sciences had been using traditional laboratory techniques for so long, it didn’t seem like there could be a real substitute to physically being in the lab.

Late last year, in a partnership with Eli Lilly, Strateos added four more biology lab modules in San Diego and by July plans to integrate these with eight chemistry robots that will, according to a press release, “physically and virtually integrate several areas of the drug discovery process—including design, synthesis, purification, analysis, sample management, and hypothesis testing—into a fully automated platform. The lab includes more than 100 instruments and storage for over 5 million compounds, all within a closed-loop and automated drug discovery platform.”

Some of the capacity will be used exclusively by Lilly scientists, but, Fischer-Colbrie says, Strateos capped that usage and will sell lab capacity beyond the cap to others. It currently prices biological assays on a per-plate basis and will price chemical reactions per compound.

The company plans to add labs in additional cities as demand for the services increases, in much the same way that Amazon Web Services adds data centers in multiple locales.

It has also started selling access to its software systems directly to companies looking to run their own, dedicated robotic biology labs.

Strateos, of course, had developed this technology long before the new coronavirus pushed people into remote work. Fischer-Colbrie says it has several advantages over traditional lab experiments in addition to enabling scientists to work from home. Experiments run via robots are easier to standardize, he says, and record more metadata than is customary, or even possible, during a manual experiment. This will likely make repeating research easier, allow geographically separated scientists to work together, and create a shorter path to bringing AI into the design and analysis of experiments. “Because we can easily repeat experiments and generate clean datasets, training data for AI systems is cleaner,” he said.

And, he says, robotic labs open up the world of drug discovery to small companies and individuals who don’t have funding for expensive equipment, expanding startup opportunities in the same way software companies boomed when they could turn to cloud services for computing capacity instead of building their own server farms.

Says Alok Gupta, Strateos senior vice president of engineering, “This allows scientists to focus on the concept, not on buying equipment, setting it up, calibrating it; they can just get online and start their work.”

“It’s frictionless science,” says CEO Fischer-Colbrie, “giving scientists the ability to concentrate on their ideas and hypotheses.”

We’ve been writing about the musical robots from Georgia Tech’s Center for Music Technology for many, many years. Over that time, Gil Weinberg’s robots have progressed from being able to dance along to music that they hear, to being able to improvise along with it, to now being able to compose, play, and sing completely original songs.

Shimon, the marimba-playing robot that has performed in places like the Kennedy Center, will be going on a new tour to promote an album that will be released on Spotify next month, featuring songs written (and sung) entirely by the robot.

Deep learning is famous for producing results that seem like they sort of make sense, but actually don’t at all. Key to Shimon’s composing ability is its semantic knowledge—the ability to make thematic connections between things, which is a step beyond just throwing some deep learning at a huge database of music composed by humans (although that’s Shimon’s starting point, a dataset of 50,000 lyrics from jazz, prog rock, and hip-hop). So rather than just training a neural network that relates specific words that tend to be found together in lyrics, Shimon can recognize more general themes and build on them to create a coherent piece of music.

Fans of Shimon may have noticed that the robot has had its head almost completely replaced. It may be tempting to say “upgraded,” since the robot now has eyes, eyebrows, and a mouth, but I’ll always have a liking for Shimon’s older design, which had just one sort of abstract eye thing (which functions as a mouth on the current design). Personally, I very much appreciate robots that are able to be highly expressive without resorting to anthropomorphism, but in its new career as a pop sensation, I guess having eyes and a mouth is, like, important, or something?

To find out more about Shimon’s new talents (and new face), we spoke with Georgia Tech professor Gil Weinberg and his PhD student Richard Savery.

IEEE Spectrum: What makes Shimon’s music fundamentally different from music that could have been written by a human? 

Richard Savery: Shimon’s musical knowledge is drawn from training on huge datasets of lyrics, around 20,000 prog rock songs and another 20,000 jazz songs. With this level of data Shimon is able to draw on far more sources of inspiration than a human would ever be able to. At a fundamental level, Shimon is able to take in huge amounts of new material very rapidly, so within a day it can change from focusing on jazz lyrics to hip-hop to prog rock, or a hybrid combination of them all.

How much human adjustment is involved in developing coherent melodies and lyrics with Shimon?

Savery: Just like working with a human collaborator, there are many different ways Shimon can interact. Shimon can perform a range of musical tasks, from composing a full song by itself to just playing a part composed by a human. For the new album we focused on human-robot collaboration, so every song has some elements that were created by a human and some by Shimon. Rather than adjusting Shimon’s output, we try to have a musical dialogue where we get inspired and build on Shimon’s creations. Like any band, each of us has our own strengths and weaknesses; in our case no one else writes lyrics, so it was natural for Shimon to take responsibility for the lyrics. As a lyricist there are a few ways Shimon can work. First, Shimon can be given some keywords or ideas, like “earth” and “humanity,” and then generate a full song of lyrics around those words. In addition to keywords, Shimon can also take a melody and write lyrics that fit over it.

The press release mentions that Shimon is able to “decide what’s good.” What does that mean?

Savery: When Shimon writes lyrics, the first step is generating thousands of phrases. So for those keywords Shimon will generate lots of material about “earth,” and then also generate related synonyms and antonyms like “world” and “ocean.” Like a human composer, Shimon has to parse through lots of ideas to choose what’s good from the creations. Shimon has preferences toward maintaining the same sentiment, or gradually shifting sentiment, as well as trying to keep rhymes going between lines. For Shimon, good lyrics should rhyme, keep some core thematic ideas going, maintain a similar sentiment, and have some similarity to existing lyrics.
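As a rough illustration of the “generate many candidates, then keep what’s good” step Savery describes, the toy sketch below scores candidate lines by theme-word overlap, a crude rhyme check, and sentiment consistency. The heuristics and word lists are stand-ins invented for this example; Shimon’s actual models are deep-learning based and far more sophisticated.

# A toy sketch of the "generate many, then pick what's good" filtering step.
# The scoring heuristics here (suffix rhyme, word overlap, a tiny sentiment
# lexicon) are stand-ins for illustration, not Shimon's actual models.

POSITIVE = {"love", "light", "hope", "rise"}
NEGATIVE = {"fall", "dark", "lost", "cold"}

def sentiment(line: str) -> int:
    words = set(line.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def rhymes(a: str, b: str) -> bool:
    # Very rough: compare the last three letters of each line's final word.
    return a.split()[-1][-3:] == b.split()[-1][-3:]

def score(candidate: str, previous: str, theme_words: set[str]) -> float:
    words = set(candidate.lower().split())
    theme_score = len(words & theme_words)             # keep core ideas going
    rhyme_score = 1.0 if rhymes(candidate, previous) else 0.0
    sentiment_score = -abs(sentiment(candidate) - sentiment(previous))
    return 2.0 * theme_score + rhyme_score + 0.5 * sentiment_score

previous_line = "the ocean holds the weight of earth"
candidates = [
    "a world reborn in light and mirth",
    "cold machines forget the dirt",
    "humanity will find its worth",
]
theme = {"earth", "world", "ocean", "humanity"}
best = max(candidates, key=lambda c: score(c, previous_line, theme))
print(best)  # picks the candidate that rhymes and stays on theme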

I would guess that Shimon’s voice could have been almost anything—why choose this particular voice?

Gil Weinberg: Since we did not have singing voice synthesis expertise in our Robotic Musicianship group at Georgia Tech, we looked to collaborate with other groups. The Music Technology Group at Pompeu Fabra University developed a remarkable deep learning-based singing voice synthesizer and was excited to collaborate. As part of the process, we sent them audio files of songs recorded by one of our students to be used as a dataset to train their neural network. At the end, we decided to use another voice that was trained on a different dataset, since we felt it better represented Shimon’s genderless personality and was a better fit to the melodic register of our songs. 

“We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us” —Gil Weinberg, Georgia Tech

Can you tell us about the changes made to Shimon’s face?

Weinberg: We are big fans of avoiding exaggerated anthropomorphism and using too many degrees of freedom in our robots. We feel that this might push robots into the uncanny valley. But after much deliberation, we decided that a singing robot should have a mouth to represent the embodiment of singing and to look believable. It was important to us, though, not to add DoFs for this purpose, rather to replace the old eye DoF with a mouth to minimize complexity. Originally, we thought to repurpose both DoFs of the old eye (bottom eyelid and top eye lid) to represent top lip and bottom lip. But we felt this might be too anthropomorphic, and that it would be more challenging and interesting to use only one DoF to automatically control mouth size based on the lyric’s phonemes. For this purpose, we looked at examples as varied as parrot vocalization and Muppets animation, to learn how animals and animators go about mouth actuation. Once we were happy with what we developed, we decided to use the old top eyelid DoFs as an eyebrow, to add more emotion to Shimon’s expression. 

Are you able to take advantage of any inherently robotic capabilities of Shimon?

Weinberg: One of the most important new features of the new Shimon, in addition to its singing song-writing capabilities, is a total redesign of its striking arms. As part of the process we replaced the old solenoid-based actuators with new brushless DC motors that can support a much faster striking (up to 30 hits per second) as well as a wider and more linear dynamic range—from very soft pianissimo to much louder fortissimo. This not only allows for a much richer musical expression, but also supports the ability to create new humanly impossible timbres and sonorities by using 8 novel virtuosic actuators. We hope and believe that these new abilities would push human collaborators to new uncharted directions that could not be achieved in human-to-human collaboration.

How do you hope audiences will react to Shimon?

Weinberg: We hope both audiences and musicians will see Shimon as an expressive and creative musician, who can understand and connect to music like we humans do, but also has a strange and unique mind that can surprise and inspire us to listen to, play, and think about music in new ways.

What are you working on next?

Weinberg: We are currently working on new capabilities that would allow Shimon to listen to, understand, and respond to lyrics in real time. The first genre we are exploring for this functionality is rap battles. We plan to release a new album on Spotify April 10th featuring songs where Shimon not only sings but raps in real time as well.

[ Georgia Tech ]

As much as we love soft robots (and we really love soft robots), the vast majority of them operate pneumatically (or hydraulically) at larger scales, especially when they need to exert significant amounts of force. This causes complications, because pneumatics and hydraulics generally require a pump somewhere to move fluid around, so you often see soft robots tethered to external and decidedly non-soft power sources. There’s nothing wrong with this, really, because there are plenty of challenges that you can still tackle that way, and there are some up-and-coming technologies that might result in soft pumps or gas generators.

Researchers at Stanford have developed a new kind of (mostly) soft robot based around a series of compliant, air-filled tubes. It’s human scale, moves around, doesn’t require a pump or tether, is more or less as safe as large robots get, and even manages to play a little bit of basketball.

Image: Stanford/Science Robotics

Stanford’s soft robot consists of a set of identical robotic roller modules mounted onto inflated fabric tubes (A). The rollers pinch the fabric tube between rollers, creating an effective joint (B) that can be relocated by driving the rollers. The roller modules actuate the robot by driving along the tube, simultaneously lengthening one edge while shortening another (C). The roller modules connect to each other at nodes using three-degree-of-freedom universal joints that are composed of a clevis joint that couples two rods, each free to spin about its axis (D). The robot moves untethered outdoors using a rolling gait (E).

This thing looks a heck of a lot like the tensegrity robots that NASA Ames has been working on forever, and which are now being commercialized (hopefully?) by Squishy Robotics. Stanford’s model is not technically a tensegrity robot, though, because it doesn’t use structural components that are under tension (like cables). The researchers refer to this kind of robot as “isoperimetric,” which means while discrete parts of the structure may change length, the overall length of all the parts put together stays the same. This means it’s got a similar sort of inherent compliance across the structure to tensegrity robots, which is one of the things that makes them so appealing. 

While the compliance of Stanford’s robot comes from a truss-like structure made of air-filled tubes, its motion relies on powered movable modules. These modules pinch the tube that they’re located on through two cylindrical rollers (without creating a seal), and driving the rollers moves the module back and forth along the tube, effectively making one section of the tube longer and the other one shorter. Although this is just one degree of freedom, having a whole bunch of tubes each with an independently controlled roller module means that the robot as a whole can exhibit complex behaviors, like drastic shape changes, movement, and even manipulation.
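The arithmetic behind the “isoperimetric” name is simple enough to sketch: each roller splits a tube of fixed total length into two effective edges, and driving the roller trades length between them. The numbers below are illustrative, not the dimensions of the Stanford prototype.

# Minimal illustration of the isoperimetric idea: a roller module pinches an
# inflated tube of fixed total length into two effective edges. Driving the
# roller lengthens one edge and shortens the other; their sum never changes.
# The tube length and positions are illustrative, not the robot's parameters.

TUBE_LENGTH_M = 2.0  # fixed total length of one fabric tube

def edge_lengths(roller_position_m: float) -> tuple[float, float]:
    """Split the tube into two edges at the roller's position along it."""
    assert 0.0 <= roller_position_m <= TUBE_LENGTH_M
    return roller_position_m, TUBE_LENGTH_M - roller_position_m

for pos in (0.5, 1.0, 1.5):  # the roller drives along the tube
    a, b = edge_lengths(pos)
    print(f"roller at {pos:.1f} m -> edges {a:.1f} m and {b:.1f} m (sum {a + b:.1f} m)")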

There are numerous advantages to a design like this. You get all the advantages of pneumatic robots (compliance, flexibility, collapsibility, durability, high strength to weight ratio) without requiring some way of constantly moving air around, since the volume of air inside the robot stays constant. Each individual triangular module is self-contained (with one tube, two active roller modules, and one passive anchor module) and easy to combine with similar modules—the video shows an octahedron, but you can easily add or subtract modules to make a variety of differently shaped robots with different capabilities.

Since the robot is inherently so modular, there are all kinds of potential applications for this thing, as the researchers speculate in a paper published today in Science Robotics:

The compliance and shape change of the robot could make it suitable for several tasks involving humans. For example, the robot could work alongside workers, holding parts in place as the worker bolts them in place. In the classroom, the modularity and soft nature of the robotic system make it a potentially valuable educational tool. Students could create many different robots with a single collection of hardware and then physically interact with the robot. By including a much larger number of roller modules in a robot, the robot could function as a shape display, dynamically changing shape as a sort of high–refresh rate 3D printer. Incorporating touch-sensitive fabric into the structure could allow users to directly interact with the displayed shapes. More broadly, the modularity allows the same hardware to build a diverse family of robots—the same roller modules can be used with new tube routings to create new robots. If the user needed a robot to reach through a long, narrow passageway, they could assemble a chain-like robot; then, for a locomoting robot, they could reassemble into a spherical shape.

Image: Farrin Abbott

I’m having trouble picturing some of that stuff, but the rest of it sounds like fun.

We’re obligated to point out that because of the motorized roller modules, this soft robot is really only semi-soft, and you could argue that it’s not fundamentally all that much better than hydraulic or pneumatic soft robots with embedded rigid components like batteries and pumps. Calling this robot “inherently human-safe,” as the researchers do, might be overselling it slightly, in that it has hard edges, pokey bits, and what look to be some serious finger-munchers. It does sound like there might be some potential to replace the roller modules with something softer and more flexible, which will be a focus of future work.

“An untethered isoperimetric soft robot,” by Nathan S. Usevitch, Zachary M. Hammond, Mac Schwager, Allison M. Okamura, Elliot W. Hawkes, and Sean Follmer from Stanford University and UCSB, was published in Science Robotics.

Editor’s Note: When we asked Rodney Brooks if he’d write an article for IEEE Spectrum on his definition of robot, he wrote back right away. “I recently learned that Warren McCulloch”—one of the pioneers of computational neuroscience—“wrote sonnets,” Brooks told us. “He, and your request, inspired me. Here is my article—a little shorter than you might have desired.” Included in his reply were 14 lines composed in iambic pentameter. Brooks titled it “What Is a Robot?” Later, after a few tweaks to improve the metric structure of some of the lines, he added, “I am no William Shakespeare, but I think it is now a real sonnet, if a little clunky in places.”

What Is a Robot?*
By Rodney Brooks

Shall I compare thee to creatures of God?
Thou art more simple and yet more remote.
You move about, but still today, a clod,
You sense and act but don’t see or emote.

You make fast maps with laser light all spread,
Then compare shapes to object libraries,
And quickly plan a path, to move ahead,
Then roll and touch and grasp so clumsily.

You learn just the tiniest little bit,
And start to show some low intelligence,
But we, your makers, Gods not, we admit,
All pledge to quest for genuine sentience.

    So long as mortals breathe, or eyes can see,
    We shall endeavor to give life to thee.

* With thanks to William Shakespeare

Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K. [CANCELED]
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Having robots learn dexterous tasks requiring real-time hand-eye coordination is hard. Many tasks that we would consider simple, like hanging up a baseball cap on a rack, would be very challenging for most robot software. What’s more, for a robot to learn each new task, it typically takes significant amounts of engineering time to program the robot. Pete Florence and Lucas Manuelli in the Robot Locomotion Group took a step closer to that goal with their work.

[ Paper ]

Octo-Bouncer is not a robot that bounces an octopus. But it’s almost as good. Almost.

[ Electron Dust ]

D’Kitty (pronounced as “The Kitty”) is a 12-degree-of-freedom platform for exploring learning-based techniques in locomotion and it’s adooorable!

[ D’Kitty ]

Knightscope Autonomous Security Robot meets Tesla Model 3 in Summon Mode!  See, nothing to fear, Elon. :-)

The robots also have a message for us:

[ Knightscope ]

If you missed the robots vs. humans match at RoboCup 2019, here are the highlights.

[ Tech United ]

Fraunhofer developed this cute little demo of autonomously navigating, cooperating mobile robots executing a miniaturized logistics scenario involving chocolate for the LogiMAT trade show. Which was canceled. But enjoy the video!

[ Fraunhofer ]

Thanks Thilo!

Drones can potentially be used for taking soil samples in awkward areas by dropping darts equipped with accelerometers. But the really clever bit is how the drone can retrieve the dart on its own.

[ UH ]

Rope manipulation is one of those human-easy robot-hard things that’s really, really robot-hard.

[ UC Berkeley ]

Autonomous landing on a moving platform presents unique challenges for multirotor vehicles, including the need to accurately localize the platform, fast trajectory planning, and precise/robust control. This work presents a fully autonomous vision-based system that addresses these limitations by tightly coupling the localization, planning, and control, thereby enabling fast and accurate landing on a moving platform. The platform’s position, orientation, and velocity are estimated by an extended Kalman filter using simulated GPS measurements when the quadrotor-platform distance is large, and by a visual fiducial system when the platform is nearby. To improve the performance, the characteristics of the turbulent conditions are accounted for in the controller. The landing trajectory is fast, direct, and does not require hovering over the platform, as is typical of most state-of-the-art approaches. Simulations and hardware experiments are presented to validate the robustness of the approach.

[ MIT ACL ]

And now, this.

[ Soft Robotics ]

The EPRI (Electric Power Research Institute) recently worked with Exyn Technologies, a pioneer in autonomous aerial robot systems, for a safety and data collection demonstration at Exelon’s Peach Bottom Atomic Power Station in Pennsylvania. Exyn’s drone was able to autonomously inspect components in elevated, hard-to-access areas, search for temperature anomalies, and collect dose rate surveys in radiological areas—without the need for a human operator.

[ Exyn ]

Thanks Zach!

Relax: Pepper is here to help with all of your medical problems.

[ Softbank ]

Amir Shapiro at BGU, along with Yoav Golan (whose work on haptic control of dogs we covered last year), have developed an interesting new kind of robotic finger with passively adjustable friction.

[ Paper ] via [ BGU ]

Thanks Andy!

UBTECH’s Alpha Mini robot, running Smart Robot’s “Maatje” software, is expected to offer healthcare services to children at Sint Maartenskliniek in the Netherlands. Ahead of the deployment, three of the robots have been trained with exercise, empathy, and cognition capabilities.

[ UBTECH ]

Get ready for CYBATHLON, postponed to September 2020!

[ Cybathlon ]

In partnership with the World Mosquito Program (WMP), WeRobotics has led the development and deployment of a drone-based release mechanism that has been shown to help prevent the incidence of Dengue fever.

[ WeRobotics ]

Sadly, koalas today face a dire outlook across Australia due to human development, droughts, and forest fires. Events like these and a declining population make conservation and research more important than ever. Drones offer a more efficient way to count koalas from above, covering more ground than was possible in the past. Dr. Hamilton and his team at the Queensland University of Technology use DJI drones to count koalas, using the data obtained to better help these furry friends from down under.

[ DJI ]

Fostering the Next Generation of Robotics Startups | TC Sessions: Robotics

Robotics and AI are the future of many or most industries, but the barrier of entry is still difficult to surmount for many startups. Speakers will discuss the challenges of serving robotics startups and companies that require robotics labor, from bootstrapped startups to large scale enterprises.

[ TechCrunch ]

The absolute best way of dealing with the coronavirus pandemic is to just not get coronavirus in the first place. By now, you’ve (hopefully) had all of the strategies for doing this drilled into your skull—wash your hands, keep away from large groups of people, wash your hands, stay home when sick, wash your hands, avoid travel when possible, and please, please wash your hands.

At the top of the list of the places to avoid right now are hospitals, because that’s where all the really sick people go. But for healthcare workers, and the sick people themselves, there’s really no other option. To prevent the spread of coronavirus (and everything else) through hospitals, keeping surfaces disinfected is incredibly important, but it’s also dirty, dull, and (considering what you can get infected with) dangerous. And that’s why it’s an ideal task for autonomous robots.

Photo: UVD Robots The robots can travel through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge.

UVD Robots is a Danish company making robots that are able to disinfect patient rooms and operating theaters in hospitals. They’re able to disinfect pretty much anything you point them at—each robot is a mobile array of powerful short wavelength ultraviolet-C (UVC) lights that emit enough energy to literally shred the DNA or RNA of any microorganisms that have the misfortune of being exposed to them.

The company’s robots have been operating in China for the past two or three weeks, and UVD Robots CEO Per Juul Nielsen says they are sending more to China as fast as they can. “The initial volume is in the hundreds of robots; the first ones went to Wuhan where the situation is the most severe,” Nielsen told IEEE Spectrum. “We’re shipping every week—they’re going air freight into China because they’re so desperately needed.” The goal is to supply the robots to over 2,000 hospitals and medical facilities in China.

UV disinfecting technology has been around for something like a century, and it’s commonly used to disinfect drinking water. You don’t see it much outside of fixed infrastructure because you have to point a UV lamp directly at a surface for a couple of minutes in order to be effective, and since it can cause damage to skin and eyes, humans have to be careful around it. Mobile UVC disinfection systems are a bit more common—UV lamps on a cart that a human can move from place to place to disinfect specific areas, like airplanes. For large environments like a hospital with dozens of rooms, operating UV systems manually can be costly and have mixed results—humans can inadvertently miss certain areas, or not expose them long enough.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis.” —Per Juul Nielsen, UVD Robots

UVD Robots spent four years developing a robotic UV disinfection system, which it started selling in 2018. The robot consists of a mobile base equipped with multiple lidar sensors and an array of UV lamps mounted on top. To deploy a robot, you drive it around once using a computer. The robot scans the environment using its lidars and creates a digital map. You then annotate the map indicating all the rooms and points the robot should stop to perform disinfecting tasks.

After that, the robot relies on simultaneous localization and mapping (SLAM) to navigate, and it operates completely on its own. It’ll travel from its charging station, through hallways, up and down elevators if necessary, and perform the disinfection without human intervention before returning to recharge. For safety, the robot operates when people are not around, using its sensors to detect motion and shutting the UV lights off if a person enters the area.

It takes between 10 and 15 minutes to disinfect a typical room, with the robot spending 1 or 2 minutes in five or six different positions around the room to maximize the number of surfaces that it disinfects. The robot’s UV array emits 20 joules per square meter per second (at 1 meter distance) of 254-nanometer light, which will utterly wreck 99.99 percent of germs in just a few minutes without the robot having to do anything more complicated than just sit there. The process is more consistent than a human cleaning since the robot follows the same path each time, and its autonomy means that human staff can be freed up to do more interesting tasks, like interacting with patients.
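A quick back-of-the-envelope from the figures above shows why a minute or two per position is enough: at 20 joules per square meter per second, each stop delivers on the order of a kilojoule per square meter at 1 meter. The snippet below simply multiplies irradiance by dwell time; required doses vary by pathogen and fall off with distance, so treat it as an illustration only.

# Back-of-the-envelope UV-C dose using the figures quoted above: the array
# delivers about 20 J/m^2 per second at 1 meter. Actual required doses vary by
# pathogen and fall off with distance; this only shows how dose scales with
# dwell time at each stop.

IRRADIANCE_W_PER_M2 = 20.0  # 20 J/m^2 per second at 1 meter, per the article

def dose_j_per_m2(dwell_seconds: float) -> float:
    """Total UV-C energy delivered per square meter at 1 m after a given dwell."""
    return IRRADIANCE_W_PER_M2 * dwell_seconds

for minutes in (1, 2):
    print(f"{minutes} min at one position -> {dose_j_per_m2(minutes * 60):,.0f} J/m^2 at 1 m")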

Originally, the robots were developed to address hospital acquired infections, which are a significant problem globally. According to Nielsen, between 5 and 10 percent of hospital patients worldwide will acquire a new infection while in the hospital, and tens of thousands of people die from these infections every year. The goal of the UVD robots was to help hospitals prevent these infections in the first place.

Photo: UVD Robots A shipment of robots from UVD Robots arrives at a hospital in Wuhan, where the first coronavirus cases were reported in December.

“And then came the coronavirus, accelerating the situation—spreading more than anything we’ve seen before on a global basis,” Nielsen says. “That’s why there’s a big need for our robots all over the world now, because they can be used in fighting coronavirus, and for fighting all of the other infections that are still there.”

The robots, which cost between US $80,000 and $90,000, are relatively affordable for medical equipment, and as you might expect, recent interest in them has been substantial. “Once [hospitals] see it, it’s a no-brainer,” Nielsen says. “If they want this type of disinfection solution, then the robot is much smarter and more cost-effective than what’s available in the market today.” Hundreds of these robots are at work in more than 40 countries, and they’ve recently completed hospital trials in Florida. Over the next few weeks, they’ll be tested at other medical facilities around the United States, and Nielsen points out that they could be useful in schools, cruise ships, or any other relatively structured spaces. I’ll take one for my apartment, please.

[ UVD Robots ]

Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.

This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.

The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant. In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.
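As a generic illustration of how a heat signature might be pulled out of multispectral imagery, the sketch below thresholds a normalized ratio of short-wave-infrared to near-infrared reflectance. This is not WeBank’s model, which the team has not published in detail; the band combination, threshold, and toy values are assumptions chosen only to show the shape of such an analysis.

# One simple way to flag thermal hot spots in short-wave-infrared imagery:
# threshold a normalized ratio of SWIR to near-infrared reflectance. This is a
# generic illustration, not WeBank's actual model, and the threshold and toy
# reflectance values are made up.
import numpy as np

def hotspot_mask(swir: np.ndarray, nir: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """Return a boolean mask of pixels whose SWIR/NIR ratio suggests strong heat."""
    index = (swir - nir) / (swir + nir + 1e-9)
    return index > threshold

swir = np.array([[0.2, 0.9], [0.3, 0.8]])  # toy reflectance values
nir = np.array([[0.3, 0.2], [0.3, 0.1]])
mask = hotspot_mask(swir, nir)
print(mask)
print("fraction of pixels flagged as hot:", mask.mean())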

The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, are still shut down.

Images: WeBank

Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the number of commuters on a given day in 2019 and its corresponding date in 2020, starting with Chinese New Year. In both cases, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
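The year-over-year comparison the team describes boils down to aligning both years on Chinese New Year and taking a ratio of daily commuter counts. The sketch below shows that step with toy numbers; the column names and figures are assumptions, and WeBank’s commuter-classification model is not public.

# A sketch of the year-over-year comparison described above: count inferred
# commuters per day, align both years on Chinese New Year, and take the ratio.
# Column names and the toy counts are assumptions for illustration only.
import pandas as pd

# days_since_cny: offset from Chinese New Year (5 Feb 2019 / 25 Jan 2020)
counts_2019 = pd.DataFrame({"days_since_cny": [0, 7, 14, 21],
                            "commuters": [1.2e6, 4.8e6, 5.0e6, 5.1e6]})
counts_2020 = pd.DataFrame({"days_since_cny": [0, 7, 14, 21],
                            "commuters": [1.1e6, 1.9e6, 2.6e6, 3.3e6]})

merged = counts_2019.merge(counts_2020, on="days_since_cny",
                           suffixes=("_2019", "_2020"))
merged["return_to_work_ratio"] = merged["commuters_2020"] / merged["commuters_2019"]
print(merged[["days_since_cny", "return_to_work_ratio"]])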

Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of Wuhan, will be back to work by the end of March. Economic growth in the first quarter, their study indicated, will take a 36 percent hit.

Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile, e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. In 2017, and right through early 2019, GM Cruise talked up a 2019 launch. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it used just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save him from his own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of the Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merging with the roof. Inside are two lidars from Luminar, a stereocamera, a mono-camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grill or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”
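The hybrid arrangement Pratt describes can be pictured as a learned perception stage feeding hand-written rules that engineers can inspect and adjust. The toy sketch below is only meant to show that division of labor; the classes, thresholds, and rules are invented for illustration and bear no relation to Toyota’s actual software.

# Toy illustration of a perception network feeding an expert-system layer.
# The detection fields, thresholds, and rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle"
    distance_m: float
    crossing: bool      # does perception predict the object will cross our path?

def perception_stub(frame) -> list[Detection]:
    """Stand-in for a deep-learning detector; a real system would run a neural net here."""
    return [Detection("pedestrian", distance_m=18.0, crossing=True)]

def expert_rules(detections: list[Detection], speed_mps: float) -> str:
    """Hand-tuned rules that engineers can read and tweak, unlike the network itself."""
    for det in detections:
        if det.label == "pedestrian" and det.crossing and det.distance_m < speed_mps * 2.0:
            return "brake"
        if det.label == "vehicle" and det.distance_m < 5.0:
            return "slow"
    return "continue"

print(expert_rules(perception_stub(frame=None), speed_mps=12.0))  # -> "brake"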

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology. 

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”

A new sensor for robots is designed to make our physical interactions with these machines a little smoother—and safer. The sensor, which is now being commercialized, allows robots to measure the distance and angle of approach of a human or object in close proximity.

Industrial robots often work autonomously to complete tasks. But increasingly, collaborative robots are working alongside humans. To avoid collisions in these circumstances, collaborative robots need highly accurate sensors to detect when someone (or something) is getting a little too close.

Many sensors have been developed for this purpose, each with its own advantages and disadvantages. Those that rely on sound and light (for example, infrared or ultrasonic time-of-flight sensors) measure the reflections of those signals and must therefore be closely aligned with the approaching object, which limits their field of detection.

Photos: Aidin Robotics

To circumvent this problem, a group of researchers in South Korea created a new proximity sensor that measures impedance. It works by inducing electric and magnetic fields with a wide angle. When a human approaches the sensor, their body causes changes in resistance within those fields. The sensor measures the changes and uses that data to inform the robot of the person’s distance and angle of approach. The researchers describe their design in a study published 26 February in IEEE Transactions on Industrial Electronics. It has since been commercialized by Aidin Robotics.

Read this article for free on IEEE Xplore until 08 April 2020.

The sensor is made of electrodes with a flexible, coil-like design. “Since the sensor is highly flexible, it can be manufactured in various shapes tailored to the geometries of the robot,” explains Yoon Haeng Lee, CEO of Aidin Robotics. “Moreover, it is able to classify the materials of the approaching objects such as human, metals, and plastics.”

Tests show that the sensor can detect humans from up to 30 centimeters away. It has an accuracy of 90 percent when on a flat surface. However, the electric and magnetic fields become weaker and more dispersed when the sensor is laid over a curved surface. Therefore, the sensor’s accuracy decreases as the underlying surface becomes increasingly curved.

Every robot is different, and the sensor’s performance may change based on a specific robot’s characteristics. The latest version of the integrated sensor module, when installed on a curved surface, can detect objects from up to 20 centimeters away with an accuracy of 94 percent.

Lee says the device is already being used in some collaborative robot models, including the UR10 (by Universal Robots) and Indy7 (by Neuromeka Inc.). “In the future, the sensor module will be mass-produced and applied to the other service robots, as well as collaborative and industrial robots, to contribute to the truly safe work and coexistence of robots and humans,” he says.

This article appears in the May 2020 print issue as “A Proximity Sensor for Robots.”

Dr. Arthur Kreitenberg and his son Elliot got some strange looks when they began the design work for the GermFalcon, a new machine that uses ultraviolet light to wipe out coronavirus and other germs inside an airplane. The father-son founders of Dimer UVC took tape measures with them on flights to unobtrusively record the distances that would form the key design constraints for their system.

“We definitely got lots of looks from passengers and lots of inquiries from flight attendants,” Dr. Kreitenberg recalls. “You can imagine that would cause some attention: taking out a tape measure midflight and measuring armrests. The truth is that when we explained to the flight attendants what we were doing and what we were designing, they [were] really excited about it.”

Perhaps that shouldn’t be surprising. In these days of coronavirus concerns, airline attendants work in what must seem like an aluminum-encased biohazard site.

Image: Dimer UVC

GermFalcon uses a set of mercury lamps to bathe the airline cabin, bathrooms, and galley in ultraviolet-C light. Unlike UV-A and UV-B, that 200 to 280 nanometer band of sunlight doesn’t reach the surface of the Earth, because it’s strongly absorbed by oxygen and ozone in the atmosphere. And that’s a good thing, because it’s like kryptonite to DNA. Drawing 100 amperes from a lithium-iron-phosphate battery pack, GermFalcon’s mercury lamps are strong enough that the company claims the system can wipe out flu viruses in an entire narrow-body plane in about three minutes: one pass up the aisle, one pass down the aisle, and a minute for the bathrooms and galley.

Flu prevention was the original inspiration for GermFalcon. Dr. Arthur Kreitenberg, an orthopaedic surgeon with a background in mechanical engineering, was already familiar with UV-C sterilization, because of its use in operating rooms. “Our motivation was to take it outside of the hospital into other areas where people are concerned about germs,” he says. With SARS and MERS and annual influenza, it seemed clear that airplanes are a major mode of transmission. It was also clear that nobody was effectively disinfecting aircraft.

Many of the chemicals you’d use in a hospital are not approved for use on an aircraft, Kreitenberg points out. And some of the ones that are, aren’t nearly as effective or practical as assumed. (Stop for a minute and look at the actual directions for disinfecting a surface with a Lysol Wipe, then try to imagine doing that on a plane. Go ahead. I’ll wait.)

Photo: Dimer UVC

The key design constraints for bringing UV-C sterilization into air travel were geometry, time, and power. The Kreitenbergs needed to know how much room their system had to move up and down the aisles without bashing into seats, armrests, restroom doors, and overhead bins. They also needed to know what surfaces were the most germ-ridden (the top of the seat back, as you might expect), something they discovered by swabbing surfaces on about a dozen flights. And from those data points, they had to figure out the proper power and position of the UV-lamps that would allow them to sterilize an aircraft in a matter of minutes. “Time is a big constraint as well. The airlines want us on and off the airplane as quick as possible,” he says.

“I wish I could tell you we solved it all mathematically,” says Kreitenberg. “But the truth is we went out to the airplane graveyard in Mojave, California and bought a couple rows of airplane seats and overhead bins, put [UV] meters on them, smeared them with bacteria, and did cultures.”

It took four or five iterations to get it right. “It turns out there are a lot of different airplane configurations,” he says.

Initially, the pair envisioned GermFalcon as a robot, but that made the design challenges multiply. “Robotics are easier said than done, even just going up and down an airplane,” he says. Sensors weren’t hardy enough and needed frequent recalibration, and the motor drives were heavy and energy consuming. The robotics consumed about a year of their development time before they decided to abandon that path in favor of a human protected by shielding.

Photo: Dimer UVC

Lacking a suitable lab for such a dangerous germ, Dimer UVC hasn’t tested the system on the virus that causes COVID-19. But Kreitenberg expects it to be as susceptible to UV-C as influenza and other germs are. The dose can be easily adjusted by slowing GermFalcon’s roll down the aisle. The company has offered GermFalcon’s services free of charge to airlines operating from a handful of U.S. airports.

While Dimer UVC waits for airlines to take up its offer, it’s gotten involved in another attempt to robotize aerospace interiors. The company is part of a team building a UV-C sterilization robot for the International Space Station. “It’ll basically work like a Roomba and skim the surface of the space station,” says Kreitenberg, a former finalist astronaut candidate.

Because it can get so close to the station’s surfaces, the zero-G death-ray Roomba the team is working on can use UV-C LEDs instead of the power-hungry mercury lamps of GermFalcon. Kreitenberg says he would be much happier using LEDs, if they could reach the needed power. “All of our power constraints and a lot of other constraints will be solved when there is an effective UV-C LED,” he says. Looking at the progress companies have made in that area over the last five years, he’s “optimistic” that GermFalcon will be able to switch to using only LEDs.

Swarms of small, inexpensive robots are a compelling research area in robotics. With a swarm, you can often accomplish tasks that would be impractical (or impossible) for larger robots to do, in a way that’s much more resilient and cost effective than larger robots could ever be.

The tricky thing is getting a swarm of robots to work together to do what you want them to do, especially if what you want them to do is a task that’s complicated or highly structured. It’s not too bad if you have some kind of controller that can see all the robots at once and tell them where to go, but that’s a luxury that you’re not likely to find outside of a robotics lab.

Researchers at Northwestern University, in Evanston, have been working on a way to provide decentralized control for a swarm of 100 identically programmed small robots, which allows them to collectively work out a way to transition from one shape to another without running into each other even a little bit.

The process that the robots use to figure out where to go seems like it should be mostly straightforward: They’re given a shape to form, so each robot picks its goal location (where it wants to end up as part of the shape), and then plans a path to get from where it is to where it needs to go, following a grid pattern to make things a little easier. But using this method, you immediately run into two problems: First, since there’s no central control, you may end up with two (or more) robots with the same goal; and second, there’s no way for any single robot to path plan all the way to its goal in a way that it can be certain won’t run into another robot.

To solve these problems, the robots are all talking to each other as they move, not just to avoid colliding with its friends, but also to figure out where its friends are going and whether it might be worth swapping destinations. Since the robots are all the same, they don’t really care where exactly they end up, as long as all of the goal positions are filled up. And if one robot talks to another robot and they agree that a goal swap would result in both of them having to move less, they go ahead and swap. The algorithm makes sure that all goal positions are filled eventually, and also helps robots avoid running into each other through judicious use of a “wait” command.
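The pairwise swap test at the heart of that negotiation is easy to sketch: two neighbors compare the combined grid distance to their goals with and without trading, and trade whenever it helps. The snippet below shows only that test, with made-up coordinates; the published algorithm also covers initial goal claiming, the “wait” rule, and the proofs of collision- and deadlock-freedom.

# Simplified sketch of the pairwise goal-swap test described above: two
# neighboring robots trade goal cells if doing so reduces their combined
# Manhattan (grid) travel distance. The coordinates are made up, and the full
# Northwestern algorithm also handles goal claiming, waiting, and deadlocks.

def manhattan(a: tuple[int, int], b: tuple[int, int]) -> int:
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def maybe_swap(pos_i, goal_i, pos_j, goal_j):
    """Return the (possibly swapped) goals for robots i and j."""
    keep = manhattan(pos_i, goal_i) + manhattan(pos_j, goal_j)
    swap = manhattan(pos_i, goal_j) + manhattan(pos_j, goal_i)
    return (goal_j, goal_i) if swap < keep else (goal_i, goal_j)

# Robot i is far from its claimed goal but close to robot j's goal, and vice versa:
print(maybe_swap(pos_i=(0, 0), goal_i=(9, 9), pos_j=(8, 8), goal_j=(1, 1)))
# -> ((1, 1), (9, 9)): the robots trade goals and both travel less.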

What’s really novel about this approach is that despite the fully distributed nature of the algorithm, it’s also provably correct, and will result in the guaranteed formation of an entire shape without collisions or deadlocks. As far as the researchers know, it’s the first algorithm to do this. And it means that since it’s effective with no centralized control at all, you can think of “the swarm” as a sort of Borg-like collective entity of its own, which is pretty cool.

The Northwestern researchers behind this are Michael Rubenstein, assistant professor of electrical engineering and computer science, and his PhD student Hanlin Wang. You might remember Mike from his work on Kilobots at Harvard, which we wrote about in 2011, 2013, and again in 2014, when Mike and his fellow researchers managed to put together a thousand (!) of them. As awesome as it is to have a thousand robots, when you start thinking about what it takes to charge, fix, and modify a thousand robots (a thousand robots!), you can see why they’ve updated the platform a bit (now called Coachbot) and reduced the swarm size to 100 physical robots, making up the rest in simulation.

These robots, we’re told, are “much better behaved.”

Image: Northwestern University

The hardware used by the researchers in their experiments. 1. The Coachbot V2.0 mobile robots (height of 12 cm and a diameter of 10 cm) are equipped with a localization system based on the HTC Vive (a), Raspberry Pi b+ computer (b), electronics motherboard (c), and rechargeable battery (d). The robot arena used in experiments has an overhead camera only used for recording videos (e) and an overhead HTC Vive base station (f). The experiments relied on a swarm of 100 robots (g). 2. The Coachbot V2.0 swarm communication network consists of an ethernet connection between the base station and a Wi-Fi router (green link), TCP/IP connections (blue links), and layer 2 broadcasting connections (black links). 3. A swarm of 100 robots. 4. The robots recharge their batteries by connecting to two metal strips attached to the wall.

For more details on this work, we spoke with Mike Rubenstein via email.

IEEE Spectrum: Why switch to the new hardware platform instead of Kilobots?

Mike Rubenstein: We wanted to make a platform more capable and extendable than Kilobot, and improve on lessons learned with Kilobot. These robots have far better locomotion capabilities than Kilobot, and include absolute position sensing, which makes operating the robots easier. They allow truly “hands free” operation. For example, with Kilobot, to start an experiment you had to place the robots in their starting positions by hand (sometimes taking an hour or two), while with these robots a user just specifies a set of positions for all the robots and presses the “go” button. With Kilobot it was also hard to see what the state of all the robots was; for example, it was difficult to tell whether 999 robots or 1,000 robots were powered on. These new robots send state information back to a user display, making it easy to understand the full state of the swarm.
 
How much of a constraint is grid-ifying the goal points and motion planning?

The grid constraint obviously makes motion less efficient, since the robots must move along Manhattan-style paths rather than straight lines, so most of the time they travel a bit farther. The reason we constrain motion to a discrete grid is that it makes the robot algorithm less computationally complex, and reasoning about collisions and deadlock becomes a lot easier, which allowed us to provide guarantees that the shape will form successfully.
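
Here’s an equally simplified sketch of what one synchronized step of that gridded motion might look like, with the “wait” behavior standing in for collision avoidance. This is just an illustration of the concept; the published algorithm adds the bookkeeping needed to prove that the shape always completes without deadlock.

def next_cell(pos, goal):
    """One Manhattan step toward the goal (x direction first, then y)."""
    x, y = pos
    gx, gy = goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return pos  # already at the goal

def step_swarm(robots):
    """robots: dict name -> (pos, goal). Returns the state after one step."""
    occupied = {pos for pos, _ in robots.values()}
    new_state = {}
    for name, (pos, goal) in robots.items():
        target = next_cell(pos, goal)
        if target != pos and target not in occupied:
            occupied.discard(pos)
            occupied.add(target)
            new_state[name] = (target, goal)   # move one cell
        else:
            new_state[name] = (pos, goal)      # "wait" this round
    return new_state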

Image: Northwestern University

Still images of a 100 robot shape formation experiment. The robots start in a random configuration, and move to form the desired “N” shape. Once this shape is formed, they then form the shape “U.” The entire sequence is fully autonomous. (a) T = 0 s; (b) T = 20 s; (c) T = 64 s; (d) T = 72 s; (e)  T = 80 s; (f) T = 112 s.

Can you tell us about those couple of lonely wandering robots at the end of the simulated “N” formation in the video?

In our algorithm, we don’t assign goal locations to all the robots at the start; they have to figure out on their own which robot goes where. The last few robots you pointed out happened to be far away from the goal locations the swarm had decided they should fill. Instead of having those robots travel around the whole shape to reach their goals, you see a subset of robots each shift over by one to open up spots in the shape closer to where the stragglers already are.
 
What are some examples of ways in which this research could be applied to real-world useful swarms of robots?

One example could be the shape formation in modular self-reconfigurable robots. The hope is that this shape formation algorithm could allow these self-reconfigurable systems to automatically change their shape in a simple and reliable way. Another example could be warehouse robots, where robots need to move to assigned goals to pick up items. This algorithm would help them move quickly and reliably.
 
What are you working on next?

I’m looking at trying to understand how to enable large groups of simple individuals to behave in a controlled and reliable way as a group. I’ve started looking at this question in a wide range of settings, from swarms of ground robots, to reconfigurable robots that attach together by melting conductive plastic, to swarms of flying vehicles, to satellite swarms.

“Shape Formation in Homogeneous Swarms Using Local Task Swapping,” by Hanlin Wang and Michael Rubenstein from Northwestern, is published in IEEE Transactions on Robotics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

NASA Curiosity Project Scientist Ashwin Vasavada guides this tour of the rover’s view of the Martian surface. Composed of more than 1,000 images and carefully assembled over the ensuing months, the larger version of this composite contains nearly 1.8 billion pixels of Martian landscape.

This panorama showcases "Glen Torridon," a region on the side of Mount Sharp that Curiosity is exploring. The panorama was taken between Nov. 24 and Dec. 1, 2019, when the Curiosity team was out for the Thanksgiving holiday. Since the rover would be sitting still with few other tasks to do while it waited for the team to return and provide its next commands, the rover had a rare chance to image its surroundings several days in a row without moving.

[ MSL ]

Sarcos has been making progress with its Guardian XO powered exoskeleton, which we got to see late last year in prototype stage:

The Sarcos Guardian XO full-body, powered exoskeleton is a first-of-its-kind wearable robot that enhances human productivity while keeping workers safe from strain or injury. Set to transform the way work gets done, the Guardian XO exoskeleton augments operator strength without restricting freedom of movement to boost productivity while dramatically reducing injuries.

[ Sarcos ]

Professor Hooman Samani, director of the Artificial Intelligence and Robotics Technology Laboratory (AIART Lab) at National Taipei University, Taiwan, writes in to share some ideas on how robots could be used to fight the coronavirus outbreak. 

Time is a critical issue when dealing with people affected by coronavirus. Also, due to the current emergency, doctors could be far away from patients, and avoiding direct contact with infected people is a medical priority. Immediate monitoring and treatment using specific kits must be administered to the victim. We have designed and developed the Ambulance Robot (AmbuBot), which could be a solution to address those issues. AmbuBot could be placed in various locations, especially busy, remote, or quarantined areas, to assist in the above-mentioned scenarios. The AmbuBot also brings along an AED in the sudden event of cardiac arrest and facilitates various modes of operation, from manual to semi-autonomous to fully autonomous.

[ AIART Lab ]

IEEE Spectrum is interested in exploring how robotics and related technologies can help to fight the coronavirus (COVID-19) outbreak. If you are involved with actual deployments of robots to hospitals and high-risk areas, or have experience working with robots, drones, or other autonomous systems designed for this kind of emergency, please contact IEEE Spectrum Senior Editor Erico Guizzo (e.guizzo@ieee.org).

Click here for additional coronavirus coverage

Digit is launching later this month alongside a brand new sim that’s a 1:1 match to both the API and physics of the actual robot. Here, we show off the ability to train a learned policy against the validated physics of the robot. We have a LOT more to say about RL with real hardware... stay tuned.

Staying tuned!

[ Agility Robotics ]

This video presents simulations and experiments highlighting the functioning of the proposed Trapezium Line Theta* planner, as well as its improvements over our previous work, namely the Obstacle Negotiating A* planner. First, we briefly present a comparison of our previous and new planners. We then show two simulations. The first shows the robot traversing an inclined corridor to reach a goal near the low-lying obstacle. This demonstrates the omnidirectional and any-angle motion planning improvement achieved by the new planner, as well as the independent planning for the front and back wheel pairs. The second simulation further demonstrates the key improvements mentioned above by having the robot traverse tight right-angled corridors. Finally, we present two real experiments on the CENTAURO robot. In the first experiment, the robot has to traverse into a narrow passage and then expand over a low-lying obstacle. The second experiment has the robot first expand over a wide obstacle and then move into a narrow passage.

To be presented at ICRA 2020.

[ Dimitrios Kanoulas ]

We’re contractually obligated to post any video with “adverse events” in the title.

[ JHU ]

Waymo advertises their self-driving system in this animated video that features a robot car making a right turn without indicating. Also pretty sure that it ends up in the wrong lane for a little bit after a super wide turn and blocks a crosswalk to pick up a passenger. Oops!

I’d still ride in one, though.

[ Waymo ]

Exyn is building the world’s most advanced autonomous aerial robots. Today, we launched our latest capability, Scoutonomy. Our pilotless robot can now ‘scout’ freely within a desired volume, such as a tunnel, or this parking garage. The robot sees the white boxes as ‘unknown’ space, and flies to explore them. The orange boxes are mapped obstacles. It also intelligently avoids obstacles in its path and identifies objects, such as people or cars. Scoutonomy can be used to safely and quickly find survivors in natural or man-made disasters.

[ Exyn ]

I don’t know what soma blocks are, but this robot is better with them than I am.

This work presents a planner that can automatically find an optimal assembly sequence for a dual-arm robot assembling soma blocks. The planner uses the mesh models of the objects and the final state of the assembly to generate all possible assembly sequences, and it evaluates the optimal sequence by considering stability, graspability, and assemblability, as well as the need for a second arm. In particular, the need for a second arm is considered when support from worktables and other workpieces is not enough to produce a stable assembly.

[ Harada Lab ]

Semantic grasping is the problem of selecting stable grasps that are functionally suitable for specific object manipulation tasks. In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for. We introduce the Context-Aware Grasping Engine, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns. We quantitatively validate our approach against three prior methods on a novel dataset consisting of 14,000 semantic grasps for 44 objects, 7 tasks, and 6 different object states. Our approach outperformed all baselines by statistically significant margins, producing new insights into the importance of balancing memorization and generalization of contexts for semantic grasping. We further demonstrate the effectiveness of our approach on robot experiments in which the presented model successfully achieved 31 of 32 suitable grasps.

[ RAIL Lab ]

I’m not totally convinced that bathroom cleaning is an ideal job for autonomous robots at this point, just because of the unstructured nature of a messy bathroom (if not of the bathroom itself). But this startup is giving it a shot anyway.

The cost target is $1,000 per month.

[ Somatic ] via [ TechCrunch ]

IHMC is designing, building, and testing a mobility assistance research device named Quix. The main function of Quix is to restore mobility to those stricken with lower-limb paralysis. In order to achieve this, the device has motors at the pelvis, hips, knees, and ankles, plus an onboard computer controlling the motors and various sensors incorporated into the system.

[ IHMC ]

In this major advance for mind-controlled prosthetics, U-M research led by Paul Cederna and Cindy Chestek demonstrates an ultra-precise prosthetic interface technology that taps faint latent signals from nerves in the arm and amplifies them to enable real-time, intuitive, finger-level control of a robotic hand.

[ University of Michigan ]

Coral reefs represent only 1% of the seafloor, but are home to more than 25% of all marine life. Reefs are declining worldwide. Yet, critical information remains unknown about basic biological, ecological, and chemical processes that sustain coral reefs because of the challenges to access their narrow crevices and passageways. A robot that grows through its environment would be well suited to this challenge as there is no relative motion between the exterior of the robot and its surroundings. We design and develop a soft growing robot that operates underwater and take a step towards navigating the complex terrain of a coral reef.

[ UCSD ]

What goes on inside those package lockers, apparently.

[ Dorabot ]

In the future robots could track the progress of construction projects. As part of the MEMMO H2020 project, we recently carried out an autonomous inspection of the Costain High Speed Rail site in London with our ANYmal robot, in collaboration with Edinburgh Robotics.

[ ORI ]

Soft Robotics technology enables seafood handling at high speed even with amorphous products like mussels, crab legs, and lobster tails.

[ Soft Robotics ]

Pepper and Nao had a busy 2019:

[ SoftBank Robotics ]

Chris Atkeson, a professor at the Robotics Institute at Carnegie Mellon University, watches a variety of scenes featuring robots from movies and television and breaks down how accurate their depictions really are. Would the Terminator actually have dialogue options? Are the "three laws" from I, Robot a real thing? Is it actually hard to erase a robot’s memory (a la Westworld)?

[ Chris Atkeson ] via [ Wired ]

This week’s CMU RI Seminar comes from Anca Dragan at UC Berkeley, on “Optimizing for Coordination With People.”

From autonomous cars to quadrotors to mobile manipulators, robots need to co-exist and even collaborate with humans. In this talk, we will explore how our formalism for decision making needs to change to account for this interaction, and dig our heels into the subtleties of modeling human behavior — sometimes strategic, often irrational, and nearly always influenceable. Towards the end, I’ll try to convince you that every robotics task is actually a human-robot interaction task (its specification lies with a human!) and how this view has shaped our more recent work.

[ CMU RI ]

When the group of high schoolers arrived for the coding camp, the idea of spending the day staring at a computer screen didn’t seem too exciting to them. But then Pepper rolled into the room.

“All of a sudden everyone wanted to become a robot coder,” says Kass Dawson, head of marketing and business strategy at SoftBank Robotics America, in San Francisco. He saw the same thing happen in other classrooms, where the friendly humanoid was an instant hit with students.

“What we realized very quickly was, we need to take advantage of the fact that this robot can get kids excited about computer science,” Dawson says.

Today SoftBank is launching Tethys, a visual programming tool designed to teach students how to code by creating applications for Pepper. The company is hoping that its humanoid robot, which has been deployed in homes, retail stores, and research labs, can also play a role in schools, helping to foster the next generation of engineers and roboticists.

Tethys is based on an intuitive, graphical approach to coding. To create a program, you drag boxes (representing different robot behaviors) on the screen and connect them with wires. You can run your program instantly on a Pepper to see how it works. You can also run it on a virtual robot on the screen.

As part of a pilot program, more than 1,000 students in about 20 public schools in Boston, San Francisco, and Vancouver, Canada, are already using the tool. SoftBank plans to continue expanding to more locations. (Educators interested in bringing Tethys and Pepper to their schools should reach out to the company by email.)

Bringing robots to the classroom

The idea of using robots to teach coding, logic, and problem-solving skills is not new (in fact, in the United States it goes back nearly half a century). Lego robotics kits like Mindstorms, Boost, and WeDo are widely used in STEM education today. Other popular robots and kits include Dash and Dot, Cubelets, Sphero, VEX, Parallax, and Ozobot. Last year, iRobot acquired Root, a robotics education startup founded by Harvard researchers.

So SoftBank is entering a crowded market, although one that has a lot of growth potential. And to be fair, SoftBank is not entirely new to the educational space—its experience goes back to the acquisition of French company Aldebaran Robotics, whose Nao humanoid has long been used in classrooms. Pepper, also originally developed by Aldebaran, is Nao’s newer, bigger sibling, and it, too, has been used in classrooms before.

Photo: SoftBank Robotics

Using the Tethys visual programming tool, students can program Pepper to move, gesticulate, talk, and display graphics on its tablet. They can run their programs on a real robot or a virtual one on their computers.

Pepper’s size is probably one of its main advantages over the competition. It’s a 1.2-meter tall humanoid that can move around a room, dance, and have conversations and play games with people—not just a small wheeled robot beeping and driving on a tabletop.

On the other hand, Pepper’s size also means it costs several times as much as those other robots. That’s a challenge if SoftBank wants to get lots of them out to schools, which may not be able to afford them. So far the company has addressed the issue by donating Peppers—over 100 robots in the past two years.

How Tethys works

When SoftBank first took Pepper to classrooms, it discovered that the robot’s original software development platform, called Choregraphe, wasn’t designed as an educational tool. It was hard for non-engineers to use, and it was glitchy. SoftBank then partnered with Finger Food Advanced Technology Group, a Vancouver-based software company, to develop Tethys.

Image: SoftBank Robotics

While Tethys is based on a visual programming environment, students can inspect the underlying Python scripts and modify them or write their own code.

Tethys is an integrated development environment, or IDE, that runs on a web browser (it works on regular laptops and also Chromebooks, popular in schools). It features a user-friendly visual programming interface, and in that sense it is similar to other visual programming languages like Blockly and Scratch.

But students aren’t limited to dragging blocks and wires on the screen; they can inspect the underlying Python scripts and modify them, or write their own code.
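
For a sense of what hand-written Pepper code looks like (as opposed to what Tethys actually generates, which we haven’t seen), here’s a short sketch using SoftBank’s classic NAOqi Python SDK; the robot’s IP address is a placeholder, and Tethys’s own scripts may well differ.

# Not a Tethys-generated script -- just an illustrative sketch of driving Pepper
# with the NAOqi Python SDK (ALProxy). The IP address below is a placeholder.
from naoqi import ALProxy

PEPPER_IP = "192.168.1.42"   # placeholder robot address
PORT = 9559                  # default NAOqi port

tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
motion = ALProxy("ALMotion", PEPPER_IP, PORT)

motion.wakeUp()                           # stiffen joints and stand ready
tts.say("Hello! I am learning to code.")  # speak through Pepper's speakers
motion.moveTo(0.3, 0.0, 0.0)              # drive forward 0.3 meters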

SoftBank says the new initiative is focused on “STREAM” education, or Science, Technology, Robotics, Engineering, Art, and Mathematics. Accordingly, Tethys is named after the Greek Titan goddess of streams, says SoftBank’s Dawson, who heads its STREAM Education program.

“It’s really important to make sure that more people are getting involved in robotics,” he says, “and that means not just the existing engineers who are out there, but trying to encourage the engineers of the future.”

Today, Boston Dynamics and OTTO Motors (a division of Clearpath Robotics) are announcing a partnership to “coordinate mobile robots in the warehouse” as part of “the future of warehouse automation.” It’s a collaboration between OTTO’s autonomous mobile robots and Boston Dynamics’s Handle, showing how a heterogeneous robot team can be faster and more efficient in a realistic warehouse environment.

As much as we love Handle, it doesn’t really seem like the safest robot for humans to be working around. Its sheer size, dynamic motion, and heavy payloads mean that the kind of sense-and-avoid hardware and software you’d really want it to have for humans to be able to move through its space without getting smushed would likely be impractical, so you need another way of moving stuff in and out of its work zone. The Handle logistics video Boston Dynamics released about a year ago showed the robot working mostly with conveyor belts, but that kind of fixed infrastructure may not be ideal for warehouses that want to remain flexible.

This is where OTTO Motors comes in—its mobile robots (essentially autonomous mobile cargo pallets) can safely interact with Handles carrying boxes, moving stuff from where the Handles are working to where it needs to go without requiring intervention from a fragile and unpredictable human who would likely only get in the way of the whole process. 

From the press release:

“We’ve built a proof of concept demonstration of a heterogeneous fleet of robots building distribution center orders to provide a more flexible warehouse automation solution,” said Boston Dynamics VP of Product Engineering Kevin Blankespoor. “To meet the rates that our customers expect, we’re continuing to expand Handle’s capabilities and optimizing its interactions with other robots like the OTTO 1500 for warehouse applications.”

This sort of suggests that OTTO Motors might not be the only partner that Boston Dynamics is working with. There are certainly other companies who make autonomous mobile robots for warehouses like OTTO does, but it’s more fun to think about fleets of warehouse robots that are as heterogeneous as possible: drones, blimps, snake robots, hexapods—I wouldn’t put anything past them.

[ OTTO Motors ]
