Feed aggregator

Niche construction is a process in which organisms modify the selection pressures on themselves and others through their ecological activities, and ecological inheritance is the consequence of niche construction passed down through generations. However, it is still unclear how such mutual interactions between robots or embodied agents and their physical environments can yield complex and divergent evolutionary processes, or open-ended evolution. Our purpose is to clarify what kinds of complex and diverse niche-constructing behaviors evolve in a physically grounded environment under various conditions of ecological inheritance of constructed structures and spatial relationships. We focus on a predator-prey relationship and construct an evolutionary model in which a prey creature has to avoid predation by building a structure composed of objects in a 2D environment simulated by a physics engine. We use a deep auto-encoder to automatically extract the defining features of adaptive structures. In the case of no ecological inheritance, the results revealed that the number of available resources can affect the diversity of emerging adaptive structures. In the case with ecological inheritance, we found that combinations of two types of ecological inheritance, the inheritance of adaptive structures and of birthplace, can have strong effects on the diversity of emerging structures and the adaptivity of the population. We expect that findings from evolutionary simulations of niche-constructing behavior might contribute to the evolutionary design of robotic builders or robot fabrication, especially in physically simulated environments.
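To make the feature-extraction step concrete, here is a minimal, hypothetical sketch of a deep auto-encoder that embeds rasterized snapshots of constructed structures into a low-dimensional latent space. The grid size, layer widths, and training data are illustrative assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch: embedding 2D structure snapshots with a deep auto-encoder,
# in the spirit of the abstract above. Grid size, layer widths, and training data
# are assumptions for illustration, not the authors' actual model.
import torch
import torch.nn as nn

GRID = 32  # assumed raster resolution of a constructed structure

class StructureAutoEncoder(nn.Module):
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(GRID * GRID, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, GRID * GRID), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z).view(-1, GRID, GRID), z

# Training loop over simulated structure snapshots (random placeholders here).
model = StructureAutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
snapshots = torch.rand(512, GRID, GRID)  # stand-in for logged structures
for epoch in range(10):
    recon, _ = model(snapshots)
    loss = nn.functional.binary_cross_entropy(recon, snapshots)
    opt.zero_grad(); loss.backward(); opt.step()
# The latent codes z can then be clustered to group recurring adaptive structures.
```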

Today, robots are being studied for, and expected to take on, a range of social roles within classrooms. Yet, given the limitations of current social robots, their interactions should be expected to occasionally run into troublesome situations and breakdowns. In this paper, we explore this issue by studying how children handle interaction trouble with a robot tutee in a classroom setting, conducting video analysis of children’s group interactions with the robot in order to explore the nature of these troubles in the wild. Within each group, children took turns acting as the primary interaction partner for the robot within the context of a mathematics game. Specifically, we examined what types of situations constitute trouble in these child–robot interactions, the strategies that individual children employ to cope with this trouble, and the strategies employed by other actors witnessing the trouble. By means of Interaction Analysis, we studied the video recordings of nine group interaction sessions (n = 33 children) in primary school grades 2 and 4. We found that sources of trouble related to the robot’s social norm violations, which could be either active or passive. In terms of strategies, the children either persisted in their attempts at interacting with the robot by adapting their behavior in different ways, distanced themselves from the robot, or sought the help of present adults (i.e., a researcher in a teacher role, or an experimenter) or their peers (i.e., the child’s classmates in each group). The witnessing actors addressed the trouble by providing guidance directed at the child interacting with the robot, or by intervening in the interaction. These findings reveal the unspoken rules by which children orient toward social robots and the complexities of child–robot interaction in the wild, provide insights into children’s perspectives and expectations of social robots in classroom contexts, and have implications not only for the design of robots but also for evaluating their benefit in, and for, educational contexts.

A major goal of autonomous robot collectives is to robustly perform complex tasks in unstructured environments by leveraging hardware redundancy and the emergent ability to adapt to perturbations. In such collectives, sheer numbers are a major contributor to system-level robustness. Designing robot collectives, however, requires more than isolated development of hardware and software that supports large scales. Rather, to support scalability, we must also incorporate robust constituents and weigh interrelated design choices that span fabrication, operation, and control with an explicit focus on achieving system-level robustness. Following this philosophy, we present the first iteration of a new framework toward a scalable and robust, planar, modular robot collective capable of gradient tracking in cluttered environments. To support co-design, our framework consists of hardware, low-level motion primitives, and control algorithms validated through a kinematic simulation environment. We discuss how modules made primarily of flexible printed circuit boards enable inexpensive, rapid, low-precision manufacturing; safe interactions between modules and their environment; and large-scale lattice structures beyond what manufacturing tolerances allow using rigid parts. To support redundancy, our proposed modules have on-board processing, sensing, and communication. To lower wear and consequently maintenance, modules have no internally moving parts, and instead move collaboratively via switchable magnets on their perimeter. These magnets can be in any of three states, enabling a large range of module configurations and motion primitives, which in turn supports higher system adaptability. We introduce and compare several controllers that can plan in the collective’s configuration space without restricting motion to a discrete occupancy grid, as has been done in many past planners. We show how we can incentivize redundant connections to prevent single-module failures from causing collective-wide failure, explore bad configurations which impede progress as a result of the motion constraints, and discuss an alternative “naive” planner with improved performance in both clutter-free and cluttered environments. This dedicated focus on system-level robustness across all parts of a complete design cycle advances the state of the art in robots capable of long-term exploration.
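As a small illustration of incentivizing redundant connections, the following sketch (not the authors' planner) flags "cut" modules, i.e., articulation points of the module connectivity graph, whose failure would split the collective; a configuration with no such modules has a redundant connection around every module. The example graph is made up.

```python
# Illustrative sketch (not the authors' controller): flag "cut" modules whose
# failure would split the collective, i.e. configurations lacking redundant
# connections. The module graph below is a made-up example.
from collections import defaultdict

def articulation_points(adj):
    """Return modules whose removal disconnects the connectivity graph."""
    disc, low, cut, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]; timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cut.add(u)
        if parent is None and children > 1:
            cut.add(u)

    for node in adj:
        if node not in disc:
            dfs(node, None)
    return cut

# Example: modules 1 and 2 are single points of failure in this configuration.
adj = defaultdict(list)
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 2)]:
    adj[a].append(b); adj[b].append(a)
print(articulation_points(adj))  # -> {1, 2}
```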


A year ago, we visited Rwanda to see how Zipline’s autonomous, fixed-wing delivery drones were providing blood to hospitals and clinics across the country. We were impressed with both Zipline’s system design (involving dramatic catapult launches, parachute drops, and mid-air drone catching), as well as their model of operations, which minimizes waste while making critical supplies available in minutes almost anywhere in the country.

Since then, Zipline has expanded into Ghana and plans to start flying in India as well, but the COVID-19 pandemic is changing everything. Africa is preparing for the worst, while in the United States, Zipline is working with the Federal Aviation Administration to expedite safety and regulatory approvals for an emergency humanitarian mission: launching a medical supply delivery network that could help people maintain social distancing or quarantine, when necessary, by delivering urgent medication nearly to their doorsteps.

In addition to its existing role delivering blood products and medication, Zipline is acting as a centralized distribution network for COVID-19 supplies in Ghana and Rwanda. Things like personal protective equipment (PPE) will be delivered as needed by drone, ensuring that demand is met across the entire healthcare network. This has been a problem in the United States—getting existing supplies where they’re needed takes a lot of organization and coordination, which the US government is finding to be a challenge.


Zipline says that their drones are able to reduce human involvement in the supply chain (a vector for infection), while reducing hospital overcrowding by making it more practical for non-urgent patients to receive care in local clinics closer to home. COVID-19 is also having indirect effects on healthcare, with social distancing and community lockdowns straining blood supplies. With its centralized distribution model, Zipline has helped Rwanda to essentially eliminate wasted (expired) blood products. “We probably waste more blood [in the United States] than is used in all of Rwanda,” Zipline CEO Keller Rinaudo told us. But it’s going to take more than blood supply to fight COVID-19, and it may hit Africa particularly hard.


“Things are earlier in Africa, you don’t see infections at the scale that we’re seeing in the U.S.,” says Rinaudo. “I also think Africa is responding much faster. Part of that is the benefit of seeing what’s happening in countries that didn’t take it seriously in the first few months where community spreading gets completely out of control. But it’s quite possible that COVID is going to be much more severe in countries that are less capable of locking down, where you have densely populated areas with people who can’t just stay in their house for 45 days.” 

In an attempt to prepare for things getting worse, Rinaudo says that Zipline is stocking as many COVID-related products as possible, and they’re also looking at whether they’ll be able to deliver to neighborhood drop-off points, or perhaps directly to homes. “That’s something that Zipline has been on track to do for quite some time, and we’re considering ways of accelerating that. When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way.” This kind of system, Rinaudo points out, would also benefit people with non-COVID healthcare needs, who need to do their best to avoid hospitals. If a combination of telemedicine and home or neighborhood delivery of medical supplies means they can stay home, it would be a benefit for everyone. “This is a transformation of the healthcare system that’s already happening and needs to happen anyway. COVID is just accelerating it.”

“When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way” —Keller Rinaudo, Zipline

For the past year, Zipline, working closely with the FAA, has been planning on a localized commercial trial of a medical drone delivery service that was scheduled to begin in North Carolina this fall. While COVID is more urgent, the work that’s already been done towards this trial puts Zipline in a good position to move quickly, says Rinaudo.

“All of the work that we did with the IPP [UAS Integration Pilot Program] is even more important, given this crisis. It means that we’ve already been working with the FAA in detail, and that’s made it possible for us to have a foundation to build on to help with the COVID-19 response.” Assuming that Zipline and the FAA can find a regulatory path forward, the company could begin setting up distribution centers that can support hospital networks for both interfacility delivery as well as contactless delivery to (eventually) neighborhood points and perhaps even homes. “It’s exactly the use case and value proposition that I was describing for Africa,” Rinaudo says.

Leveraging rapid deployment experience that it has from work with the U.S. Department of Defense, Zipline would launch one distribution center within just a few months of a go-ahead from the FAA. This single distribution center could cover an area representing up to 10 million people. “We definitely want to move quickly here,” Rinaudo tells us. Within 18 months, Zipline could theoretically cover the entire US, although he admits “that would be an insanely fast roll-out.”

The question, at this point, is how fast the FAA can take action to make innovative projects like this happen. Zipline, as far as we can tell, is ready to go. We also asked Rinaudo whether he thought that hospitals specifically, and the medical system in general, have the bandwidth to adopt a system like Zipline’s in the middle of a pandemic that’s already stretching people and resources to the limit.

“In the U.S. there’s this sense that this technology is impossible, whereas it’s already operating at multi-national scale, serving thousands of hospitals and health facilities, and it’s completely boring to the people who are benefiting from it,” Rinaudo says. “People in the U.S. have really not caught on that this is something that’s reliable and can dramatically improve our response to crises like this.”

[ Zipline ]

Self-organization offers a promising approach for designing adaptive systems. Given the inherent complexity of most cyber-physical systems, adaptivity is desired, as predictability is limited. Here I summarize different concepts and approaches that can facilitate self-organization in cyber-physical systems, and thus be exploited for design. Then I mention real-world examples of systems where self-organization has managed to provide solutions that outperform classical approaches, in particular related to urban mobility. Finally, I identify when a centralized, distributed, or self-organizing control is more appropriate.

We consider the problem of autonomous acquisition of manipulation skills where problem-solving strategies are initially available only for a narrow range of situations. We propose to extend the range of solvable situations by autonomous play with the object. By applying previously trained skills and behaviors, the robot learns how to prepare situations for which a successful strategy is already known. The information gathered during autonomous play is additionally used to train an environment model. This model is exploited for active learning and the generation of novel compositions of preparatory behaviors. We apply our approach to a wide range of manipulation tasks, e.g., book grasping, grasping objects of different sizes by selecting different grasping strategies, placement on shelves, and tower disassembly. We show that the composite behavior generation mechanism enables the robot to solve previously unsolvable tasks, e.g., tower disassembly. We use success statistics gained during real-world experiments to simulate the convergence behavior of our system. Simulation experiments show that the learning speed can be improved by around 30% by using active learning.
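A hedged sketch of the active-learning idea: pick the preparatory behavior whose outcome the current model is least certain about. The bandit-style scoring and behavior names below are illustrative stand-ins; the paper's actual environment model and behavior set are not described in this summary.

```python
# Hedged sketch: choose the preparatory behavior with the most uncertain outcome.
# Success estimates plus a count-based uncertainty bonus (UCB-style); all names
# and numbers are hypothetical.
import math
import random

class BehaviorStats:
    def __init__(self):
        self.successes = 0
        self.trials = 0

    def mean(self):
        return self.successes / self.trials if self.trials else 0.5

def select_behavior(stats, total_trials, c=1.0):
    """Prefer behaviors whose estimated success is still uncertain."""
    def score(s):
        bonus = c * math.sqrt(math.log(total_trials + 1) / (s.trials + 1))
        return s.mean() + bonus
    return max(stats, key=lambda name: score(stats[name]))

stats = {"push_to_edge": BehaviorStats(), "flip_book": BehaviorStats(), "slide_left": BehaviorStats()}
for t in range(100):
    name = select_behavior(stats, t)
    outcome = random.random() < 0.6          # stand-in for executing the behavior
    stats[name].trials += 1
    stats[name].successes += int(outcome)
```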


For the past two months, the vegetables have arrived on the back of a robot. That’s how 16 communities in Zibo, in eastern China, have received fresh produce during the coronavirus pandemic. The robot is an autonomous van that uses lidars, cameras, and deep-learning algorithms to drive itself, carrying up to 1,000 kilograms in its cargo compartment.

The unmanned vehicle provides a “contactless” alternative to regular deliveries, helping reduce the risk of person-to-person infection, says Professor Ming Liu, a computer scientist at the Hong Kong University of Science and Technology (HKUST) and cofounder of Unity Drive Innovation, or UDI, the Shenzhen-based startup that developed the self-driving van.

Since February, UDI has been operating a small fleet of vehicles in Zibo and two other cities, Suzhou and Shenzhen, where they deliver meal boxes to checkpoint workers and spray disinfectant near hospitals. Combined, the vans have made more than 2,500 autonomous trips, often encountering busy traffic conditions despite the lockdown.

“It’s like Uber for packages—you use your phone to call a robot to pick up and deliver your boxes,” Professor Liu told IEEE Spectrum in an interview via Zoom.

Even before the pandemic, package shipments had been skyrocketing in China and elsewhere. Alibaba founder Jack Ma has said that his company is preparing to handle 1 billion packages per day. With the logistics sector facing major labor shortages, a 2016 McKinsey report predicted that autonomous vehicles will deliver 80 percent of parcels within 10 years.

That’s the future UDI is betting on. Unlike robocars developed by Waymo, Cruise, Zoox, and others, UDI’s vehicles are designed to transport goods, not people. They are similar to those of Nuro, a Silicon Valley startup, and Neolix, based in Beijing, which has deployed 50 robot vans in 10 Chinese cities to do mobile delivery and disinfection service.

Photo: UDI. A self-driving vehicle delivers lunch boxes to workers in Pingshan District in Shenzhen. Since February, UDI’s autonomous fleet has made more than 800 meal deliveries.

Professor Liu, an IEEE Senior Member and director of the Intelligent Autonomous Driving Center at HKUST, is unfazed by the competition. He says UDI is ready to operate its vehicles on public roads thanks to the real-world experience it has gained from a string of recent projects. These involve large companies testing the robot vans inside their industrial parks.

One of them is Taiwanese electronics giant Foxconn. Since late 2018, it has used UDI vans to transport electronic parts and other items within its vast Shenzhen campus where some 200,000 workers reside. The robots have to navigate labyrinthine routes while avoiding an unpredictable mass of pedestrians, bicycles, and trucks.

Autonomous driving powered by deep learning

UDI’s vehicle, called Hercules, uses an industrial-grade PC running the Robot Operating System, or ROS. It’s also equipped with a drive-by-wire chassis with electric motors powered by an 8.4-kWh lithium-ion battery. Sensors include a main lidar, three auxiliary lidars, a stereo camera, four fisheye cameras, 16 sonars, redundant satellite navigation systems, an inertial measurement unit (IMU), and two wheel encoders.

The PC receives the lidar point-clouds and feeds them into the main perception algorithm, which consists of a convolutional neural network trained to detect and classify objects. The neural net outputs a set of 3D bounding boxes representing vehicles and other obstacles on the road. This process repeats 100 times per second.
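Schematically, that loop reads a point cloud, runs the detector, and publishes 3D boxes on a fixed time budget. The sketch below is an assumption-laden outline, with the convolutional neural network replaced by a placeholder; it is not UDI's code.

```python
# Schematic of the perception loop described above (not UDI's actual software):
# read a lidar point cloud, run a detector, and emit 3D bounding boxes at roughly
# 100 Hz. Function and type names are assumptions for illustration.
import time
from dataclasses import dataclass
from typing import List

@dataclass
class Box3D:
    x: float; y: float; z: float       # center
    l: float; w: float; h: float       # dimensions
    yaw: float
    label: str

def detect_objects(point_cloud) -> List[Box3D]:
    """Placeholder for the CNN detector that classifies obstacles."""
    return [Box3D(5.0, -1.2, 0.4, 4.5, 1.8, 1.5, 0.1, "vehicle")]

def perception_loop(get_point_cloud, publish_boxes, rate_hz=100):
    period = 1.0 / rate_hz
    while True:
        start = time.monotonic()
        boxes = detect_objects(get_point_cloud())
        publish_boxes(boxes)
        # Sleep whatever remains of the ~10 ms budget for this cycle.
        time.sleep(max(0.0, period - (time.monotonic() - start)))
```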

Image: UDI. UDI’s vehicle is equipped with a main lidar and three auxiliary lidars, a stereo camera, and various other sensors [top]. The cargo compartment can be modified based on the items to be transported and is not shown. The chassis [bottom] includes an electric motor, removable lithium-ion battery, vehicle control unit (VCU), motor control unit (MCU), electric power steering (EPS), electro-hydraulic brake (EHB), electronic parking brake (EPB), on-board charger (OBC), and direct-current-to-direct-current (DCDC) converter.

Another algorithm processes images from forward-facing cameras to identify road signs and traffic lights, and a third matches the point-clouds and IMU data to a global map, allowing the vehicle to self-localize. To accelerate, brake, and steer, the PC sends commands to two secondary computers running real-time operating systems and connected to the drive-by-wire modules.

Professor Liu says UDI faces more challenging driving conditions than competitors like Waymo and Nuro, which conduct their tests in suburban areas of the United States. In Shenzhen, for example, the UDI vans have to navigate narrow streets with double-parked cars and aggressive motorcycles that whiz by, narrowly missing the robot.


Over the past couple of months, UDI has monitored its fleet from its headquarters. Using 5G, a remote operator can receive data from a vehicle with just 10 milliseconds of delay. In Shenzhen, human intervention was required about two dozen times when the robots encountered situations they didn’t know how to handle—too many vehicles on the road, false detections of traffic lights at night, or in one case, a worker coming out of a manhole.

Photo: UDI. One of UDI’s autonomous vehicles, equipped with a device that sprays disinfectant, operates near a hospital in Shenzhen.

For safety, UDI programmed the vans to drive at low speeds of up to 30 kilometers per hour, though they can go faster. On a few occasions, remote operators took control because the vehicles were driving too slowly, becoming a road hazard and annoying nearby drivers. Professor Liu says it’s a challenge to balance cautiousness and aggressiveness in self-driving vehicles that will operate in the real world.

He notes that UDI vehicles have been collecting huge amounts of video and sensor data during their autonomous runs. This information will be useful to improve computer simulations of the vehicles and, later, the real vehicles themselves. UDI says it plans to open source part of the data.

Mass-produced robot vans

Professor Liu has been working on advanced vehicles for nearly two decades. His projects include robotic cars, buses, and boats, with a focus on applying deep reinforcement learning to enable autonomous behaviors. He says UDI’s vehicles are not cars, and they aren’t unmanned ground robots, either—they are something in between. He likes to call them “running robots.”

Liu’s cofounders are Professor Xiaorui Zhu at Harbin Institute of Technology, in Shenzhen, and Professor Lujia Wang at the Shenzhen Institutes of Advanced Technology, part of the Chinese Academy of Sciences. “We want to be the first company in the world to achieve mass production of autonomous logistics vehicles,” says Wang, who is the CTO of UDI.

To do that, the startup has hired 100 employees and is preparing to put its assembly line into high gear in the next several months. “I’m not saying we solved all the problems,” Professor Liu says, citing system integration and cost as the biggest challenges. “Can we do better? Yes, it can always be better.”

In swarm robotics, multiple robots collectively solve problems by forming advantageous structures and behaviors similar to those observed in natural systems, such as swarms of bees, birds, or fish. However, the step to industrial applications has not yet been made successfully. The literature is light on real-world swarm applications that apply actual swarm algorithms; typically, only parts of swarm algorithms are used, which we refer to as basic swarm behaviors. In this paper we collect and categorize these behaviors into spatial organization, navigation, decision making, and miscellaneous. This taxonomy is then applied to categorize a number of existing swarm robotic applications from research and industrial domains. Along with the classification, we give a comprehensive overview of research platforms that can be used for testing and evaluating swarm behavior, systems that are already on the market, and projects that target a specific market. Results from this survey show that swarm robotic applications are still rare today. Many industrial projects still rely on centralized control, and even when a solution with multiple robots is employed, the core swarm-robotics principle of distributed decision making is neglected. We identified the following main reasons: First, swarm behavior emerging from local interactions is hard to predict, and a proof of its eligibility for applications in an industrial context is difficult to provide. Second, current communication architectures often do not match the requirements for swarm communication, which often leads to a system with a centralized communication infrastructure. Finally, testing swarms for real industrial applications is an issue, since deployment in a production environment is typically too risky and simulations of a target system may not be sufficiently accurate. In contrast, the research platforms present a means for transforming swarm robotics solutions from theory to prototype industrial systems.

Modeling of complex adaptive systems has revealed a still poorly understood benefit of unsupervised learning: when neural networks are enabled to form an associative memory of a large set of their own attractor configurations, they begin to reorganize their connectivity in a direction that minimizes the coordination constraints posed by the initial network architecture. This self-optimization process has been replicated in various neural network formalisms, but it is still unclear whether it can be applied to biologically more realistic network topologies and scaled up to larger networks. Here we continue our efforts to respond to these challenges by demonstrating the process on the connectome of the widely studied nematode worm C. elegans. We extend our previous work by considering the contributions made by hierarchical partitions of the connectome that form functional clusters, and we explore possible beneficial effects of inter-cluster inhibitory connections. We conclude that the self-optimization process can be applied to neural network topologies characterized by greater biological realism, and that long-range inhibitory connections can facilitate the generalization capacity of the process.
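For readers unfamiliar with the self-optimization process, here is a minimal sketch under simplifying assumptions: a Hopfield-style network repeatedly relaxes to an attractor and then reinforces that attractor with a small Hebbian update, after which attractors reached from random states tend to satisfy the original constraints better. A random symmetric weight matrix stands in for the connectome-based topology used in the paper.

```python
# Minimal, illustrative sketch of the self-optimization process (not the paper's
# connectome model): relax to an attractor, then apply a small Hebbian update on
# that attractor. Network size, learning rate, and episode count are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100
W = rng.normal(0, 1, (N, N)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
W0 = W.copy()  # keep the original constraints for evaluation

def relax(weights, steps=20 * N):
    """Asynchronous updates until (approximately) settled in an attractor."""
    s = rng.choice([-1, 1], N)
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(weights, s):
    return -0.5 * s @ weights @ s

alpha = 0.001  # Hebbian learning rate
for episode in range(500):
    s = relax(W)
    W += alpha * np.outer(s, s)       # reinforce the visited attractor
    np.fill_diagonal(W, 0)

# Evaluate attractors of the learned network against the ORIGINAL constraints W0:
# after self-optimization they tend to reach lower (better) energies.
print("before:", np.mean([energy(W0, relax(W0)) for _ in range(20)]))
print("after: ", np.mean([energy(W0, relax(W))  for _ in range(20)]))
```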


For the last several years, Diligent Robotics has been testing out its robot, Moxi, in hospitals in Texas. Diligent isn’t the only company working on hospital robots, but Moxi is unique in that it’s doing commercial mobile manipulation, picking supplies out of supply closets and delivering them to patient rooms, all completely autonomously.

A few weeks ago, Diligent announced US $10 million in new funding, which comes at a critical time, as the company noted in its press release:

Now more than ever hospitals are under enormous stress, and the people bearing the most risk in this pandemic are the nurses and clinicians at the frontlines of patient care. Our mission with Moxi has always been focused on relieving tasks from nurses, giving them more time to focus on patients, and today that mission has a newfound meaning and purpose. Time and again, we hear from our hospital partners that Moxi not only returns time back to their day but also brings a smile to their face.  

We checked in with Diligent CEO Andrea Thomaz last week to get a better sense of how Moxi is being used at hospitals. “As our hospital customers are implementing new protocols to respond to the [COVID-19] crisis, we are working with them to identify the best ways for Moxi to be deployed as a resource,” Thomaz told us. “The same kinds of delivery tasks we have been doing are still just as needed as ever, but we are also working with them to identify use cases where having Moxi do a delivery task also reduces infection risk to people in the environment.”


Since this is still something that Diligent and their hospital customers are actively working on, it’s a little early for them to share details. But in general, robots making deliveries means that people aren’t making deliveries, which has several immediate benefits. First, it means that overworked hospital staff can spend their time doing other things (like interacting with patients), and second, the robot is less likely to infect other people. It’s not just that the robot can’t get a virus (not that kind of virus, at any rate), but it’s also much easier to keep robots clean in ways that aren’t an option for humans. Besides wiping them down with chemicals, without too much trouble you could also have them autonomously disinfect themselves with UV, which is both efficient and effective.

While COVID-19 only emphasizes the importance of robots in healthcare, Diligent is tackling a particularly difficult set of problems with Moxi, involving full autonomy, manipulation, and human-robot interaction. Earlier this year, we spoke with Thomaz about how Moxi is starting to make a difference to hospital staff.

IEEE Spectrum: Last time we talked, Moxi was in beta testing. What’s different about Moxi now that it’s ready for full-time deployment?

Andrea Thomaz: During our beta trial, Moxi was deployed for over 120 days total, in four different hospitals (one of them was a children’s hospital, the other three were adult acute-care units), working alongside more than 125 nurses and clinicians. The people we were working with were so excited to be part of this kind of innovative research, and to see how this new technology is going to actually impact workloads. Our focus on the beta trials was to try any idea that a customer had of how Moxi could provide value—if it seemed at all reasonable, then we would quickly try to mock something up and try it.

I think it validates our human-robot interaction approach to building the company, of getting the technology out there in front of customers to make sure that we’re building the product that they really need. We started to see common workflows across hospitals—there are different kinds of patient care that’s happening, but the kinds of support and supplies and things that are moving around the hospital are similar—and so then we felt that we had learned what we needed to learn from the beta trial and we were ready to launch with our first customers.


The primary function that Moxi has right now, of restocking and delivery, was that there from the beginning? Or was that something that people asked for and you realized, oh, well, this is how a robot can actually be the most useful?

We knew from the beginning that our goal was to provide the kind of operational support that an end-to-end mobile manipulation platform can do, where you can go somewhere autonomously, pick something up, and bring it to another location and put it down. With each of our beta customers, we were very focused on opportunities where that was the case, where nurses were wasting time.

We did a lot of that kind of discovery, and then you just start seeing that it’s not rocket science—there are central supply places where things are kept around the hospital, and nurses are running back and forth to these places multiple times a day. We’d look at some particular task like admission buckets, or something else that nurses have to do everyday, and then we say, where are the places that automation can really fit in? Some of that support is just navigation tasks, like going from one place to another, some actually involves manipulation, like you need to press this button or you need to pick up this thing. But with Moxi, we have a mobility and a manipulation component that we can put to work, to redefine workflows to include automation.

You mentioned that as part of the beta program that you were mocking the robot up to try all kinds of customer ideas. Was there something that hospitals really wanted the robot to do, that you mocked up and tried but just didn’t work at all?

We were pretty good at not setting ourselves up for failure. I think the biggest thing would be, if there was something that was going to be too heavy for the Kinova arm, or the Robotiq gripper, that’s something we just can’t do right now. But honestly, it was a pretty small percentage of things that we were kind of asked to manipulate that we had to say, oh no, sorry, we can’t lift that much or we can’t grip that wide. The other reason that things that we tried in the beta didn’t make it into our roadmap is if there was an idea that came up with only one of the beta sites. One example is delivering water: One of the beta sites was super excited about having water delivered to the patients every day, ahead of medication deliveries, which makes a lot of sense, but when we start talking to hospital leadership or other people, in other hospitals, it’s definitely just a “nice to have.” So for us, from a technical standpoint, it doesn’t make as much sense to devote a lot of resources into making water delivery a real task if it’s just going to be kind of a “nice to have” for a small percentage of our hospitals. That’s more how that R&D went—if we heard it from one hospital we’d ask, is this something that everybody wants, or just an idea that one person had. 

Let’s talk about how Moxi does what it does. How does the picking process work?

We’re focused on very structured manipulation; we’re not doing general purpose manipulation, and so we have a process for teaching Moxi a particular supply room. There are visual cues that are used to orient the robot to that supply room, and then once you are oriented you know where a bin is. Things don’t really move around a great deal in the supply room, the bigger variability is just how full each of the bins are.

The things that the robot is picking out of the bins are very well known, and we make sure that hospitals have a drop off location outside the patient’s room. In about half the hospitals we were in, they already had a drawer where the robot could bring supplies, but sometimes they didn’t have anything, and then we would install something like a mailbox on the wall. That’s something that we’re still working out exactly—it was definitely a prototype for the beta trials, and we’re working out how much that’s going to be needed in our future roll out.

“A robot needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around” —Andrea Thomaz, Diligent Robotics

These aren’t supply rooms that are dedicated to the robot—they’re also used by humans who may move things around unpredictably. How does Moxi deal with the added uncertainty?

That’s really the entire focus of our human-guided learning approach—having the robot build manipulation skills with perceptual cues that are telling it about different anchor points to do that manipulation skill with respect to, and learning particular grasp strategies for a particular category of objects. Those kinds of strategies are going to make that grasp into that bin more successful, and then also learning the sensory feedback that’s expected on a successful grasp versus an unsuccessful one, so that you have the ability to retry until you get the expected sensory feedback.
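As a toy illustration of the retry-until-expected-feedback idea in that answer (the grasp primitives and feedback check below are stand-ins, not Diligent's code):

```python
# Hedged illustration of "retry until the expected sensory feedback": execute a
# learned grasp and re-try until the feedback matches expectation. The callables
# and feedback representation are hypothetical placeholders.
def attempt_grasp(execute_grasp, read_gripper_feedback, expected, max_retries=3):
    """Execute a learned grasp, retrying until feedback matches expectation."""
    for attempt in range(1, max_retries + 1):
        execute_grasp()
        feedback = read_gripper_feedback()   # e.g. finger separation, force profile
        if feedback == expected:
            return True, attempt             # grasp looks successful
    return False, max_retries                # escalate: re-plan or ask for help
```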

There must also be plenty of uncertainty when Moxi is navigating around the hospital, which is probably full of people who’ve never seen it before and want to interact with it. To what extent is Moxi designed for those kinds of interactions? And if Moxi needs to be somewhere because it has a job to do, how do you mitigate or avoid them?

One of the things that we liked about hospitals as a semi-structured environment is that even the human interaction that you’re going to run into is structured as well, more so than somewhere like a shopping mall. In a hospital you have a kind of idea of the kind of people that are going to be interacting with the robot, and you can have some expectations about who they are and why they’re there and things, so that’s nice.

We had gone into the beta trial thinking, okay, we’re not doing any patient care, we’re not going into patients’ rooms, we’re bringing things to right outside the patient rooms, we’re mostly going to be interacting with nurses and staff and doctors. We had developed a lot of the social capabilities, little things that Moxi would do with the eyes or little sounds that would be made occasionally, really thinking about nurses and doctors that were going to be in the hallways interacting with Moxi. Within the first couple weeks at the first beta site, the patients and general public in the hospital were having so many more interactions with the robot than we expected. There were people who were, like, grandma is in the hospital, so the entire family comes over on the weekend, to see the robot that happens to be on grandma’s unit, and stuff like that. It was fascinating.

We always knew that being socially acceptable and fitting into the social fabric of the team was important to focus on. A robot needs to have both sides of that coin—it needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around. But in the first couple weeks in our first beta trial, we quickly had to ramp up and say, okay, what else can Moxi do to be social? We had the robot, instead of just going to the charger in between tasks, taking an extra social lap to see if there’s anybody that wants to take a selfie. We added different kinds of hot word detections, like for when people say “hi Moxi,” “good morning, Moxi,” or “how are you?” Just all these things that people were saying to the robot that we wanted to turn into fun interactions.

I would guess that this could sometimes be a little problematic, especially at a children’s hospital where you’re getting lots of new people coming in who haven’t seen a robot before—people really want to interact with robots and that’s independent of whether or not the robot has something else it’s trying to do. How much of a problem is that for Moxi?

That’s on our technical roadmap. We still have to figure out socially appropriate ways to disengage. But what we did learn in our beta trials is that there are even just different navigation paths that you can take, by understanding where crowds tend to be at different times. Like, maybe don’t take a path right by the cafeteria at noon, instead take the back hallway at noon. There are always different ways to get to where you’re going. Houston was a great example—in that hospital, there was this one skyway where you knew the robot was going to get held up for 10 or 15 minutes taking selfies with people, but there was another hallway two floors down that was always empty. So you can kind of optimize navigation time for the number of selfies expected, things like that.
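The routing trade-off Thomaz describes can be read as a tiny cost model: travel time plus the expected crowd delay for the current time of day. The route names and numbers below are made up for illustration.

```python
# Toy version of the routing trade-off: pick the route minimizing travel time plus
# expected crowd delay (e.g. selfie stops) at the current hour. Numbers are invented.
routes = {
    "skyway":       {"travel_min": 4, "expected_delay_min": {"noon": 12, "night": 1}},
    "back_hallway": {"travel_min": 7, "expected_delay_min": {"noon": 1,  "night": 1}},
}

def best_route(routes, time_of_day):
    return min(routes, key=lambda r: routes[r]["travel_min"]
                                     + routes[r]["expected_delay_min"][time_of_day])

print(best_route(routes, "noon"))   # -> "back_hallway"
print(best_route(routes, "night"))  # -> "skyway"
```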


To what extent is the visual design of Moxi intended to give people a sense of what its capabilities are, or aren’t?

For us, it started with the functional things that Moxi needs. We knew that we’re doing mobile manipulation, so we’d need a base, and we’d need an arm. And we knew we also wanted it to have a social presence, and so from those constraints, we worked with our amazing head of design, Carla Diana, on the look and feel of the robot. For this iteration, we wanted to make sure it didn’t have an overly humanoid look.

Some of the previous platforms that I used in academia, like the Simon robot or the Curie robot, had very realistic eyes. But when you start to talk about taking that to a commercial setting, now you have these eyeballs and eyelids and each of those is a motor that has to work every day all day long, so we realized that you can get a lot out of some simplified LED eyes, and it’s actually endearing to people to have this kind of simplified version of it. The eyes are a big component—that’s always been a big thing for me because of the importance of attention, and being able to communicate to people what the robot is paying attention to. Even if you don’t put eyeballs on a robot, people will find a thing to attribute attention to: They’ll find the camera and say, “oh, those are its eyes!” So I find it’s better to give the robot a socially expressive focus of attention.

I would say speech is the biggest one that we have drawn the line on. We want to make sure people don’t get the sense that Moxi can understand the full English language, because I think people are getting to be really used to speech interfaces, and we don’t have an Alexa or anything like that integrated yet. That could happen in the future, but we don’t have a real need for that right now, so it’s not there, so we want to make sure people don’t think of the robot as an Alexa or a Google Home or a Siri that you can just talk to, so we make sure that it just does beeps and whistles, and then that kind of makes sense to people. So they get that you can say stuff like “hi Moxi,” but that’s about it. 

Otherwise, I think the design is really meant to be socially acceptable, we want to make sure people are comfortable, because like you’re saying, this is a robot that a lot of people are going to see for the first time, and we have to be really sensitive to the fact that the hospital is a stressful place for a lot of people, you’re already there with a sick family member and you might have a lot going on, and we want to make sure that we aren’t contributing to additional stress in your day.

You mentioned that you have a vision for human-robot teaming. Longer term, how do you feel like people should be partnering more directly with robots?

Right now, we’re really focused on looking at operational processes that hit two or three different departments in the hospital and require a nurse to do this and a patient care technician to do that and a pharmacy or a materials supply person to do something else. We’re working with hospitals to understand how that whole team of people is making some big operational workflow happen and where Moxi could fit in. 

Some places where Moxi fits in, it’s a completely independent task. Other places, it might be a nurse on a unit calling Moxi over to do something, and so there might be a more direct interaction sometimes. Other times it might be that we’re able to connect to the electronic health record and infer automatically that something’s needed and then it really is just happening more in the background. We’re definitely open to both explicit interaction with the team where Moxi’s being called to do something in particular by someone, but I think some of the more powerful examples from our beta trials were ones that really take that cognitive burden off of people—where Moxi could just infer what could happen in the background.

In terms of direct collaboration, like side-by-side working together kind of thing, I do think there’s just such vast differences between—if you’re talking about a human and a robot cooperating on some manipulation task, robots are just—it’s going to be awhile before a robot is going to be as capable. If you already have a person there, doing some kind of manipulation task, it’s going to be hard for a robot to compete, and so I think it’s better to think about places where the person could be used for better things and you could hand something else off entirely to the robot.

So how feasible in the near-term is a nurse saying, “Moxi, could you hold this for me?” How complicated or potentially useful is that?

I think that’s a really interesting example. The question is whether being always available as a kind of third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing. We did a little bit of that kind of on-demand work in some of our beta trials, you know, “hey Moxi, come over here and do this thing,” just to compare on-demand requests versus pre-planned activities. If you can find things in workflows that can be automated, where what the robot’s going to do can be inferred, we think that’s going to be the biggest bang for your buck in terms of the value the robot’s able to deliver.

I think that there may come a day where every clinician’s walking around and there’s always a robot available to respond to “hey, hold this for me,” and I think that would be amazing. But for now, the question is whether the robot being like a third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing, when it could instead be working all night long to get things ready for the next shift.

[ Diligent Robotics ]

aside.inlay.CoronaVirusCoverage.xlrg { font-family: "Helvetica", sans-serif; text-transform: uppercase; text-align: center; border-width: 4px 0; border-top: 2px solid #666; border-bottom: 2px solid #666; padding: 10px 0; font-size: 18px; font-weight: bold; } span.LinkHereRed { color: #cc0000; text-transform: uppercase; font-family: "Theinhardt-Medium", sans-serif; }

For the last several years, Diligent Robotics has been testing out its robot, Moxi, in hospitals in Texas. Diligent isn’t the only company working on hospital robots, but Moxi is unique in that it’s doing commercial mobile manipulation, picking supplies out of supply closets and delivering them to patient rooms, all completely autonomously.

A few weeks ago, Diligent announced US $10 million in new funding, which comes at a critical time, as the company addressed in their press release:

Now more than ever hospitals are under enormous stress, and the people bearing the most risk in this pandemic are the nurses and clinicians at the frontlines of patient care. Our mission with Moxi has always been focused on relieving tasks from nurses, giving them more time to focus on patients, and today that mission has a newfound meaning and purpose. Time and again, we hear from our hospital partners that Moxi not only returns time back to their day but also brings a smile to their face.  

We checked in with Diligent CEO Andrea Thomaz last week to get a better sense of how Moxi is being used at hospitals. “As our hospital customers are implementing new protocols to respond to the [COVID-19] crisis, we are working with them to identify the best ways for Moxi to be deployed as a resource,” Thomaz told us. “The same kinds of delivery tasks we have been doing are still just as needed as ever, but we are also working with them to identify use cases where having Moxi do a delivery task also reduces infection risk to people in the environment.”

Click here for additional coronavirus coverage

Since this is still something that Diligent and their hospital customers are actively working on, it’s a little early for them to share details. But in general, robots making deliveries means that people aren’t making deliveries, which has several immediate benefits. First, it means that overworked hospital staff can spend their time doing other things (like interacting with patients), and second, the robot is less likely to infect other people. It’s not just that the robot can’t get a virus (not that kind of virus, at any rate), but it’s also much easier to keep robots clean in ways that aren’t an option for humans. Besides wiping them down with chemicals, without too much trouble you could also have them autonomously disinfect themselves with UV, which is both efficient and effective.

While COVID-19 only emphasizes the importance of robots in healthcare, Diligent is tackling a particularly difficult set of problems with Moxi, involving full autonomy, manipulation, and human-robot interaction. Earlier this year, we spoke with Thomaz about how Moxi is starting to make a difference to hospital staff.

IEEE Spectrum: Last time we talked, Moxi was in beta testing. What’s different about Moxi now that it’s ready for full-time deployment?

Andrew Thomaz: During our beta trial, Moxi was deployed for over 120 days total, in four different hospitals (one of them was a children’s hospital, the other three were adult acute-care units), working alongside more than 125 nurses and clinicians. The people we were working with were so excited to be part of this kind of innovative research, and how this new technology is going to actually impact workloads. Our focus on the beta trials was to try any idea that a customer had of how Moxi could provide value—if it seemed at all reasonable, then we would quickly try to mock something up and try it.

I think it validates our human-robot interaction approach to building the company, of getting the technology out there in front of customers to make sure that we’re building the product that they really need. We started to see common workflows across hospitals—there are different kinds of patient care that’s happening, but the kinds of support and supplies and things that are moving around the hospital are similar—and so then we felt that we had learned what we needed to learn from the beta trial and we were ready to launch with our first customers.

Photo: Diligent Robotics

The primary function that Moxi has right now, of restocking and delivery, was that there from the beginning? Or was that something that people asked for and you realized, oh, well, this is how a robot can actually be the most useful.

We knew from the beginning that our goal was to provide the kind of operational support that an end-to-end mobile manipulation platform can do, where you can go somewhere autonomously, pick something up, and bring it to another location and put it down. With each of our beta customers, we were very focused on opportunities where that was the case, where nurses were wasting time.

We did a lot of that kind of discovery, and then you just start seeing that it’s not rocket science—there are central supply places where things are kept around the hospital, and nurses are running back and forth to these places multiple times a day. We’d look at some particular task like admission buckets, or something else that nurses have to do everyday, and then we say, where are the places that automation can really fit in? Some of that support is just navigation tasks, like going from one place to another, some actually involves manipulation, like you need to press this button or you need to pick up this thing. But with Moxi, we have a mobility and a manipulation component that we can put to work, to redefine workflows to include automation.

You mentioned that as part of the beta program that you were mocking the robot up to try all kinds of customer ideas. Was there something that hospitals really wanted the robot to do, that you mocked up and tried but just didn’t work at all?

We were pretty good at not setting ourselves up for failure. I think the biggest thing would be, if there was something that was going to be too heavy for the Kinova arm, or the Robotiq gripper, that’s something we just can’t do right now. But honestly, it was a pretty small percentage of the things we were asked to manipulate where we had to say, oh no, sorry, we can’t lift that much or we can’t grip that wide. The other reason that things we tried in the beta didn’t make it into our roadmap is if an idea came up at only one of the beta sites. One example is delivering water: One of the beta sites was super excited about having water delivered to the patients every day, ahead of medication deliveries, which makes a lot of sense, but when we started talking to hospital leadership or other people in other hospitals, it’s definitely just a “nice to have.” So for us, from a technical standpoint, it doesn’t make as much sense to devote a lot of resources to making water delivery a real task if it’s just going to be kind of a “nice to have” for a small percentage of our hospitals. That’s more how that R&D went—if we heard it from one hospital we’d ask, is this something that everybody wants, or just an idea that one person had?

Let’s talk about how Moxi does what it does. How does the picking process work?

We’re focused on very structured manipulation; we’re not doing general-purpose manipulation, and so we have a process for teaching Moxi a particular supply room. There are visual cues that are used to orient the robot to that supply room, and once you are oriented you know where a bin is. Things don’t really move around a great deal in the supply room; the bigger variability is just how full each of the bins is.

The things that the robot is picking out of the bins are very well known, and we make sure that hospitals have a drop-off location outside the patient’s room. In about half the hospitals we were in, they already had a drawer where the robot could bring supplies, but sometimes they didn’t have anything, and then we would install something like a mailbox on the wall. That’s something that we’re still figuring out exactly—it was definitely a prototype for the beta trials, and we’re working out how much it’s going to be needed in our future rollout.
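To make the supply-room setup a bit more concrete, here is a minimal, purely illustrative sketch of how a taught bin location could be recovered from a detected visual cue: a marker’s pose in the camera frame combined with a pre-recorded offset from that marker to the bin. The pose values, offsets, and function names are hypothetical, not Diligent’s implementation.

import numpy as np

def bin_position_in_camera_frame(R_marker, t_marker, bin_offset_in_marker):
    """R_marker (3x3) and t_marker (3,) give the marker pose in the camera frame;
    bin_offset_in_marker (3,) is the taught bin position relative to the marker."""
    return np.asarray(R_marker) @ np.asarray(bin_offset_in_marker) + np.asarray(t_marker)

# Example: marker 0.8 m in front of the camera, bin 0.3 m to the marker's right
R = np.eye(3)
t = np.array([0.0, 0.0, 0.8])
print(bin_position_in_camera_frame(R, t, [0.3, 0.0, 0.0]))  # -> [0.3 0.  0.8]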

“A robot needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around” —Andrea Thomaz, Diligent Robotics

These aren’t supply rooms that are dedicated to the robot—they’re also used by humans who may move things around unpredictably. How does Moxi deal with the added uncertainty?

That’s really the entire focus of our human-guided learning approach—having the robot build manipulation skills around perceptual cues that tell it which anchor points to perform that manipulation skill with respect to, and learning particular grasp strategies for a particular category of objects. Those kinds of strategies are going to make that grasp into that bin more successful, and then also learning the sensory feedback that’s expected on a successful grasp versus an unsuccessful one, so that you have the ability to retry until you get the expected sensory feedback.
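As a rough illustration of the retry-until-expected-feedback idea Thomaz describes, here is a minimal Python sketch. The helper callables, the force/torque signature, and the tolerance are hypothetical placeholders, not Diligent’s actual skill representation.

import numpy as np

def grasp_with_retry(execute_grasp, read_wrist_ft, expected_ft_signature,
                     tolerance=0.5, max_attempts=3):
    """Attempt a grasp until the wrist force/torque reading matches the learned
    signature of a successful grasp, retrying up to max_attempts times."""
    expected = np.asarray(expected_ft_signature)
    for _ in range(max_attempts):
        execute_grasp()                         # run the learned grasp strategy
        observed = np.asarray(read_wrist_ft())  # e.g., a 6-D force/torque sample
        if np.linalg.norm(observed - expected) < tolerance:
            return True                         # feedback matches a successful grasp
        # otherwise the feedback looks like a miss: loop and try again
    return False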

There must also be plenty of uncertainty when Moxi is navigating around the hospital, which is probably full of people who’ve never seen it before and want to interact with it. To what extent is Moxi designed for those kinds of interactions? And if Moxi needs to be somewhere because it has a job to do, how do you mitigate or avoid them?

One of the things that we liked about hospitals as a semi-structured environment is that even the human interaction you’re going to run into is structured as well, more so than somewhere like a shopping mall. In a hospital you have an idea of the kinds of people that are going to be interacting with the robot, and you can have some expectations about who they are and why they’re there, so that’s nice.

We had gone into the beta trial thinking, okay, we’re not doing any patient care, we’re not going into patients’ rooms, we’re bringing things to right outside the patient rooms, we’re mostly going to be interacting with nurses and staff and doctors. We had developed a lot of the social capabilities, little things that Moxi would do with the eyes or little sounds that would be made occasionally, really thinking about nurses and doctors that were going to be in the hallways interacting with Moxi. Within the first couple weeks at the first beta site, the patients and general public in the hospital were having so many more interactions with the robot than we expected. There were people who were, like, grandma is in the hospital, so the entire family comes over on the weekend, to see the robot that happens to be on grandma’s unit, and stuff like that. It was fascinating.

We always knew that being socially acceptable and fitting into the social fabric of the team was important to focus on. A robot needs to have both sides of that coin—it needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around. But in the first couple weeks in our first beta trial, we quickly had to ramp up and say, okay, what else can Moxi do to be social? We had the robot, instead of just going to the charger in between tasks, taking an extra social lap to see if there’s anybody that wants to take a selfie. We added different kinds of hot word detections, like for when people say “hi Moxi,” “good morning, Moxi,” or “how are you?” Just all these things that people were saying to the robot that we wanted to turn into fun interactions.

I would guess that this could sometimes be a little problematic, especially at a children’s hospital where you’re getting lots of new people coming in who haven’t seen a robot before—people really want to interact with robots and that’s independent of whether or not the robot has something else it’s trying to do. How much of a problem is that for Moxi?

That’s on our technical roadmap. We still have to figure out socially appropriate ways to disengage. But what we did learn in our beta trials is that there are even just different navigation paths that you can take, by understanding where crowds tend to be at different times. Like, maybe don’t take a path right by the cafeteria at noon, instead take the back hallway at noon. There are always different ways to get to where you’re going. Houston was a great example—in that hospital, there was this one skyway where you knew the robot was going to get held up for 10 or 15 minutes taking selfies with people, but there was another hallway two floors down that was always empty. So you can kind of optimize navigation time for the number of selfies expected, things like that.
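A hedged sketch of the route-selection trade-off described above: score each candidate route by its nominal travel time plus the expected delay from people stopping the robot, and pick the cheapest. The routes, interaction counts, and 60-second-per-selfie delay are made-up numbers for illustration only.

def route_cost(travel_time_s, expected_interactions, delay_per_interaction_s=60):
    # total cost = nominal travel time plus expected time lost to interactions
    return travel_time_s + expected_interactions * delay_per_interaction_s

def pick_route(candidates, hour_of_day):
    # candidates: dicts with 'name', 'travel_time_s', and a per-hour map of
    # how many people are expected to stop the robot on that route
    return min(candidates,
               key=lambda r: route_cost(r["travel_time_s"],
                                        r["interactions_by_hour"].get(hour_of_day, 0)))

routes = [
    {"name": "skyway", "travel_time_s": 180, "interactions_by_hour": {12: 8}},
    {"name": "back hallway", "travel_time_s": 300, "interactions_by_hour": {}},
]
print(pick_route(routes, hour_of_day=12)["name"])  # -> back hallway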

Photo: Diligent Robotics

To what extent is the visual design of Moxi intended to give people a sense of what its capabilities are, or aren’t?

For us, it started with the functional things that Moxi needs. We knew that we’re doing mobile manipulation, so we’d need a base, and we’d need an arm. And we knew we also wanted it to have a social presence, and so from those constraints, we worked with our amazing head of design, Carla Diana, on the look and feel of the robot. For this iteration, we wanted to make sure it didn’t have an overly humanoid look.

Some of the previous platforms that I used in academia, like the Simon robot or the Curie robot, had very realistic eyes. But when you start to talk about taking that to a commercial setting, now you have these eyeballs and eyelids and each of those is a motor that has to work every day all day long, so we realized that you can get a lot out of some simplified LED eyes, and it’s actually endearing to people to have this kind of simplified version of it. The eyes are a big component—that’s always been a big thing for me because of the importance of attention, and being able to communicate to people what the robot is paying attention to. Even if you don’t put eyeballs on a robot, people will find a thing to attribute attention to: They’ll find the camera and say, “oh, those are its eyes!” So I find it’s better to give the robot a socially expressive focus of attention.

I would say speech is the biggest one where we have drawn the line. We want to make sure people don’t get the sense that Moxi can understand the full English language, because I think people are getting to be really used to speech interfaces, and we don’t have an Alexa or anything like that integrated yet. That could happen in the future, but we don’t have a real need for it right now, so it’s not there. We want to make sure people don’t think of the robot as an Alexa or a Google Home or a Siri that you can just talk to, so we make sure that it just does beeps and whistles, and that makes sense to people. They get that you can say stuff like “hi Moxi,” but that’s about it.

Otherwise, I think the design is really meant to be socially acceptable. We want to make sure people are comfortable, because like you’re saying, this is a robot that a lot of people are going to see for the first time, and we have to be really sensitive to the fact that the hospital is a stressful place for a lot of people: you’re already there with a sick family member and you might have a lot going on, and we want to make sure that we aren’t contributing to additional stress in your day.

You mentioned that you have a vision for human-robot teaming. Longer term, how do you feel like people should be partnering more directly with robots?

Right now, we’re really focused on looking at operational processes that hit two or three different departments in the hospital and require a nurse to do this and a patient care technician to do that and a pharmacy or a materials supply person to do something else. We’re working with hospitals to understand how that whole team of people is making some big operational workflow happen and where Moxi could fit in. 

Some places where Moxi fits in, it’s a completely independent task. Other places, it might be a nurse on a unit calling Moxi over to do something, and so there might be a more direct interaction sometimes. Other times it might be that we’re able to connect to the electronic health record and infer automatically that something’s needed and then it really is just happening more in the background. We’re definitely open to both explicit interaction with the team where Moxi’s being called to do something in particular by someone, but I think some of the more powerful examples from our beta trials were ones that really take that cognitive burden off of people—where Moxi could just infer what could happen in the background.
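To illustrate the background-inference idea, here is a small hypothetical sketch of how a delivery task might be enqueued from an electronic health record (EHR) event feed. The event fields and task names are invented for illustration and do not reflect any actual EHR integration.

from queue import Queue

task_queue = Queue()

def on_ehr_event(event):
    # A new admission implies an admission bucket should be delivered to that room;
    # a discharge implies the room's standard supplies should be restocked.
    if event.get("type") == "admission":
        task_queue.put({"task": "deliver_admission_bucket", "destination": event["room"]})
    elif event.get("type") == "discharge":
        task_queue.put({"task": "restock_room_supplies", "destination": event["room"]})

on_ehr_event({"type": "admission", "room": "4B-12"})
print(task_queue.get())  # {'task': 'deliver_admission_bucket', 'destination': '4B-12'}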

In terms of direct collaboration, like a side-by-side working-together kind of thing, I do think there are just such vast differences: if you’re talking about a human and a robot cooperating on some manipulation task, it’s going to be a while before a robot is going to be as capable. If you already have a person there, doing some kind of manipulation task, it’s going to be hard for a robot to compete, and so I think it’s better to think about places where the person could be used for better things and you could hand something else off entirely to the robot.

So how feasible in the near-term is a nurse saying, “Moxi, could you hold this for me?” How complicated or potentially useful is that?

I think that’s a really interesting example. The question is whether being always available as a third hand for any particular clinician is the most valuable thing this mobile manipulation platform could be doing. We did a little bit of that kind of on-demand work in some of our beta trials, you know, “hey Moxi, come over here and do this thing,” just to look at on-demand versus pre-planned activities. But if you can find things in workflows that can be automated, where what the robot is going to do can be inferred, we think that’s going to be the biggest bang for your buck in terms of the value the robot is able to deliver.

I think that there may come a day where every clinician’s walking around and there’s always a robot available to respond to “hey, hold this for me,” and I think that would be amazing. But for now, the question is whether the robot being like a third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing, when it could instead be working all night long to get things ready for the next shift.

[ Diligent Robotics ]

The present work is a collaborative research effort aimed at testing the effectiveness of a robot-assisted intervention administered in real clinical settings by real educators. Social robots dedicated to assisting persons with autism spectrum disorder (ASD) are rarely used in clinics. In a collaborative effort to bridge the gap between innovation in research and clinical practice, a team of engineers, clinicians and researchers working in the field of psychology developed and tested a robot-assisted educational intervention for children with low-functioning ASD (N = 20). A total of 14 lessons targeting requesting and turn-taking were elaborated, based on the Pivotal Training Method and principles of applied behavior analysis. Results showed that sensory rewards provided by the robot elicited more positive reactions than verbal praise from humans. The robot was of greatest benefit to children with a low level of disability. The educators were quite enthusiastic about children's progress in learning basic psychosocial skills from interactions with the robot. The robot nonetheless failed to act as a social mediator, as more prosocial behaviors were observed in the control condition, where instead of interacting with the robot children played with a ball. We discuss how to program robots for the distinct needs of individuals with ASD, how to harness robots' likability in order to enhance social skill learning, and how to arrive at a consensus about the standards of excellence that need to be met in interdisciplinary co-creation research. Our intuition is that robotic assistance, judged to be positive by educators, may contribute to the dissemination of innovative evidence-based practice for individuals with ASD.

Brain signals represent a communication modality that can allow users of assistive robots to specify high-level goals, such as the object to fetch and deliver. In this paper, we consider a screen-free Brain-Computer Interface (BCI), where the robot highlights candidate objects in the environment using a laser pointer, and the user goal is decoded from the evoked responses in the electroencephalogram (EEG). Having the robot present stimuli in the environment allows for more direct commands than traditional BCIs that require the use of graphical user interfaces. Yet bypassing a screen entails less control over stimulus appearances. In realistic environments, this leads to heterogeneous brain responses for dissimilar objects—posing a challenge for reliable EEG classification. We model object instances as subclasses to train specialized classifiers in the Riemannian tangent space, each of which is regularized by incorporating data from other objects. In multiple experiments with a total of 19 healthy participants, we show that our approach not only increases classification performance but is also robust to both heterogeneous and homogeneous objects. While especially useful in the case of a screen-free BCI, our approach can naturally be applied to other experimental paradigms with potential subclass structure.
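For readers who want a starting point, the snippet below sketches the standard Riemannian tangent-space pipeline that this kind of work builds on (a covariance matrix per EEG epoch, tangent-space projection, then a linear classifier), using the pyriemann and scikit-learn libraries. It does not implement the paper's regularized, subclass-specific classifiers; it is only the generic baseline approach, shown here with placeholder data.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace

# X: EEG epochs with shape (n_epochs, n_channels, n_samples); y: binary labels
# (target vs. non-target object). Random placeholder data stands in for real EEG.
X = np.random.randn(200, 32, 128)
y = np.random.randint(0, 2, size=200)

clf = make_pipeline(
    Covariances(estimator="oas"),     # one spatial covariance matrix per epoch
    TangentSpace(metric="riemann"),   # project SPD matrices into the tangent space
    LogisticRegression(max_iter=1000),
)
clf.fit(X, y)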

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICARSC 2020 – April 15-17, 2020 – [Online Conference]
ICRA 2020 – May 31-June 4, 2020 – [TBD]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
RSS 2020 – July 12-16, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

You need this dancing robot right now.

By Vanessa Weiß at UPenn.

[ KodLab ]

Remember Qoobo the headless robot cat? There’s a TINY QOOBO NOW!

It’s available now on a Japanese crowdfunding site, but I can’t tell if it’ll ship to other countries.

[ Qoobo ]

Just what we need, more of this thing.

[ Vstone ]

HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

[ HiBot ]

If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator, and nozzle not (yet) included. But we will provide the support to help you build the skill.

[ Misty Robotics ]

After seeing this tweet from Kate Darling that mentions an MIT experiment in which “a group of gerbils inhabited an architectural environment made of modular blocks, which were manipulated by a robotic arm in response to the gerbils’ movements,” I had to find a video of the robot arm gerbil habitat. The best I could do was this 2007 German remake, but it’s pretty good:

[ Lutz Dammbeck ]

We posted about this research almost a year ago when it came out in RA-L, but I’m not tired of watching the video yet.

Today’s autonomous drones have reaction times of tens of milliseconds, which is not enough for navigating fast in complex dynamic environments. To safely avoid fast moving objects, drones need low-latency sensors and algorithms. We depart from state of the art approaches by using event cameras, which are novel bioinspired sensors with reaction times of microseconds. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes at relative speeds up to 10 meters/second, both indoors and outdoors.

[ UZH ]

In this video we present the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The utilized system is a collision-tolerant flying robot capable of multi-modal Localization And Mapping fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled through a Graph-based Exploration Planner implementing a specific mode for vertical exploration. The collision-tolerance of the platform was of paramount importance especially due to the thin features of the involved geometry such as handrails. The whole mission was conducted fully autonomously.

[ CERBERUS ]

At Cognizant’s Inclusion in Tech: Work of Belonging conference, Cognizant VP and Managing Director of the Center for the Future of Work, Ben Pring, sits down with Mary “Missy” Cummings. Missy is currently a Professor at Duke University and the Director of the Duke Robotics Lab. Interestingly, Missy began her career as one of the first female fighter pilots in the U.S. Navy. Working in predominantly male fields – the military, tech, academia – Missy understands the prevalence of sexism, bias and gender discrimination.

Let’s hear more from Missy Cummings on, like, everything.

[ Duke ] via [ Cognizant ]

You don’t need to mountain bike for the Skydio 2 to be worth it, but it helps.

[ Skydio ]

Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

[ Robotika ]

SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotion system consists of four legs with 5 active DoF each. Additionally, a 6 DoF manipulation arm is available. All joints of the legs and the manipulation arm are sealed against water. The arm is pressure compensated, allowing deployment in deep-sea applications.

SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since it requires no extra energy to maintain its posture and position, compared to traditional underwater ROVs (Remotely Operated Vehicles), SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

[ DFKI ]

In collaboration with the Army and Marines, 16 active-duty Army soldiers and Marines used Near Earth’s technology to safely execute 64 resupply missions in an operational demonstration at Fort AP Hill, Virginia in Sep 2019. This video shows some of the modes used during the demonstration.

[ NEA ]

For those of us who aren’t either lucky enough or cursed enough to live with our robotic co-workers, HEBI suggests that now might be a great time to try simulation.

[ GitHub ]

DJI Phantom 4 Pro V2.0 is a complete aerial imaging solution, designed for the professional creator. Featuring a 1-inch CMOS sensor that can shoot 4K/60fps videos and 20MP photos, the Phantom 4 Pro V2.0 grants filmmakers absolute creative freedom. The OcuSync 2.0 HD transmission system ensures stable connectivity and reliability, five directions of obstacle sensing ensures additional safety, and a dedicated remote controller with a built-in screen grants even greater precision and control.

US $1600, or $2k with VR goggles.

[ DJI ]

Not sure why now is the right time to introduce the Fetch research robot, but if you forgot it existed, here’s a reminder.

[ Fetch ]

Two keynotes from the MBZIRC Symposium, featuring Oussama Khatib and Ron Arkin.

[ MBZIRC ]

And here are a couple of talks from the 2020 ROS-I Consortium.

Roger Barga, GM of AWS Robotics and Autonomous Services at Amazon shares some of the latest developments around ROS and advanced robotics in the cloud.

Alex Shikany, VP of Membership and Business Intelligence for A3 shares insights from his organization on the relationship between robotics growth and employment.

[ ROS-I ]

Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

[ Microsoft ]

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, which was posted today to arXiv.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that power efficiency and performance are maximized and the area of the chip is minimized. Heightening the challenge is the requirement that all this happen while obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
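As a hypothetical illustration of such a proxy reward (not Google’s actual formulation), an agent placing macros could receive, at the end of an episode, a single scalar that combines weighted estimates of power, delay, and area for the finished placement. In practice the estimators would need to be fast analytical proxies rather than full physical-design runs, so that many candidate placements can be scored during training.

def placement_reward(placement, est_power, est_delay, est_area,
                     w_power=1.0, w_delay=1.0, w_area=0.5):
    """Hypothetical end-of-episode reward: lower estimated power, delay, and
    area are all better, so the weighted sum is negated."""
    return -(w_power * est_power(placement)
             + w_delay * est_delay(placement)
             + w_area * est_area(placement))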

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – [ONLINE EVENT]
ICARSC 2020 – April 15-17, 2020 – [ONLINE EVENT]
ICRA 2020 – May 31-June 4, 2020 – [SEE ATTENDANCE SURVEY]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

UBTECH Robotics’ ATRIS, AIMBOT, and Cruzr robots were deployed at a Shenzhen hospital specializing in treating COVID-19 patients. The company says the robots, which are typically used in retail and hospitality scenarios, were modified to perform tasks that can help keep the hospital safer for everyone, especially front-line healthcare workers. The tasks include providing videoconferencing services between patients and doctors, monitoring the body temperatures of visitors and patients, and disinfecting designated areas.

The Third People’s Hospital of Shenzhen (TPHS), the only designated hospital for treating COVID-19 in Shenzhen, a metropolis with a population of more than 12.5 million, has introduced an intelligent anti-epidemic solution to combat the coronavirus.

AI robots are playing a key role. The UBTECH-developed robot trio, namely ATRIS, AIMBOT, and Cruzr, are giving a helping hand to monitor body temperature, detect people without masks, spray disinfectants and respond to medical inquiries.

[ UBTECH ]

Someone has spilled gold all over the place! Probably one of those St. Paddy’s leprechauns... Anyways... It happened near a Robotiq Wrist Camera and Epick setup, so it only took a couple of minutes to program and “pick and place” the mess up.

Even in situations like these, it’s important to stay positive and laugh a little. We had this ready and thought we’d still share. Stay safe!

[ Robotiq ]

HEBI Robotics is helping out with social distancing by controlling a robot arm in Austria from their lab in Pittsburgh.

Can’t be too careful!

[ HEBI Robotics ]

Thanks Dave!

SLIDER, a new robot under development at Imperial College London, reminds us a little bit of what SCHAFT was working on with its straight-legged design.

[ Imperial ]

Imitation learning is an effective and safe technique to train robot policies in the real world because it does not depend on an expensive random exploration process. However, due to the lack of exploration, learning policies that generalize beyond the demonstrated behaviors is still an open challenge. We present a novel imitation learning framework to enable robots to 1) learn complex real world manipulation tasks efficiently from a small number of human demonstrations, and 2) synthesize new behaviors not contained in the collected demonstrations. Our key insight is that multi-task domains often present a latent structure, where demonstrated trajectories for different tasks intersect at common regions of the state space. We present Generalization Through Imitation (GTI), a two-stage offline imitation learning algorithm that exploits this intersecting structure to train goal-directed policies that generalize to unseen start and goal state combinations.

[ GTI ]

Here are two excellent videos from UPenn’s Kod*lab showing the capabilities of their programmable compliant origami spring things.

[ Kod*lab ]

We met Bornlove when we were reporting on drones in Tanzania in 2018, and it’s good to see that he’s still improving on his built-from-scratch drone.

[ ADF ]

Laser. Guided. Sandwich. Stacking.

[ Kawasaki ]

The Self-Driving Car Research Studio is a highly expandable and powerful platform designed specifically for academic research. It includes the tools and components researchers need to start testing and validating their concepts and technologies on the first day, without spending time and resources on building DIY platforms or implementing hobby-level vehicles. The research studio includes a fleet of vehicles; software tools enabling researchers to work in Simulink, C/C++, Python, or ROS, with pre-built libraries and models and support for simulated environments; and even a set of reconfigurable floor panels with road patterns and a set of traffic signs. The research studio’s feature vehicle, QCar, is a 1/10-scale model vehicle powered by an NVIDIA Jetson TX2 supercomputer and equipped with LIDAR, 360-degree vision, a depth sensor, IMU, encoders, and other sensors, as well as user-expandable IO.

[ Quanser ]

Thanks Zuzana!

The Swarm-Probe Enabling ATEG Reactor, or SPEAR, is a nuclear electric propulsion spacecraft that uses a new, lightweight reactor moderator and advanced thermoelectric generators (ATEGs) to greatly reduce overall core mass. If the total mass of an NEP system could be reduced to levels that can be launched on smaller vehicles, these devices could deliver scientific payloads anywhere in the solar system.

One major destination of recent importance is Europa, one of the moons of Jupiter, which may contain traces of extraterrestrial life deep beneath the surface of its icy crust. Occasionally, the subsurface water on Europa violently breaks through the icy crust and bursts into the space above, creating a large water plume. One proposed method of searching for evidence of life on Europa is to orbit the moon and scan these plumes for ejected organic material. By deploying a swarm of Cubesats, these plumes can be flown through and analyzed multiple times to find important scientific data.

[ SPEAR ]

This hydraulic cyborg hand costs just $35.

Available next month in Japan.

[ Elekit ]

Microsoft is collaborating with researchers from Carnegie Mellon University and Oregon State University, collectively named Team Explorer, to compete in the DARPA Subterranean (SubT) Challenge. These challenges are designed to test how drones and robots perform in hazardous physical environments that humans can’t access safely. By participating in these challenges, the team hopes to find solutions that will help emergency first responders find survivors more quickly.

[ Team Explorer ]

Aalborg University Hospital is the largest hospital in the North Jutland region of Denmark. Up to 3,000 blood samples arrive here in the lab every day. They must be tested and sorted – a time-consuming and monotonous process which was done manually until now. The university hospital has now automated the procedure: a robot-based system and intelligent transport boxes ensure the quality of the samples – and show how workflows in hospitals can be simplified by automation.

[ Kuka ]

This video shows human-robot collaboration for assembly of a gearbox mount in a realistic replica of a production line of Volkswagen AG. Knowledge-based robot skills enable autonomous operation of a mobile dual arm robot side-by-side of a worker.

[ DFKI ]

A brief overview of what’s going on in Max Likhachev’s lab at CMU.

Always good to see PR2 keeping busy!

[ CMU ]

The Intelligent Autonomous Manipulation (IAM) Lab at the Carnegie Mellon University (CMU) Robotics Institute brings together researchers to address the challenges of creating general purpose robots that are capable of performing manipulation tasks in unstructured and everyday environments. Our research focuses on developing learning methods for robots to model tasks and acquire versatile and robust manipulation skills in a sample-efficient manner.

[ IAM Lab ]

Jesse Hostetler is an Advanced Computer Scientist in the Vision and Learning org at SRI International in Princeton, NJ. In this episode of The Dish TV they explore the different aspects of artificial intelligence, and creating robots that use sleep and dream states to prevent catastrophic forgetting.

[ SRI ]

On the latest episode of the AI Podcast, Lex interviews Anca Dragan from UC Berkeley.

Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

[ AI Podcast ]

When I reached Professor Guang-Zhong Yang on the phone last week, he was cooped up in a hotel room in Shanghai, where he had self-isolated after returning from a trip abroad. I wanted to hear from Yang, a widely respected figure in the robotics community, about the role that robots are playing in fighting the coronavirus pandemic. He’d been monitoring the situation from his room over the previous week, and during that time his only visitors were a hotel employee, who took his temperature twice a day, and a small wheeled robot, which delivered his meals autonomously.

An IEEE Fellow and founding editor of the journal Science Robotics, Yang is the former director and co-founder of the Hamlyn Centre for Robotic Surgery at Imperial College London. More recently, he became the founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University, often called the MIT of China. Yang wants to build the new institute into a robotics powerhouse, recruiting 500 faculty members and graduate students over the next three years to explore areas like surgical and rehabilitation robots, image-guided systems, and precision mechatronics.

“I ran a lot of the operations for the institute from my hotel room using Zoom,” he told me.

Yang is impressed by the different robotic systems being deployed as part of the COVID-19 response. There are robots checking patients for fever, robots disinfecting hospitals, and robots delivering medicine and food. But he thinks robotics can do even more.

Photo: Shanghai Jiao Tong University Professor Guang-Zhong Yang, founding dean of the Institute of Medical Robotics at Shanghai Jiao Tong University.

“Robots can be really useful to help you manage this kind of situation, whether to minimize human-to-human contact or as a front-line tool you can use to help contain the outbreak,” he says. While the robots currently being used rely on technologies that are mature enough to be deployed, he argues that roboticists should work more closely with medical experts to develop new types of robots for fighting infectious diseases.

“What I fear is that there is really no sustained or coherent effort in developing these types of robots,” he says. “We need an orchestrated effort in the medical robotics community, and also the research community at large, to really look at this more seriously.”

Yang calls for a global effort to tackle the problem. “In terms of the way to move forward, I think we need to be more coordinated globally,” he says. “Because many of the challenges require that we work collectively to deal with them.”

Our full conversation, edited for clarity and length, is below.

IEEE Spectrum: How is the situation in Shanghai?

Guang-Zhong Yang: I came back to Shanghai about 10 days ago, via Hong Kong, so I’m now under self-imposed isolation in a hotel room just to be cautious, for two weeks. The general feeling in Shanghai is that it’s really calm and orderly. Everything seems well under control. And as you probably know, in recent days the number of new cases is steadily dropping. So the main priority for the government is to restore normal routines, and also for companies to go back to work. Of course, people are still very cautious, and there are systematic checks in place. In my hotel, for instance, I get checked twice a day for my temperature to make sure that all the people in the hotel are well.

Are most people staying inside, are the streets empty?

No, the streets are not empty. In fact, in Minhang, next to Shanghai Jiao Tong University, things are going back to normal. Not at full capacity, but stores and restaurants are gradually opening. And people are thinking about the essential travel they need to do, and what they can do remotely. As you know, in China we have very good online ordering and delivery services, so people use them a lot more. I was really impressed by how the whole thing got under control, really.

Has Shanghai Jiao Tong University switched to online classes?

Yes. Since last week, the students are attending online lectures. The university has 1449 courses for undergrads and 657 for graduate students. I participated in some of them. It’s really well run. You can have the typical format with a presenter teaching the class, but you can also have part of the lecture with the students divided into groups and having discussions. Of course what’s really affected is laboratory-based work. So we’ll need to wait for some more time to get back into action.

What do you think of the robots being used to help fight the outbreak?

I’ve seen reports showing a variety of robots being deployed. Disinfection robots that use UV light in hospitals. Drones being used for transporting samples. There’s a prototype robot, developed by the Chinese Academy of Sciences, to remotely collect oropharyngeal swabs from patients for testing, so a medical worker doesn’t have to directly swab the patient. In my hotel, there’s a robot that brings my meals to my door. This little robot can manage to get into the lift, go to your room, and call you to open the door. I’m a roboticist myself and I find it striking how well this robot works every time! [Laughs.]

Photo: UVD Robots UVD Robots has shipped hundreds of ultraviolet-C disinfection robots like the one above to Chinese hospitals. 

After Japan’s Fukushima nuclear emergency, the robotics community realized that it needed to be better prepared. It seems that we’ve made progress with disaster-response robots, but what about dealing with pandemics?

I think that for events involving infectious diseases, like this coronavirus outbreak, when they happen, everybody realizes the importance of robots. The challenge is that at most research institutions, people are more concerned with specific research topics, and that’s indeed the work of a scientist—to dig deep into the scientific issues and solve those specific problems. But we also need to have a global view to deal with big challenges like this pandemic.

So I think what we need to do, starting now, is to have a more systematic effort to make sure those robots can be deployed when we need them. We just need to recompose ourselves and work to identify the technologies that are ready to be deployed, and what are the key directions we need to pursue. There’s a lot we can do. It’s not too late. Because this is not going to disappear. We have to see the worst before it gets better.

Click here for additional coronavirus coverage

So what should we do to be better prepared?

After a major crisis, when everything is under control, people’s priority is to go back to their normal routines. The last thing on people’s minds is, What should we do to prepare for the next crisis? And the thing is, you can’t predict when the next crisis will happen. So I think we need three levels of action, and it really has to be a global effort. One is at the government level, in particular funding agencies: how to make sure we can plan ahead and prepare for the worst.

Another level is the robotics community, including organizations like the IEEE: we need leadership to advocate for these issues and promote activities like robotics challenges. We see challenges for disasters, logistics, drones—how about a robotics challenge for infectious diseases? I was surprised, and a bit disappointed in myself, that we didn’t think about this before. So for the editorial board of Science Robotics, for instance, this will become an important topic for us to rethink.

And the third level is our interaction with front-line clinicians—our interaction with them needs to be stronger. We need to understand the requirements and not be obsessed with pure technologies, so we can ensure that our systems are effective, safe, and can be rapidly deployed. I think that if we can mobilize and coordinate our effort at all these three levels, that would be transformative. And we’ll be better prepared for the next crisis.

Are there projects taking place at the Institute of Medical Robotics that could help with this pandemic?

The institute has been in full operation for just over a year now. We have three main areas of research: the first is surgical robotics, which is my own main area; the second is rehabilitation and assistive robots; and the third is hospital and laboratory automation. One important lesson we learned from the coronavirus is that if we can detect and intervene early, we have a better chance of containing it. It's the same for other diseases. For cancer, early detection based on imaging and other sensing technologies is critical. So that's something we want to explore: how robotics, including technologies like laboratory automation, can help with early detection and intervention.


One area we are working on is automated intensive-care unit (ICU) wards. The idea is to build negative-pressure ICU wards for infectious diseases, equipped with robotic capabilities that can take care of certain critical-care tasks. Some tasks could be performed remotely by medical personnel, while others could be fully automated. A lot of the technology we already use in surgical robotics can be translated into this area. We're hoping to work with other institutions and share our expertise to continue developing this further. Indeed, this technology is not just for emergency situations; it will also be useful for the routine management of infectious-disease patients. We really need to rethink how hospitals are organized in the future to avoid unnecessary exposure and cross-infection.

Photo: Shanghai Jiao Tong University. Shanghai Jiao Tong University's Institute of Medical Robotics is researching areas like micro/nano systems, surgical and rehabilitation robotics, and human-robot interaction.

I've seen some recent headlines, like "China's tech fights back" and "Coronavirus is the first big test for futuristic tech." Many people expect technology to save the day.

When there's a major crisis like this pandemic, the general public wants to find a magic cure that will solve all the problems. I completely understand that expectation, but technology can't always do that, of course. What technology can do is help us be better prepared. For example, it's clear that in the last few years self-navigating robots with localization and mapping have become a mature technology, so we should see more of them used in situations like this. I'd also like to see more technologies developed for the front-line management of patients, like the robotic ICU I mentioned earlier. Another area is public transportation systems: can they have an element of disease prevention, using technology to minimize the spread of diseases so that lockdowns are only imposed as a last resort?

And then there’s the problem of people being isolated. You probably saw that Italy has imposed a total lockdown. That could have a major psychological impact, particularly for people who are vulnerable and living alone. There is one area of robotics, called social robotics, that could play a part in this as well. I’ve been in this hotel room by myself for days now—I’m really starting to feel the isolation…

We should have done a Zoom call.

Yes, we should. [Laughs.] I guess this isolation, or quarantine for many people, also gives us an opportunity to reflect on our lives, our work, and our daily routines. That's the silver lining we may see from this crisis.

Photo: Unity Drive Innovation. Unity Drive, a startup spun out of Hong Kong University of Science and Technology, is deploying self-driving vehicles to carry out contactless deliveries in three Chinese cities.

While some people say we need more technology during emergencies like this, others worry that companies and governments will use things like cameras and facial recognition to increase surveillance of individuals.

A while ago we published an article in Science Robotics listing the 10 grand challenges for robotics. One of those grand challenges concerns legal and ethical issues, which include what you mention in your question. Respecting privacy and being sensitive to individual and citizens' rights are very, very important, because we must operate within these legal and ethical boundaries. We should not use technologies that intrude on people's lives. You mentioned that some people say we don't have enough technology, and others say we have too much. I think both have a point. What we need to do is develop technologies that are appropriate for deployment in the right situations and for the right tasks.

Many researchers seem eager to help. What would you say to roboticists interested in helping fight this outbreak or prepare for the next one?

For medical robotics research, my experience is that for your technology to be effective, it has to be application oriented. You need to ensure that end users, such as the clinicians who will use your robot or, in the case of assistive robots, the patients, are deeply involved in the development of the technology. The second thing is to really think outside the box: how to develop radically different new technologies. Robotics research is very hands-on, and there's a tendency to adapt what's readily available. For your technology to have a major impact, you need to fundamentally rethink your research and innovation, not just follow the waves.

For example, at our institute we're investing a lot of effort in the development of micro- and nano-scale systems, as well as new materials that could one day be used in robots, because for microrobotic systems we can't rely on the traditional approach of motors and gears that we use in larger systems. So my suggestion is to work on technologies that not only have a deep science element but can also become part of a real-world application. Only then can we be sure to have strong technologies to deal with future crises.
