Feed aggregator

Many applications benefit from the use of multiple robots, but their scalability and applicability are fundamentally limited when relying on a central control station. Moving beyond the centralized approach, however, can increase the complexity of the embedded software and the sensitivity to the network topology, and can make deployment on physical devices tedious and error-prone. This work introduces a software-based solution to cope with these challenges on commercial hardware. We bring together our previous work on Buzz, the swarm-oriented programming language, and the many contributions of the Robot Operating System (ROS) community into a reliable workflow, from rapid prototyping of decentralized behaviors to robust field deployment. Buzz is a hardware-independent, domain-specific (swarm-oriented), and composable language. From simulation to the field, a Buzz script can remain unmodified and applies almost seamlessly to all units of a heterogeneous robotic team. We present the software structure of our solution and the swarm-oriented paradigms it encompasses. While a new behavior can be designed on a lightweight simulator, we show how our security mechanisms make field deployment more robust. In addition, developers can update their scripts in the field using a safe software release mechanism. Integrating Buzz with ROS, adding safety mechanisms, and enabling field updates are core contributions essential to swarm robotics deployment, from simulation to the field. We show the applicability of our work by implementing two practical decentralized scenarios: a robust generic task allocation strategy and an optimized area coverage algorithm. Both behaviors are explained and tested in simulation, then demonstrated with heterogeneous ground-and-air robotic teams.

Media influence people's perceptions of reality broadly and of technology in particular. Robot villains and heroes—from Ultron to Wall-E—have been shown to serve a specific cultivation function, shaping people's perceptions of those embodied social technologies, especially when individuals do not have direct experience with them. To date, however, little is understood about the nature of the conceptions people hold for what robots are, how they work, and how they may function in society, as well as the media antecedents and relational effects of those cognitive structures. This study takes a step toward bridging that gap by exploring relationships among individuals' recall of robot characters from popular media, their mental models for actual robots, and social evaluations of an actual robot. Findings indicate that mental models consist of a small set of common and tightly linked components (beyond which there is a good deal of individual difference), but robot character recall and evaluation have little association with whether people hold any of those components. Instead, data are interpreted to suggest that cumulative sympathetic evaluations of robot media characters may form heuristics that are primed by and engaged in social evaluations of actual robots, while technical content in mental models is associated with a more utilitarian approach to actual robots.

Research related to regulatory focus theory has shown that the way in which a message is conveyed can increase its effectiveness. While the theory has been used in several research fields, it has received little attention in human-robot interaction (HRI). In this paper, we investigate it in an in-the-wild scenario. More specifically, we are interested in how individuals react when a robot suddenly appears at their office doors. Will they interact with it, or will they ignore it? We report the results of an experimental study in which the robot approached 42 individuals. Twenty-nine of them interacted with the robot, while the others either ignored it or avoided any interaction with it. The robot displayed one of two types of behavior (i.e., promotion or prevention). Our results show that individuals who interacted with a robot that matched their regulatory focus type interacted with it significantly longer than individuals who did not experience regulatory fit. Other qualitative results are also reported, together with some reactions from the participants.

Online social networks (OSNs) are prime examples of socio-technical systems in which individuals interact via a technical platform. OSNs are highly volatile because users join, leave, and frequently change their interactions, which makes the robustness of such systems difficult to measure and to control. To quantify robustness, we propose a coreness value obtained from the directed interaction network. We study the emergence of large drop-out cascades of users leaving the OSN by means of an agent-based model. For agents, we define a utility function that depends on their relative reputation and their costs for interactions. The decision of agents to leave the OSN depends on this utility. Our aim is to prevent drop-out cascades by influencing specific agents with low utility. We identify strategies to control agents in the core and the periphery of the OSN such that drop-out cascades are significantly reduced and the robustness of the OSN is increased.
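The abstract names the ingredients of the model (a utility built from relative reputation and interaction costs, with drop-out decisions driven by that utility) without giving the exact equations. Purely as an illustration, here is a minimal agent-based sketch in Python that assumes in-degree as a reputation proxy and a linear cost per outgoing interaction; the paper's actual utility function, coreness measure, and control strategies are not reproduced here.

    import networkx as nx

    # Assumed parameters: benefit per incoming link, cost per outgoing link.
    BENEFIT, COST = 1.0, 0.6

    def utility(G, node):
        # Reputation proxy: attention received (in-degree); cost: interactions maintained (out-degree).
        return BENEFIT * G.in_degree(node) - COST * G.out_degree(node)

    def dropout_cascade(G):
        """Iteratively remove agents whose utility falls below zero; return the removal order."""
        removed, changed = [], True
        while changed:
            changed = False
            for node in list(G.nodes()):
                if utility(G, node) < 0:
                    G.remove_node(node)   # leaving the OSN deletes all of the agent's links
                    removed.append(node)
                    changed = True
        return removed

    G = nx.gnp_random_graph(500, 0.02, seed=1, directed=True)
    gone = dropout_cascade(G)
    print(f"{len(gone)} of 500 agents left; {G.number_of_nodes()} remain")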

Laparoscopic surgery is a representative operative method of minimally invasive surgery. However, most laparoscopic hand instruments consist of rigid and straight structures, which have serious limitations such as interference by the instruments and limited field of view of the endoscope. To improve the flexibility and dexterity of these instruments, we propose a new concept of a multijoint manipulator using a variable stiffness mechanism. The manipulator uses a magneto-rheological compound (MRC) whose rheological properties can be tuned by an external magnetic field. In this study, we changed the shape of the electromagnet and MRC to improve the performance of the variable stiffness joint we previously fabricated; further, we fabricated a prototype and performed basic evaluation of the joint using this prototype. The MRC was fabricated by mixing carbonyl iron particles and glycerol. The prototype single joint was assembled by combining MRC and electromagnets. The configuration of the joint indicates that it has a closed magnetic circuit. To examine the basic properties of the joint, we conducted preliminary experiments such as elastic modulus measurement and rigidity evaluation. We confirmed that the elastic modulus increased when a magnetic field was applied. The rigidity of the joint was also verified under bending conditions. Our results confirmed that the stiffness of the new joint changed significantly compared with the old joint depending on the presence or absence of a magnetic field, and the performance of the new joint also improved.

Quadruped robots require compliance to handle unexpected external forces, such as impulsive contact forces from rough terrain or from physical human-robot interaction. This paper presents a locomotion controller that uses Cartesian impedance control to coordinate tracking performance and desired compliance, along with quadratic programming (QP) to satisfy friction cone constraints, unilateral constraints, and torque limits. First, we resort to projected inverse dynamics to derive an analytical control law of Cartesian impedance control for constrained and underactuated systems (typically a quadruped robot). Second, we formulate a QP to compute the optimal torques that are as close as possible to the desired values resulting from Cartesian impedance control while satisfying all of the physical constraints. When the desired motion torques would violate the physical constraints, the QP yields a trade-off solution that sacrifices motion performance to keep the constraints satisfied. The proposed algorithm offers more insight into the system, benefiting from an analytical derivation and more efficient computation compared with hierarchical QP (HQP) controllers, which typically require solving three or more QPs. Experiments on the ANYmal robot over various challenging terrains show the efficiency and performance of our controller.
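The abstract gives the structure of the optimization rather than its exact matrices: find torques as close as possible to those commanded by the Cartesian impedance law while respecting friction cones, unilateral contacts, and torque limits. A schematic sketch of that QP in Python with CVXPY, using placeholder dimensions and randomly generated constraint data purely for illustration (not the paper's actual formulation):

    import cvxpy as cp
    import numpy as np

    np.random.seed(0)
    n = 12                                 # actuated joints (3 per leg of a quadruped, assumed)
    tau_des = np.random.randn(n)           # torques from the Cartesian impedance law (placeholder)
    tau_max = 40.0 * np.ones(n)            # actuator torque limits (illustrative)

    # Linear inequalities A @ tau <= b standing in for linearized friction-cone and
    # unilateral-contact constraints mapped into joint-torque space (placeholder values).
    A = np.random.randn(20, n)
    b = np.abs(np.random.randn(20)) + 5.0

    tau = cp.Variable(n)
    prob = cp.Problem(
        cp.Minimize(cp.sum_squares(tau - tau_des)),   # stay close to the impedance-control torques
        [A @ tau <= b,                                # contact-related constraints
         cp.abs(tau) <= tau_max])                     # torque limits
    prob.solve()
    print("optimal torques:", tau.value)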

How does AI need to evolve in order to better support effective decision-making in managing the many complex problems we face at every scale, from global climate change, collapsing ecosystems, international conflicts, and extremism through to all the dimensions of public policy, economics, and governance that affect human well-being? Research on complex decision-making at the individual human level (understanding what constitutes more, and less, effective decision-making behavior, and in particular the many pathways to failure in dealing with complex problems) informs a discussion of the potential for AI to help mitigate those failures and to enable a more robust, adaptive, and therefore more effective decision-making framework, calling for AI to move well beyond its current envelope of competencies.

Many real-world applications have been suggested in the swarm robotics literature. However, there is a general lack of understanding of what needs to be done for robot swarms to be useful and trusted by users in reality. This paper aims to investigate user perception of robot swarms in the workplace, and inform design principles for the deployment of future swarms in real-world applications. Three qualitative studies with a total of 37 participants were done across three sectors: fire and rescue, storage organization, and bridge inspection. Each study examined the users' perceptions using focus groups and interviews. In this paper, we describe our findings regarding: the current processes and tools used in these professions and their main challenges; attitudes toward robot swarms assisting them; and the requirements that would encourage them to use robot swarms. We found that there was a generally positive reaction to robot swarms for information gathering and automation of simple processes. Furthermore, a human in the loop is preferred when it comes to decision making. Recommendations to increase trust and acceptance are related to transparency, accountability, safety, reliability, ease of maintenance, and ease of use. Finally, we found that mutual shaping, a methodology to create a bidirectional relationship between users and technology developers to incorporate societal choices in all stages of research and development, is a valid approach to increase knowledge and acceptance of swarm robotics. This paper contributes to the creation of such a culture of mutual shaping between researchers and users, toward increasing the chances of a successful deployment of robot swarms in the physical realm.

A year ago, we visited Rwanda to see how Zipline’s autonomous, fixed-wing delivery drones were providing blood to hospitals and clinics across the country. We were impressed with both Zipline’s system design (involving dramatic catapult launches, parachute drops, and mid-air drone catching), as well as their model of operations, which minimizes waste while making critical supplies available in minutes almost anywhere in the country.

Since then, Zipline has expanded into Ghana, and has plans to start flying in India as well, but the COVID-19 pandemic is changing everything. Africa is preparing for the worst, while in the United States, Zipline is working with the Federal Aviation Administration to try and expedite safety and regulatory approvals for an emergency humanitarian mission with the goal of launching a medical supply delivery network that could help people maintain social distancing or quarantine when necessary by delivering urgent medication nearly to their doorsteps.

In addition to its existing role delivering blood products and medication, Zipline is acting as a centralized distribution network for COVID-19 supplies in Ghana and Rwanda. Things like personal protective equipment (PPE) will be delivered as needed by drone, ensuring that demand is met across the entire healthcare network. This has been a problem in the United States—getting existing supplies where they’re needed takes a lot of organization and coordination, which the US government is finding to be a challenge.

Photo: Zipline

Zipline says that their drones are able to reduce human involvement in the supply chain (a vector for infection), while reducing hospital overcrowding by making it more practical for non-urgent patients to receive care in local clinics closer to home. COVID-19 is also having indirect effects on healthcare, with social distancing and community lockdowns straining blood supplies. With its centralized distribution model, Zipline has helped Rwanda to essentially eliminate wasted (expired) blood products. “We probably waste more blood [in the United States] than is used in all of Rwanda,” Zipline CEO Keller Rinaudo told us. But it’s going to take more than blood supply to fight COVID-19, and it may hit Africa particularly hard.

“Things are earlier in Africa, you don’t see infections at the scale that we’re seeing in the U.S.,” says Rinaudo. “I also think Africa is responding much faster. Part of that is the benefit of seeing what’s happening in countries that didn’t take it seriously in the first few months where community spreading gets completely out of control. But it’s quite possible that COVID is going to be much more severe in countries that are less capable of locking down, where you have densely populated areas with people who can’t just stay in their house for 45 days.”

In an attempt to prepare for things getting worse, Rinaudo says that Zipline is stocking as many COVID-related products as possible, and they’re also looking at whether they’ll be able to deliver to neighborhood drop-off points, or perhaps directly to homes. “That’s something that Zipline has been on track to do for quite some time, and we’re considering ways of accelerating that. When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way.” This kind of system, Rinaudo points out, would also benefit people with non-COVID healthcare needs, who need to do their best to avoid hospitals. If a combination of telemedicine and home or neighborhood delivery of medical supplies means they can stay home, it would be a benefit for everyone. “This is a transformation of the healthcare system that’s already happening and needs to happen anyway. COVID is just accelerating it.”

“When everyone’s staying at home, that’s the ideal time for robots to be making deliveries in a contactless way” —Keller Rinaudo, Zipline

For the past year, Zipline, working closely with the FAA, has been planning a localized commercial trial of a medical drone delivery service that was scheduled to begin in North Carolina this fall. While COVID is more urgent, the work that’s already been done towards this trial puts Zipline in a good position to move quickly, says Rinaudo.

“All of the work that we did with the IPP [UAS Integration Pilot Program] is even more important, given this crisis. It means that we’ve already been working with the FAA in detail, and that’s made it possible for us to have a foundation to build on to help with the COVID-19 response.” Assuming that Zipline and the FAA can find a regulatory path forward, the company could begin setting up distribution centers that can support hospital networks for both interfacility delivery as well as contactless delivery to (eventually) neighborhood points and perhaps even homes. “It’s exactly the use case and value proposition that I was describing for Africa,” Rinaudo says.

Leveraging rapid deployment experience that it has from work with the U.S. Department of Defense, Zipline would launch one distribution center within just a few months of a go-ahead from the FAA. This single distribution center could cover an area representing up to 10 million people. “We definitely want to move quickly here,” Rinaudo tells us. Within 18 months, Zipline could theoretically cover the entire US, although he admits “that would be an insanely fast roll-out.”

The question, at this point, is how fast the FAA can take action to make innovative projects like this happen. Zipline, as far as we can tell, is ready to go. We also asked Rinaudo whether he thought that hospitals specifically, and the medical system in general, have the bandwidth to adopt a system like Zipline’s in the middle of a pandemic that’s already stretching people and resources to the limit.

“In the U.S. there’s this sense that this technology is impossible, whereas it’s already operating at multi-national scale, serving thousands of hospitals and health facilities, and it’s completely boring to the people who are benefiting from it,” Rinaudo says. “People in the U.S. have really not caught on that this is something that’s reliable and can dramatically improve our response to crises like this.”

[ Zipline ]

Back to IEEE COVID-19 Resources

For the past two months, the vegetables have arrived on the back of a robot. That’s how 16 communities in Zibo, in eastern China, have received fresh produce during the coronavirus pandemic. The robot is an autonomous van that uses lidars, cameras, and deep-learning algorithms to drive itself, carrying up to 1,000 kilograms in its cargo compartment.

The unmanned vehicle provides a “contactless” alternative to regular deliveries, helping reduce the risk of person-to-person infection, says Professor Ming Liu, a computer scientist at the Hong Kong University of Science and Technology (HKUST) and cofounder of Unity Drive Innovation, or UDI, the Shenzhen-based startup that developed the self-driving van.

Since February, UDI has been operating a small fleet of vehicles in Zibo and two other cities, Suzhou and Shenzhen, where they deliver meal boxes to checkpoint workers and spray disinfectant near hospitals. Combined, the vans have made more than 2,500 autonomous trips, often encountering busy traffic conditions despite the lockdown.

“It’s like Uber for packages—you use your phone to call a robot to pick up and deliver your boxes,” Professor Liu told IEEE Spectrum in an interview via Zoom.

Even before the pandemic, package shipments had been skyrocketing in China and elsewhere. Alibaba founder Jack Ma has said that his company is preparing to handle 1 billion packages per day. With the logistics sector facing major labor shortages, a 2016 McKinsey report predicted that autonomous vehicles will deliver 80 percent of parcels within 10 years.

That’s the future UDI is betting on. Unlike robocars developed by Waymo, Cruise, Zoox, and others, UDI’s vehicles are designed to transport goods, not people. They are similar to those of Nuro, a Silicon Valley startup, and Neolix, based in Beijing, which has deployed 50 robot vans in 10 Chinese cities to do mobile delivery and disinfection service.

Photo: UDI A self-driving vehicle delivers lunch boxes to workers in Pingshan District in Shenzhen. Since February, UDI’s autonomous fleet has made more than 800 meal deliveries.

Professor Liu, an IEEE Senior Member and director of the Intelligent Autonomous Driving Center at HKUST, is unfazed by the competition. He says UDI is ready to operate its vehicles on public roads thanks to the real-world experience it has gained from a string of recent projects. These involve large companies testing the robot vans inside their industrial parks.

One of them is Taiwanese electronics giant Foxconn. Since late 2018, it has used UDI vans to transport electronic parts and other items within its vast Shenzhen campus where some 200,000 workers reside. The robots have to navigate labyrinthine routes while avoiding an unpredictable mass of pedestrians, bicycles, and trucks.

Autonomous driving powered by deep learning

UDI’s vehicle, called Hercules, uses an industrial-grade PC running the Robot Operating System, or ROS. It’s also equipped with a drive-by-wire chassis with electric motors powered by an 8.4-kWh lithium-ion battery. Sensors include a main lidar, three auxiliary lidars, a stereo camera, four fisheye cameras, 16 sonars, redundant satellite navigation systems, an inertial measurement unit (IMU), and two wheel encoders.

The PC receives the lidar point-clouds and feeds them into the main perception algorithm, which consists of a convolutional neural network trained to detect and classify objects. The neural net outputs a set of 3D bounding boxes representing vehicles and other obstacles on the road. This process repeats 100 times per second.

Image: UDI UDI’s vehicle is equipped with a main lidar and three auxiliary lidars, a stereo camera, and various other sensors [top]. The cargo compartment can be modified based on the items to be transported and is not shown. The chassis [bottom] includes an electric motor, removable lithium-ion battery, vehicle control unit (VCU), motor control unit (MCU), electric power steering (EPS), electro-hydraulic brake (EHB), electronic parking brake (EPB), on-board charger (OBC), and direct-current-to-direct-current (DCDC) converter.

Another algorithm processes images from forward-facing cameras to identify road signs and traffic lights, and a third matches the point-clouds and IMU data to a global map, allowing the vehicle to self-localize. To accelerate, brake, and steer, the PC sends commands to two secondary computers running real-time operating systems and connected to the drive-by-wire modules.
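Since the onboard PC runs ROS, the pipeline described above (lidar-based detection, map-based localization, and commands handed down to the drive-by-wire layer) can be pictured as a node wired to a few topics. The sketch below is an illustration only, written in Python with rospy; the topic names, message choices, and placeholder detector, localizer, and planner are assumptions for the example, not UDI’s actual implementation, which forwards commands to two real-time secondary computers rather than publishing a velocity topic.

    import rospy
    from sensor_msgs.msg import PointCloud2, Imu
    from geometry_msgs.msg import Twist

    class HerculesPipeline:
        def __init__(self):
            # Hypothetical topics; the real vehicle sends commands to real-time ECUs.
            self.cmd_pub = rospy.Publisher("/hercules/cmd_vel", Twist, queue_size=1)
            rospy.Subscriber("/lidar/points", PointCloud2, self.on_cloud, queue_size=1)
            rospy.Subscriber("/imu/data", Imu, self.on_imu, queue_size=1)
            self.pose = None

        def on_cloud(self, msg):
            boxes = self.detect_obstacles(msg)   # CNN detector: 3D bounding boxes, ~100 Hz
            self.pose = self.localize(msg)       # point-cloud + IMU matching against a global map
            self.cmd_pub.publish(self.plan(boxes, self.pose))

        def on_imu(self, msg):
            pass                                 # IMU fusion for localization (omitted)

        # Placeholders standing in for the learned perception, localization, and planning modules.
        def detect_obstacles(self, cloud): return []
        def localize(self, cloud): return None
        def plan(self, boxes, pose): return Twist()

    if __name__ == "__main__":
        rospy.init_node("hercules_pipeline")
        HerculesPipeline()
        rospy.spin()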

Professor Liu says UDI faces more challenging driving conditions than competitors like Waymo and Nuro, which conduct their tests in suburban areas in the United States. In Shenzhen, for example, the UDI vans have to navigate through narrow streets with double-parked cars and aggressive motorcycles that whiz by, narrowly missing the robot.

Over the past couple of months, UDI has monitored its fleet from its headquarters. Using 5G, a remote operator can receive data from a vehicle with just 10 milliseconds of delay. In Shenzhen, human intervention was required about two dozen times when the robots encountered situations they didn’t know how to handle—too many vehicles on the road, false detections of traffic lights at night, or in one case, a worker coming out of a manhole.

Photo: UDI One of UDI’s autonomous vehicles equipped with a device that sprays disinfectant operates near a hospital in Shenzhen.

For safety, UDI programmed the vans to drive at low speeds of up to 30 kilometers per hour, though they can go faster. On a few occasions, remote operators took control because the vehicles were driving too slowly, becoming a road hazard and annoying nearby drivers. Professor Liu says it’s a challenge to balance cautiousness and aggressiveness in self-driving vehicles that will operate in the real world.

He notes that UDI vehicles have been collecting huge amounts of video and sensor data during their autonomous runs. This information will be useful to improve computer simulations of the vehicles and, later, the real vehicles themselves. UDI says it plans to open source part of the data.

Mass produced robot vans

Professor Liu has been working on advanced vehicles for nearly two decades. His projects include robotic cars, buses, and boats, with a focus on applying deep reinforcement learning to enable autonomous behaviors. He says UDI’s vehicles are not cars, and they aren’t unmanned ground robots, either—they are something in between. He likes to call them “running robots.”

Liu’s cofounders are Professor Xiaorui Zhu at Harbin Institute of Technology, in Shenzhen, and Professor Lujia Wang at the Shenzhen Institutes of Advanced Technology, part of the Chinese Academy of Sciences. “We want to be the first company in the world to achieve mass production of autonomous logistics vehicles,” says Wang, who is the CTO of UDI.

To do that, the startup has hired 100 employees and is preparing to put its assembly line into high gear in the next several months. “I’m not saying we solved all the problems,” Professor Liu says, citing system integration and cost as the biggest challenges. “Can we do better? Yes, it can always be better.”

Back to IEEE COVID-19 Resources

For the last several years, Diligent Robotics has been testing out its robot, Moxi, in hospitals in Texas. Diligent isn’t the only company working on hospital robots, but Moxi is unique in that it’s doing commercial mobile manipulation, picking supplies out of supply closets and delivering them to patient rooms, all completely autonomously.

A few weeks ago, Diligent announced US $10 million in new funding, which comes at a critical time, as the company addressed in their press release:

Now more than ever hospitals are under enormous stress, and the people bearing the most risk in this pandemic are the nurses and clinicians at the frontlines of patient care. Our mission with Moxi has always been focused on relieving tasks from nurses, giving them more time to focus on patients, and today that mission has a newfound meaning and purpose. Time and again, we hear from our hospital partners that Moxi not only returns time back to their day but also brings a smile to their face.

We checked in with Diligent CEO Andrea Thomaz last week to get a better sense of how Moxi is being used at hospitals. “As our hospital customers are implementing new protocols to respond to the [COVID-19] crisis, we are working with them to identify the best ways for Moxi to be deployed as a resource,” Thomaz told us. “The same kinds of delivery tasks we have been doing are still just as needed as ever, but we are also working with them to identify use cases where having Moxi do a delivery task also reduces infection risk to people in the environment.”

Since this is still something that Diligent and their hospital customers are actively working on, it’s a little early for them to share details. But in general, robots making deliveries means that people aren’t making deliveries, which has several immediate benefits. First, it means that overworked hospital staff can spend their time doing other things (like interacting with patients), and second, the robot is less likely to infect other people. It’s not just that the robot can’t get a virus (not that kind of virus, at any rate), but it’s also much easier to keep robots clean in ways that aren’t an option for humans. Besides wiping them down with chemicals, without too much trouble you could also have them autonomously disinfect themselves with UV, which is both efficient and effective.

While COVID-19 only emphasizes the importance of robots in healthcare, Diligent is tackling a particularly difficult set of problems with Moxi, involving full autonomy, manipulation, and human-robot interaction. Earlier this year, we spoke with Thomaz about how Moxi is starting to make a difference to hospital staff.

IEEE Spectrum: Last time we talked, Moxi was in beta testing. What’s different about Moxi now that it’s ready for full-time deployment?

Andrea Thomaz: During our beta trial, Moxi was deployed for over 120 days total, in four different hospitals (one of them was a children’s hospital, the other three were adult acute-care units), working alongside more than 125 nurses and clinicians. The people we were working with were so excited to be part of this kind of innovative research and to see how this new technology is going to actually impact workloads. Our focus on the beta trials was to try any idea that a customer had of how Moxi could provide value—if it seemed at all reasonable, then we would quickly try to mock something up and try it.

I think it validates our human-robot interaction approach to building the company, of getting the technology out there in front of customers to make sure that we’re building the product that they really need. We started to see common workflows across hospitals—there are different kinds of patient care that’s happening, but the kinds of support and supplies and things that are moving around the hospital are similar—and so then we felt that we had learned what we needed to learn from the beta trial and we were ready to launch with our first customers.

Photo: Diligent Robotics

The primary function that Moxi has right now, of restocking and delivery, was that there from the beginning? Or was that something that people asked for and you realized, oh, well, this is how a robot can actually be the most useful?

We knew from the beginning that our goal was to provide the kind of operational support that an end-to-end mobile manipulation platform can do, where you can go somewhere autonomously, pick something up, and bring it to another location and put it down. With each of our beta customers, we were very focused on opportunities where that was the case, where nurses were wasting time.

We did a lot of that kind of discovery, and then you just start seeing that it’s not rocket science—there are central supply places where things are kept around the hospital, and nurses are running back and forth to these places multiple times a day. We’d look at some particular task like admission buckets, or something else that nurses have to do every day, and then we say, where are the places that automation can really fit in? Some of that support is just navigation tasks, like going from one place to another; some actually involves manipulation, like you need to press this button or you need to pick up this thing. But with Moxi, we have a mobility and a manipulation component that we can put to work, to redefine workflows to include automation.

You mentioned that as part of the beta program that you were mocking the robot up to try all kinds of customer ideas. Was there something that hospitals really wanted the robot to do, that you mocked up and tried but just didn’t work at all?

We were pretty good at not setting ourselves up for failure. I think the biggest thing would be, if there was something that was going to be too heavy for the Kinova arm, or the Robotiq gripper, that’s something we just can’t do right now. But honestly, it was a pretty small percentage of things that we were kind of asked to manipulate that we had to say, oh no, sorry, we can’t lift that much or we can’t grip that wide. The other reason that things that we tried in the beta didn’t make it into our roadmap is if there was an idea that came up with only one of the beta sites. One example is delivering water: One of the beta sites was super excited about having water delivered to the patients every day, ahead of medication deliveries, which makes a lot of sense, but when we start talking to hospital leadership or other people, in other hospitals, it’s definitely just a “nice to have.” So for us, from a technical standpoint, it doesn’t make as much sense to devote a lot of resources into making water delivery a real task if it’s just going to be kind of a “nice to have” for a small percentage of our hospitals. That’s more how that R&D went—if we heard it from one hospital we’d ask, is this something that everybody wants, or just an idea that one person had. 

Let’s talk about how Moxi does what it does. How does the picking process work?

We’re focused on very structured manipulation; we’re not doing general purpose manipulation, and so we have a process for teaching Moxi a particular supply room. There are visual cues that are used to orient the robot to that supply room, and then once you are oriented you know where a bin is. Things don’t really move around a great deal in the supply room; the bigger variability is just how full each of the bins is.

The things that the robot is picking out of the bins are very well known, and we make sure that hospitals have a drop off location outside the patient’s room. In about half the hospitals we were in, they already had a drawer where the robot could bring supplies, but sometimes they didn’t have anything, and then we would install something like a mailbox on the wall. That’s something that we’re still working out exactly—it was definitely a prototype for the beta trials, and we’re working out how much that’s going to be needed in our future roll out.

“A robot needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around” —Andrea Thomaz, Diligent Robotics

These aren’t supply rooms that are dedicated to the robot—they’re also used by humans who may move things around unpredictably. How does Moxi deal with the added uncertainty?

That’s really the entire focus of our human-guided learning approach—having the robot build manipulation skills with perceptual cues that are telling it about different anchor points to do that manipulation skill with respect to, and learning particular grasp strategies for a particular category of objects. Those kinds of strategies are going to make that grasp into that bin more successful, and then also learning the sensory feedback that’s expected on a successful grasp versus an unsuccessful one, so that you have the ability to retry until you get the expected sensory feedback.
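In code terms, the retry behavior described here is a loop: execute a learned grasp strategy, compare the sensed outcome against the feedback signature expected for a successful grasp, and try again if they do not match. The self-contained Python sketch below shows only that control flow; the fake grasp, feedback check, and reset functions are stand-ins, not Diligent’s implementation.

    import random

    def grasp_with_retry(attempt_grasp, feedback_ok, reset, max_attempts=3):
        """Try a grasp up to max_attempts times, checking sensory feedback after each try."""
        for _ in range(max_attempts):
            result = attempt_grasp()
            if feedback_ok(result):        # sensed signature matches a successful grasp
                return True
            reset()                        # unsuccessful: release, retract, and retry
        return False

    # Stand-ins for the real robot: a grasp that closes the gripper by a random amount,
    # and a check against the closure range expected for this item class.
    random.seed(0)
    fake_grasp = lambda: random.uniform(0.0, 0.05)   # metres of gripper closure
    fake_check = lambda width: 0.01 < width < 0.04
    fake_reset = lambda: None

    print("picked item:", grasp_with_retry(fake_grasp, fake_check, fake_reset))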

There must also be plenty of uncertainty when Moxi is navigating around the hospital, which is probably full of people who’ve never seen it before and want to interact with it. To what extent is Moxi designed for those kinds of interactions? And if Moxi needs to be somewhere because it has a job to do, how do you mitigate or avoid them?

One of the things that we liked about hospitals as a semi-structured environment is that even the human interaction that you’re going to run into is structured as well, more so than somewhere like a shopping mall. In a hospital you have a kind of idea of the kind of people that are going to be interacting with the robot, and you can have some expectations about who they are and why they’re there and things, so that’s nice.

We had gone into the beta trial thinking, okay, we’re not doing any patient care, we’re not going into patients’ rooms, we’re bringing things to right outside the patient rooms, we’re mostly going to be interacting with nurses and staff and doctors. We had developed a lot of the social capabilities, little things that Moxi would do with the eyes or little sounds that would be made occasionally, really thinking about nurses and doctors that were going to be in the hallways interacting with Moxi. Within the first couple weeks at the first beta site, the patients and general public in the hospital were having so many more interactions with the robot than we expected. There were people who were, like, grandma is in the hospital, so the entire family comes over on the weekend, to see the robot that happens to be on grandma’s unit, and stuff like that. It was fascinating.

We always knew that being socially acceptable and fitting into the social fabric of the team was important to focus on. A robot needs to have both sides of that coin—it needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around. But in the first couple weeks in our first beta trial, we quickly had to ramp up and say, okay, what else can Moxi do to be social? We had the robot, instead of just going to the charger in between tasks, taking an extra social lap to see if there’s anybody that wants to take a selfie. We added different kinds of hot word detections, like for when people say “hi Moxi,” “good morning, Moxi,” or “how are you?” Just all these things that people were saying to the robot that we wanted to turn into fun interactions.

I would guess that this could sometimes be a little problematic, especially at a children’s hospital where you’re getting lots of new people coming in who haven’t seen a robot before—people really want to interact with robots and that’s independent of whether or not the robot has something else it’s trying to do. How much of a problem is that for Moxi?

That’s on our technical roadmap. We still have to figure out socially appropriate ways to disengage. But what we did learn in our beta trials is that there are even just different navigation paths that you can take, by understanding where crowds tend to be at different times. Like, maybe don’t take a path right by the cafeteria at noon, instead take the back hallway at noon. There are always different ways to get to where you’re going. Houston was a great example—in that hospital, there was this one skyway where you knew the robot was going to get held up for 10 or 15 minutes taking selfies with people, but there was another hallway two floors down that was always empty. So you can kind of optimize navigation time for the number of selfies expected, things like that.
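The Houston example amounts to a small cost model: each candidate route has a travel time plus an expected delay from crowd interactions that varies with the time of day, and the robot takes the cheapest one. A toy illustration in Python, with made-up numbers:

    # Hypothetical routes and timings, purely illustrative.
    ROUTES = {
        "skyway":       {"travel_min": 4, "selfie_delay_min": {"noon": 12, "night": 1}},
        "back_hallway": {"travel_min": 7, "selfie_delay_min": {"noon": 1,  "night": 1}},
    }

    def best_route(time_of_day):
        # Expected total time = travel time + expected delay from people stopping the robot.
        cost = lambda r: ROUTES[r]["travel_min"] + ROUTES[r]["selfie_delay_min"][time_of_day]
        return min(ROUTES, key=cost)

    print(best_route("noon"))    # -> back_hallway
    print(best_route("night"))   # -> skyway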

Photo: Diligent Robotics

To what extent is the visual design of Moxi intended to give people a sense of what its capabilities are, or aren’t?

For us, it started with the functional things that Moxi needs. We knew that we’re doing mobile manipulation, so we’d need a base, and we’d need an arm. And we knew we also wanted it to have a social presence, and so from those constraints, we worked with our amazing head of design, Carla Diana, on the look and feel of the robot. For this iteration, we wanted to make sure it didn’t have an overly humanoid look.

Some of the previous platforms that I used in academia, like the Simon robot or the Curie robot, had very realistic eyes. But when you start to talk about taking that to a commercial setting, now you have these eyeballs and eyelids and each of those is a motor that has to work every day all day long, so we realized that you can get a lot out of some simplified LED eyes, and it’s actually endearing to people to have this kind of simplified version of it. The eyes are a big component—that’s always been a big thing for me because of the importance of attention, and being able to communicate to people what the robot is paying attention to. Even if you don’t put eyeballs on a robot, people will find a thing to attribute attention to: They’ll find the camera and say, “oh, those are its eyes!” So I find it’s better to give the robot a socially expressive focus of attention.

I would say speech is the biggest one that we have drawn the line on. We want to make sure people don’t get the sense that Moxi can understand the full English language, because I think people are getting to be really used to speech interfaces, and we don’t have an Alexa or anything like that integrated yet. That could happen in the future, but we don’t have a real need for that right now, so it’s not there, so we want to make sure people don’t think of the robot as an Alexa or a Google Home or a Siri that you can just talk to, so we make sure that it just does beeps and whistles, and then that kind of makes sense to people. So they get that you can say stuff like “hi Moxi,” but that’s about it.

Otherwise, I think the design is really meant to be socially acceptable, we want to make sure people are comfortable, because like you’re saying, this is a robot that a lot of people are going to see for the first time, and we have to be really sensitive to the fact that the hospital is a stressful place for a lot of people, you’re already there with a sick family member and you might have a lot going on, and we want to make sure that we aren’t contributing to additional stress in your day.

You mentioned that you have a vision for human-robot teaming. Longer term, how do you feel like people should be partnering more directly with robots?

Right now, we’re really focused on looking at operational processes that hit two or three different departments in the hospital and require a nurse to do this and a patient care technician to do that and a pharmacy or a materials supply person to do something else. We’re working with hospitals to understand how that whole team of people is making some big operational workflow happen and where Moxi could fit in.

Some places where Moxi fits in, it’s a completely independent task. Other places, it might be a nurse on a unit calling Moxi over to do something, and so there might be a more direct interaction sometimes. Other times it might be that we’re able to connect to the electronic health record and infer automatically that something’s needed and then it really is just happening more in the background. We’re definitely open to both explicit interaction with the team where Moxi’s being called to do something in particular by someone, but I think some of the more powerful examples from our beta trials were ones that really take that cognitive burden off of people—where Moxi could just infer what could happen in the background.

In terms of direct collaboration, like side-by-side working together kind of thing, I do think there’s just such vast differences between—if you’re talking about a human and a robot cooperating on some manipulation task, robots are just—it’s going to be awhile before a robot is going to be as capable. If you already have a person there, doing some kind of manipulation task, it’s going to be hard for a robot to compete, and so I think it’s better to think about places where the person could be used for better things and you could hand something else off entirely to the robot.

So how feasible in the near-term is a nurse saying, “Moxi, could you hold this for me?” How complicated or potentially useful is that?

I think that’s a really interesting example. The question is whether being always available as a kind of third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing. We did a little bit of that kind of on-demand work, you know, “hey Moxi, come over here and do this thing,” in some of our beta trials, just to compare on-demand versus pre-planned activities. But if you can find things in workflows that can be automated, where you can infer what the robot should be doing, we think that’s going to be the biggest bang for your buck in terms of the value the robot is able to deliver.

I think that there may come a day where every clinician’s walking around and there’s always a robot available to respond to “hey, hold this for me,” and I think that would be amazing. But for now, the question is whether the robot being like a third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing, when it could instead be working all night long to get things ready for the next shift.

[ Diligent Robotics ]

Back to IEEE COVID-19 Resources

For the last several years, Diligent Robotics has been testing out its robot, Moxi, in hospitals in Texas. Diligent isn’t the only company working on hospital robots, but Moxi is unique in that it’s doing commercial mobile manipulation, picking supplies out of supply closets and delivering them to patient rooms, all completely autonomously.

A few weeks ago, Diligent announced US $10 million in new funding, which comes at a critical time, as the company addressed in their press release:

Now more than ever hospitals are under enormous stress, and the people bearing the most risk in this pandemic are the nurses and clinicians at the frontlines of patient care. Our mission with Moxi has always been focused on relieving tasks from nurses, giving them more time to focus on patients, and today that mission has a newfound meaning and purpose. Time and again, we hear from our hospital partners that Moxi not only returns time back to their day but also brings a smile to their face.

We checked in with Diligent CEO Andrea Thomaz last week to get a better sense of how Moxi is being used at hospitals. “As our hospital customers are implementing new protocols to respond to the [COVID-19] crisis, we are working with them to identify the best ways for Moxi to be deployed as a resource,” Thomaz told us. “The same kinds of delivery tasks we have been doing are still just as needed as ever, but we are also working with them to identify use cases where having Moxi do a delivery task also reduces infection risk to people in the environment.”

Since this is still something that Diligent and their hospital customers are actively working on, it’s a little early for them to share details. But in general, robots making deliveries means that people aren’t making deliveries, which has several immediate benefits. First, it means that overworked hospital staff can spend their time doing other things (like interacting with patients), and second, the robot is less likely to infect other people. It’s not just that the robot can’t get a virus (not that kind of virus, at any rate), but it’s also much easier to keep robots clean in ways that aren’t an option for humans. Besides wiping them down with chemicals, without too much trouble you could also have them autonomously disinfect themselves with UV, which is both efficient and effective.

While COVID-19 only emphasizes the importance of robots in healthcare, Diligent is tackling a particularly difficult set of problems with Moxi, involving full autonomy, manipulation, and human-robot interaction. Earlier this year, we spoke with Thomaz about how Moxi is starting to make a difference to hospital staff.

IEEE Spectrum: Last time we talked, Moxi was in beta testing. What’s different about Moxi now that it’s ready for full-time deployment?

Andrew Thomaz: During our beta trial, Moxi was deployed for over 120 days total, in four different hospitals (one of them was a children’s hospital, the other three were adult acute-care units), working alongside more than 125 nurses and clinicians. The people we were working with were so excited to be part of this kind of innovative research, and how this new technology is going to actually impact workloads. Our focus on the beta trials was to try any idea that a customer had of how Moxi could provide value—if it seemed at all reasonable, then we would quickly try to mock something up and try it.

I think it validates our human-robot interaction approach to building the company, of getting the technology out there in front of customers to make sure that we’re building the product that they really need. We started to see common workflows across hospitals—there are different kinds of patient care that’s happening, but the kinds of support and supplies and things that are moving around the hospital are similar—and so then we felt that we had learned what we needed to learn from the beta trial and we were ready to launch with our first customers.

Photo: Diligent Robotics

The primary function that Moxi has right now, of restocking and delivery, was that there from the beginning? Or was that something that people asked for and you realized, oh, well, this is how a robot can actually be the most useful.

We knew from the beginning that our goal was to provide the kind of operational support that an end-to-end mobile manipulation platform can do, where you can go somewhere autonomously, pick something up, and bring it to another location and put it down. With each of our beta customers, we were very focused on opportunities where that was the case, where nurses were wasting time.

We did a lot of that kind of discovery, and then you just start seeing that it’s not rocket science—there are central supply places where things are kept around the hospital, and nurses are running back and forth to these places multiple times a day. We’d look at some particular task like admission buckets, or something else that nurses have to do everyday, and then we say, where are the places that automation can really fit in? Some of that support is just navigation tasks, like going from one place to another, some actually involves manipulation, like you need to press this button or you need to pick up this thing. But with Moxi, we have a mobility and a manipulation component that we can put to work, to redefine workflows to include automation.

You mentioned that as part of the beta program that you were mocking the robot up to try all kinds of customer ideas. Was there something that hospitals really wanted the robot to do, that you mocked up and tried but just didn’t work at all?

We were pretty good at not setting ourselves up for failure. I think the biggest thing would be, if there was something that was going to be too heavy for the Kinova arm, or the Robotiq gripper, that’s something we just can’t do right now. But honestly, it was a pretty small percentage of things that we were kind of asked to manipulate that we had to say, oh no, sorry, we can’t lift that much or we can’t grip that wide. The other reason that things that we tried in the beta didn’t make it into our roadmap is if there was an idea that came up with only one of the beta sites. One example is delivering water: One of the beta sites was super excited about having water delivered to the patients every day, ahead of medication deliveries, which makes a lot of sense, but when we start talking to hospital leadership or other people, in other hospitals, it’s definitely just a “nice to have.” So for us, from a technical standpoint, it doesn’t make as much sense to devote a lot of resources into making water delivery a real task if it’s just going to be kind of a “nice to have” for a small percentage of our hospitals. That’s more how that R&D went—if we heard it from one hospital we’d ask, is this something that everybody wants, or just an idea that one person had. 

Let’s talk about how Moxi does what it does. How does the picking process work?

We’re focused on very structured manipulation; we’re not doing general purpose manipulation, and so we have a process for teaching Moxi a particular supply room. There are visual cues that are used to orient the robot to that supply room, and then once you are oriented you know where a bin is. Things don’t really move around a great deal in the supply room, the bigger variability is just how full each of the bins are.

The things that the robot is picking out of the bins are very well known, and we make sure that hospitals have a drop off location outside the patient’s room. In about half the hospitals we were in, they already had a drawer where the robot could bring supplies, but sometimes they didn’t have anything, and then we would install something like a mailbox on the wall. That’s something that we’re still working out exactly—it was definitely a prototype for the beta trials, and we’re working out how much that’s going to be needed in our future roll out.

“A robot needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around” —Andrea Thomaz, Diligent Robotics

These aren’t supply rooms that are dedicated to the robot—they’re also used by humans who may move things around unpredictably. How does Moxi deal with the added uncertainty?

That’s really the entire focus of our human-guided learning approach—having the robot build manipulation skills with perceptual cues that are telling it about different anchor points to do that manipulation skill with respect to, and learning particular grasp strategies for a particular category of objects. Those kinds of strategies are going to make that grasp into that bin more successful, and then also learning the sensory feedback that’s expected on a successful grasp versus an unsuccessful one, so that you have the ability to retry until you get the expected sensory feedback.
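
As a rough illustration of that retry-until-expected-feedback loop, here is a small sketch of ours (not Diligent's code); the callables and the fake gripper readings are hypothetical stand-ins for learned skills and real sensors.

```python
from typing import Callable

def pick_with_retries(
    execute_grasp: Callable[[], None],
    read_feedback: Callable[[], float],
    feedback_ok: Callable[[float], bool],
    reset: Callable[[], None],
    max_attempts: int = 3,
) -> bool:
    """Run a learned grasp skill, retrying until sensory feedback looks like success."""
    for _ in range(max_attempts):
        execute_grasp()                   # grasp relative to the learned anchor pose
        if feedback_ok(read_feedback()):  # e.g. gripper width or force within the expected range
            return True
        reset()                           # back off and line up for another attempt
    return False                          # give up and flag the task for a human

# Toy usage with simulated callables: success on the second attempt.
readings = iter([0.0, 0.021])             # fake gripper-width readings, in meters
print(pick_with_retries(
    execute_grasp=lambda: None,
    read_feedback=lambda: next(readings),
    feedback_ok=lambda w: 0.015 < w < 0.030,
    reset=lambda: None,
))
```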

There must also be plenty of uncertainty when Moxi is navigating around the hospital, which is probably full of people who’ve never seen it before and want to interact with it. To what extent is Moxi designed for those kinds of interactions? And if Moxi needs to be somewhere because it has a job to do, how do you mitigate or avoid them?

One of the things that we liked about hospitals as a semi-structured environment is that even the human interaction that you’re going to run into is structured as well, more so than somewhere like a shopping mall. In a hospital you have a pretty good idea of the kinds of people that are going to be interacting with the robot, and you can have some expectations about who they are and why they’re there, so that’s nice.

We had gone into the beta trial thinking, okay, we’re not doing any patient care, we’re not going into patients’ rooms, we’re bringing things to right outside the patient rooms, we’re mostly going to be interacting with nurses and staff and doctors. We had developed a lot of the social capabilities, little things that Moxi would do with the eyes or little sounds that would be made occasionally, really thinking about nurses and doctors that were going to be in the hallways interacting with Moxi. Within the first couple weeks at the first beta site, the patients and general public in the hospital were having so many more interactions with the robot than we expected. There were people who were, like, grandma is in the hospital, so the entire family comes over on the weekend, to see the robot that happens to be on grandma’s unit, and stuff like that. It was fascinating.

We always knew that being socially acceptable and fitting into the social fabric of the team was important to focus on. A robot needs to have both sides of that coin—it needs to do something functional, be a utility, and provide value, but also be socially acceptable and something that people want to have around. But in the first couple weeks in our first beta trial, we quickly had to ramp up and say, okay, what else can Moxi do to be social? We had the robot, instead of just going to the charger in between tasks, taking an extra social lap to see if there’s anybody that wants to take a selfie. We added different kinds of hot word detections, like for when people say “hi Moxi,” “good morning, Moxi,” or “how are you?” Just all these things that people were saying to the robot that we wanted to turn into fun interactions.

I would guess that this could sometimes be a little problematic, especially at a children’s hospital where you’re getting lots of new people coming in who haven’t seen a robot before—people really want to interact with robots and that’s independent of whether or not the robot has something else it’s trying to do. How much of a problem is that for Moxi?

That’s on our technical roadmap. We still have to figure out socially appropriate ways to disengage. But what we did learn in our beta trials is that there are even just different navigation paths that you can take, by understanding where crowds tend to be at different times. Like, maybe don’t take a path right by the cafeteria at noon, instead take the back hallway at noon. There are always different ways to get to where you’re going. Houston was a great example—in that hospital, there was this one skyway where you knew the robot was going to get held up for 10 or 15 minutes taking selfies with people, but there was another hallway two floors down that was always empty. So you can kind of optimize navigation time for the number of selfies expected, things like that.
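
The routing trade-off described here can be pictured as a tiny cost model: each corridor has a nominal travel time plus an expected "selfie delay" that depends on the hour, and the planner picks whichever is cheaper right now. This is our toy illustration; the corridor names and numbers are invented.

```python
# Hypothetical corridor data: minutes of travel plus expected crowd delay by hour.
ROUTES = {
    "skyway":       {"travel_min": 4, "selfie_delay_by_hour": {12: 12, 20: 1}},
    "back_hallway": {"travel_min": 7, "selfie_delay_by_hour": {12: 0, 20: 0}},
}

def best_route(hour: int) -> str:
    """Pick the corridor with the lowest travel time plus expected interaction delay."""
    def cost(name: str) -> float:
        route = ROUTES[name]
        return route["travel_min"] + route["selfie_delay_by_hour"].get(hour, 0)
    return min(ROUTES, key=cost)

print(best_route(12))   # back_hallway: slower walk, but no lunchtime crowd
print(best_route(20))   # skyway: a minute of selfies beats the long detour
```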

Photo: Diligent Robotics

To what extent is the visual design of Moxi intended to give people a sense of what its capabilities are, or aren’t?

For us, it started with the functional things that Moxi needs. We knew that we’re doing mobile manipulation, so we’d need a base, and we’d need an arm. And we knew we also wanted it to have a social presence, and so from those constraints, we worked with our amazing head of design, Carla Diana, on the look and feel of the robot. For this iteration, we wanted to make sure it didn’t have an overly humanoid look.

Some of the previous platforms that I used in academia, like the Simon robot or the Curie robot, had very realistic eyes. But when you start to talk about taking that to a commercial setting, now you have these eyeballs and eyelids and each of those is a motor that has to work every day all day long, so we realized that you can get a lot out of some simplified LED eyes, and it’s actually endearing to people to have this kind of simplified version of it. The eyes are a big component—that’s always been a big thing for me because of the importance of attention, and being able to communicate to people what the robot is paying attention to. Even if you don’t put eyeballs on a robot, people will find a thing to attribute attention to: They’ll find the camera and say, “oh, those are its eyes!” So I find it’s better to give the robot a socially expressive focus of attention.

I would say speech is the biggest one where we have drawn the line. We want to make sure people don’t get the sense that Moxi can understand the full English language, because people are getting really used to speech interfaces, and we don’t have an Alexa or anything like that integrated yet. That could happen in the future, but we don’t have a real need for it right now, so it’s not there. We want to make sure people don’t think of the robot as an Alexa or a Google Home or a Siri that you can just talk to, so we make sure that it just does beeps and whistles, and that kind of makes sense to people. They get that you can say stuff like “hi Moxi,” but that’s about it.

Otherwise, I think the design is really meant to be socially acceptable. We want to make sure people are comfortable because, like you’re saying, this is a robot that a lot of people are going to see for the first time. We have to be really sensitive to the fact that the hospital is a stressful place for a lot of people: you’re already there with a sick family member and you might have a lot going on, and we want to make sure that we aren’t contributing to additional stress in your day.

You mentioned that you have a vision for human-robot teaming. Longer term, how do you feel like people should be partnering more directly with robots?

Right now, we’re really focused on looking at operational processes that hit two or three different departments in the hospital and require a nurse to do this and a patient care technician to do that and a pharmacy or a materials supply person to do something else. We’re working with hospitals to understand how that whole team of people is making some big operational workflow happen and where Moxi could fit in.

Some places where Moxi fits in, it’s a completely independent task. Other places, it might be a nurse on a unit calling Moxi over to do something, and so there might be a more direct interaction sometimes. Other times it might be that we’re able to connect to the electronic health record and infer automatically that something’s needed, and then it really is just happening more in the background. We’re definitely open to explicit interaction with the team, where Moxi is being called to do something in particular by someone, but I think some of the more powerful examples from our beta trials were the ones that really take that cognitive burden off of people, where Moxi could just infer what should happen in the background.
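
As a purely hypothetical sketch of that background-inference idea, imagine mapping a few electronic-health-record event types to delivery tasks so that nothing has to be requested explicitly; every event name and task field below is made up.

```python
# Made-up mapping from EHR event types to robot delivery tasks.
EVENT_TO_TASK = {
    "patient_admitted": {"pick": "admission_bucket", "drop": "room_dropoff"},
    "lab_order_placed": {"pick": "specimen_kit", "drop": "room_dropoff"},
}

def tasks_for_events(events):
    """Turn a stream of EHR events into queued robot tasks, ignoring everything else."""
    queue = []
    for event in events:
        task = EVENT_TO_TASK.get(event["type"])
        if task:
            queue.append({**task, "room": event["room"]})
    return queue

print(tasks_for_events([
    {"type": "patient_admitted", "room": "4012"},
    {"type": "vitals_recorded", "room": "4012"},   # not actionable for the robot
]))
```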

In terms of direct collaboration, like a side-by-side working together kind of thing, I do think there are just such vast differences in capability. If you’re talking about a human and a robot cooperating on some manipulation task, it’s going to be a while before a robot is as capable. If you already have a person there doing some kind of manipulation task, it’s going to be hard for a robot to compete, and so I think it’s better to think about places where the person could be used for better things and you could hand something else off entirely to the robot.

So how feasible in the near-term is a nurse saying, “Moxi, could you hold this for me?” How complicated or potentially useful is that?

I think that’s a really interesting example. The question is whether being always available as a third hand for any particular clinician is the most valuable thing this mobile manipulation platform could be doing. We did do a little bit of that kind of on-demand work in some of our beta trials, you know, “hey Moxi, come over here and do this thing,” just to compare on-demand requests against pre-planned activities. If you can find things in workflows that can be automated, where what the robot is going to do can be inferred, we think that’s going to be the biggest bang for your buck in terms of the value the robot is able to deliver.

I think that there may come a day where every clinician’s walking around and there’s always a robot available to respond to “hey, hold this for me,” and I think that would be amazing. But for now, the question is whether the robot being like a third hand for any particular clinician is the most valuable thing that this mobile manipulation platform could be doing, when it could instead be working all night long to get things ready for the next shift.

[ Diligent Robotics ]


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICARSC 2020 – April 15-17, 2020 – [Online Conference]
ICRA 2020 – May 31-June 4, 2020 – [TBD]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
RSS 2020 – July 12-16, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

You need this dancing robot right now.

By Vanessa Weiß at UPenn.

[ KodLab ]

Remember Qoobo the headless robot cat? There’s a TINY QOOBO NOW!

It’s available now on a Japanese crowdfunding site, but I can’t tell if it’ll ship to other countries.

[ Qoobo ]

Just what we need, more of this thing.

[ Vstone ]

HiBot, which just received an influx of funding, is adding new RaaS (robotics as a service) offerings to its collection of robot arms and snakebots.

[ HiBot ]

If social distancing already feels like too much work, Misty is like that one-in-a-thousand child that enjoys cleaning. See her in action here as a robot disinfector and sanitizer for common and high-touch surfaces. Alcohol reservoir, servo actuator, and nozzle not (yet) included. But we will provide the support to help you build the skill.

[ Misty Robotics ]

After seeing this tweet from Kate Darling that mentions an MIT experiment in which “a group of gerbils inhabited an architectural environment made of modular blocks, which were manipulated by a robotic arm in response to the gerbils’ movements,” I had to find a video of the robot arm gerbil habitat. The best I could do was this 2007 German remake, but it’s pretty good:

[ Lutz Dammbeck ]

We posted about this research almost a year ago when it came out in RA-L, but I’m not tired of watching the video yet.

Today’s autonomous drones have reaction times of tens of milliseconds, which is not enough for navigating fast in complex dynamic environments. To safely avoid fast moving objects, drones need low-latency sensors and algorithms. We depart from state of the art approaches by using event cameras, which are novel bioinspired sensors with reaction times of microseconds. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes at relative speeds up to 10 meters/second, both indoors and outdoors.

[ UZH ]

In this video we present the autonomous exploration of a staircase with four sub-levels and the transition between two floors of the Satsop Nuclear Power Plant during the DARPA Subterranean Challenge Urban Circuit. The utilized system is a collision-tolerant flying robot capable of multi-modal Localization And Mapping fusing LiDAR, vision and inertial sensing. Autonomous exploration and navigation through the staircase is enabled through a Graph-based Exploration Planner implementing a specific mode for vertical exploration. The collision-tolerance of the platform was of paramount importance especially due to the thin features of the involved geometry such as handrails. The whole mission was conducted fully autonomously.

[ CERBERUS ]

At Cognizant’s Inclusion in Tech: Work of Belonging conference, Cognizant VP and Managing Director of the Center for the Future of Work, Ben Pring, sits down with Mary “Missy” Cummings. Missy is currently a Professor at Duke University and the Director of the Duke Robotics Lab. Interestingly, Missy began her career as one of the first female fighter pilots in the U.S. Navy. Working in predominantly male fields – the military, tech, academia – Missy understands the prevalence of sexism, bias and gender discrimination.

Let’s hear more from Missy Cummings on, like, everything.

[ Duke ] via [ Cognizant ]

You don’t need to mountain bike for the Skydio 2 to be worth it, but it helps.

[ Skydio ]

Here’s a look at one of the preliminary simulated cave environments for the DARPA SubT Challenge.

[ Robotika ]

SherpaUW is a hybrid walking and driving exploration rover for subsea applications. The locomotion system consists of four legs with 5 active DoF each. Additionally, a 6-DoF manipulation arm is available. All joints of the legs and the manipulation arm are sealed against water. The arm is pressure compensated, allowing deployment in deep-sea applications.

SherpaUW’s hybrid crawler design is intended to allow for extended long-term missions on the sea floor. Since it requires no extra energy to maintain its posture and position compared to traditional underwater ROVs (Remotely Operated Vehicles), SherpaUW is well suited for repeated and precise sampling operations, for example monitoring black smokers over a longer period of time.

[ DFKI ]

In collaboration with the Army and Marines, 16 active-duty Army soldiers and Marines used Near Earth’s technology to safely execute 64 resupply missions in an operational demonstration at Fort AP Hill, Virginia in Sep 2019. This video shows some of the modes used during the demonstration.

[ NEA ]

For those of us who aren’t either lucky enough or cursed enough to live with our robotic co-workers, HEBI suggests that now might be a great time to try simulation.

[ GitHub ]

DJI Phantom 4 Pro V2.0 is a complete aerial imaging solution, designed for the professional creator. Featuring a 1-inch CMOS sensor that can shoot 4K/60fps videos and 20MP photos, the Phantom 4 Pro V2.0 grants filmmakers absolute creative freedom. The OcuSync 2.0 HD transmission system ensures stable connectivity and reliability, five directions of obstacle sensing ensures additional safety, and a dedicated remote controller with a built-in screen grants even greater precision and control.

US $1600, or $2k with VR goggles.

[ DJI ]

Not sure why now is the right time to introduce the Fetch research robot, but if you forgot it existed, here’s a reminder.

[ Fetch ]

Two keynotes from the MBZIRC Symposium, featuring Oussama Khatib and Ron Arkin.

[ MBZIRC ]

And here are a couple of talks from the 2020 ROS-I Consortium.

Roger Barga, GM of AWS Robotics and Autonomous Services at Amazon, shares some of the latest developments around ROS and advanced robotics in the cloud.

Alex Shikany, VP of Membership and Business Intelligence for A3, shares insights from his organization on the relationship between robotics growth and employment.

[ ROS-I ]

Many tech companies are trying to build machines that detect people’s emotions, using techniques from artificial intelligence. Some companies claim to have succeeded already. Dr. Lisa Feldman Barrett evaluates these claims against the latest scientific evidence on emotion. What does it mean to “detect” emotion in a human face? How often do smiles express happiness and scowls express anger? And what are emotions, scientifically speaking?

[ Microsoft ]

There’s been a lot of intense and well-funded work developing chips that are specially designed to perform AI algorithms faster and more efficiently. The trouble is that it takes years to design a chip, and the universe of machine learning algorithms moves a lot faster than that. Ideally you want a chip that’s optimized to do today’s AI, not the AI of two to five years ago. Google’s solution: have an AI design the AI chip.

“We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI, with each fueling advances in the other,” they write in a paper describing the work, which was posted to arXiv today.

“We have already seen that there are algorithms or neural network architectures that… don’t perform as well on existing generations of accelerators, because the accelerators were designed like two years ago, and back then these neural nets didn't exist,” says Azalia Mirhoseini, a senior research scientist at Google. “If we reduce the design cycle, we can bridge the gap.”

Mirhoseini and senior software engineer Anna Goldie have come up with a neural network that learns to do a particularly time-consuming part of design called placement. After studying chip designs long enough, it can produce a design for a Google Tensor Processing Unit in less than 24 hours that beats several weeks’ worth of design effort by human experts in terms of power, performance, and area.

Placement is so complex and time-consuming because it involves placing blocks of logic and memory, or clusters of those blocks called macros, in such a way that power is minimized, performance is maximized, and the area of the chip is minimized. Heightening the challenge is the requirement that all this happen while at the same time obeying rules about the density of interconnects. Goldie and Mirhoseini targeted chip placement because, even with today’s advanced tools, it takes a human expert weeks of iteration to produce an acceptable design.

Goldie and Mirhoseini modeled chip placement as a reinforcement learning problem. Reinforcement learning systems, unlike typical deep learning, do not train on a large set of labeled data. Instead, they learn by doing, adjusting the parameters in their networks according to a reward signal when they succeed. In this case, the reward was a proxy measure of a combination of power reduction, performance improvement, and area reduction. As a result, the placement-bot becomes better at its task the more designs it does.
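
As a schematic illustration (not Google's code), the proxy reward can be thought of as a weighted combination of cheap post-placement estimates of power, performance, and area; the estimator fields and weights below are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class PlacementMetrics:
    power_est: float   # estimated power, e.g. derived from a wirelength proxy
    perf_est: float    # estimated performance, e.g. negative critical-path delay
    area_est: float    # estimated occupied area / congestion proxy

def proxy_reward(m: PlacementMetrics, w_power=1.0, w_perf=1.0, w_area=0.5) -> float:
    """Higher reward means lower power, higher performance, and smaller area."""
    return -w_power * m.power_est + w_perf * m.perf_est - w_area * m.area_est

# Toy comparison of two candidate placements after an episode.
a = PlacementMetrics(power_est=3.2, perf_est=0.8, area_est=10.0)
b = PlacementMetrics(power_est=2.9, perf_est=0.7, area_est=10.5)
print(proxy_reward(a), proxy_reward(b))
```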

The team hopes AI systems like theirs will lead to the design of “more chips in the same time period, and also chips that run faster, use less power, cost less to build, and use less area,” says Goldie.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – [ONLINE EVENT]
ICARSC 2020 – April 15-17, 2020 – [ONLINE EVENT]
ICRA 2020 – May 31-June 4, 2020 – [SEE ATTENDANCE SURVEY]
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

UBTECH Robotics’ ATRIS, AIMBOT, and Cruzr robots were deployed at a Shenzhen hospital specialized in treating COVID-19 patients. The company says the robots, which are typically used in retail and hospitality scenarios, were modified to perform tasks that can help keep the hospital safer for everyone, especially front-line healthcare workers. The tasks include providing videoconferencing services between patients and doctors, monitoring the body temperatures of visitors and patients, and disinfecting designated areas.

The Third People’s Hospital of Shenzhen (TPHS), the only designated hospital for treating COVID-19 in Shenzhen, a metropolis with a population of more than 12.5 million, has introduced an intelligent anti-epidemic solution to combat the coronavirus.

AI robots are playing a key role. The UBTECH-developed robot trio, namely ATRIS, AIMBOT, and Cruzr, are giving a helping hand to monitor body temperature, detect people without masks, spray disinfectants and provide medical inquiries.

[ UBTECH ]

Someone has spilled gold all over the place! Probably one of those St. Paddy’s leprechauns... Anyways... It happened near a Robotiq Wrist Camera and Epick setup, so it only took a couple of minutes to program and “pick and place” the mess up.

Even in situations like these, it’s important to stay positive and laugh a little. We had this ready and thought we’d still share. Stay safe!

[ Robotiq ]

HEBI Robotics is helping out with social distancing by controlling a robot arm in Austria from their lab in Pittsburgh.

Can’t be too careful!

[ HEBI Robotics ]

Thanks Dave!

SLIDER, a new robot under development at Imperial College London, reminds us a little bit of what SCHAFT was working on with its straight-legged design.

[ Imperial ]

Imitation learning is an effective and safe technique to train robot policies in the real world because it does not depend on an expensive random exploration process. However, due to the lack of exploration, learning policies that generalize beyond the demonstrated behaviors is still an open challenge. We present a novel imitation learning framework to enable robots to 1) learn complex real world manipulation tasks efficiently from a small number of human demonstrations, and 2) synthesize new behaviors not contained in the collected demonstrations. Our key insight is that multi-task domains often present a latent structure, where demonstrated trajectories for different tasks intersect at common regions of the state space. We present Generalization Through Imitation (GTI), a two-stage offline imitation learning algorithm that exploits this intersecting structure to train goal-directed policies that generalize to unseen start and goal state combinations.

[ GTI ]

Here are two excellent videos from UPenn’s Kod*lab showing the capabilities of their programmable compliant origami spring things.

[ Kod*lab ]

We met Bornlove when we were reporting on drones in Tanzania in 2018, and it’s good to see that he’s still improving on his built-from-scratch drone.

[ ADF ]

Laser. Guided. Sandwich. Stacking.

[ Kawasaki ]

The Self-Driving Car Research Studio is a highly expandable and powerful platform designed specifically for academic research. It includes the tools and components researchers need to start testing and validating their concepts and technologies on the first day, without spending time and resources on building DIY platforms or implementing hobby-level vehicles. The research studio includes a fleet of vehicles, software tools enabling researchers to work in Simulink, C/C++, Python, or ROS, with pre-built libraries and models and simulated environment support, even a set of reconfigurable floor panels with road patterns and a set of traffic signs. The research studio’s feature vehicle, QCar, is a 1/10-scale model vehicle powered by an NVIDIA Jetson TX2 supercomputer and equipped with LIDAR, 360-degree vision, depth sensor, IMU, encoders, and other sensors, as well as user-expandable IO.

[ Quanser ]

Thanks Zuzana!

The Swarm-Probe Enabling ATEG Reactor, or SPEAR, is a nuclear electric propulsion spacecraft that uses a new, lightweight reactor moderator and advanced thermoelectric generators (ATEGs) to greatly reduce overall core mass. If the total mass of an NEP system could be reduced to levels that could be launched on smaller vehicles, these devices could deliver scientific payloads anywhere in the solar system.

One major destination of recent importance is Europa, one of the moons of Jupiter, which may contain traces of extraterrestrial life deep beneath the surface of its icy crust. Occasionally, the subsurface water on Europa violently breaks through the icy crust and bursts into the space above, creating a large water plume. One proposed method of searching for evidence of life on Europa is to orbit the moon and scan these plumes for ejected organic material. By deploying a swarm of Cubesats, these plumes can be flown through and analyzed multiple times to find important scientific data.

[ SPEAR ]

This hydraulic cyborg hand costs just $35.

Available next month in Japan.

[ Elekit ]

Microsoft is collaborating with researchers from Carnegie Mellon University and Oregon State University to compete in the DARPA Subterranean (SubT) challenges, collectively named Team Explorer. These challenges are designed to test how drones and robots perform in hazardous physical environments that humans can’t access safely. By participating in these challenges, these teams hope to find a solution that will assist emergency first responders in finding survivors more quickly.

[ Team Explorer ]

Aalborg University Hospital is the largest hospital in the North Jutland region of Denmark. Up to 3,000 blood samples arrive here in the lab every day. They must be tested and sorted – a time-consuming and monotonous process which was done manually until now. The university hospital has now automated the procedure: a robot-based system and intelligent transport boxes ensure the quality of the samples – and show how workflows in hospitals can be simplified by automation.

[ Kuka ]

This video shows human-robot collaboration for assembly of a gearbox mount in a realistic replica of a production line of Volkswagen AG. Knowledge-based robot skills enable autonomous operation of a mobile dual-arm robot side by side with a worker.

[ DFKI ]

A brief overview of what’s going on in Max Likhachev’s lab at CMU.

Always good to see PR2 keeping busy!

[ CMU ]

The Intelligent Autonomous Manipulation (IAM) Lab at the Carnegie Mellon University (CMU) Robotics Institute brings together researchers to address the challenges of creating general purpose robots that are capable of performing manipulation tasks in unstructured and everyday environments. Our research focuses on developing learning methods for robots to model tasks and acquire versatile and robust manipulation skills in a sample-efficient manner.

[ IAM Lab ]

Jesse Hostetler is an Advanced Computer Scientist in the Vision and Learning org at SRI International in Princeton, NJ. In this episode of The Dish TV, they explore different aspects of artificial intelligence and creating robots that use sleep and dream states to prevent catastrophic forgetting.

[ SRI ]

On the latest episode of the AI Podcast, Lex interviews Anca Dragan from UC Berkeley.

Anca Dragan is a professor at Berkeley, working on human-robot interaction -- algorithms that look beyond the robot’s function in isolation, and generate robot behavior that accounts for interaction and coordination with human beings.

[ AI Podcast ]

