Feed aggregator

Origami folding is an ancient art that holds promise for creating compliant and adaptable mechanisms, but it has yet to be extensively studied for granular environments. At the same time, biological systems exploit anisotropic body forces for locomotion, such as the frictional anisotropy of a snake’s skin. In this work, we explore how foldable origami feet can be used to passively induce an anisotropic force response in granular media by varying their resistive plane. We present a reciprocating burrower that converts pure symmetric linear motion into directed burrowing motion using a pair of deployable origami feet, one on either end. We also present an application of the reduced-order granular Resistive Force Theory model to inform the design of deformable structures, and we compare its predictions with results from experiments and Discrete Element Method simulations. Through a single actuator, and without the use of advanced controllers or sensors, these origami feet enable burrowing locomotion. In this paper, we achieve burrowing translation ratios (net forward motion relative to overall linear actuation) of more than 46 percent by changing foot design without altering overall foot size. Specifically, anisotropic folding-foot parameters should be tuned for optimal performance given a linear actuator’s stroke length.
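
To make the translation-ratio metric above concrete, here is a minimal sketch of how it could be computed; the function name and the stroke and displacement values are illustrative assumptions, not numbers from the paper.

```python
def translation_ratio(net_forward_motion_mm: float, total_actuation_mm: float) -> float:
    """Net forward displacement per unit of overall linear actuator travel."""
    if total_actuation_mm <= 0:
        raise ValueError("total actuation must be positive")
    return net_forward_motion_mm / total_actuation_mm

# Hypothetical example: a 50 mm actuator stroke that produces 23.5 mm of net
# burrowing progress corresponds to a translation ratio of 0.47, i.e. 47%,
# just above the >46% figure reported in the abstract.
print(f"{translation_ratio(23.5, 50.0):.0%}")
```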



With the aid of crystals known as perovskites, solar cells are increasingly breaking records in how well they convert sunlight to electricity. Now a new automated system could make those records fall even faster. North Carolina State University’s RoboMapper can analyze how well perovskites might perform in solar cells, using roughly one-tenth to one-fiftieth the time, cost, and energy of either manual labor or previous robotic platforms, its inventors say.

The most common solar cells use silicon to convert light to electricity. These devices are rapidly approaching their theoretical conversion efficiency limit of 29.4 percent; modern commercial silicon solar cells now reach efficiencies of more than 24 percent, and the best lab cell has an efficiency of 26.8 percent.

One strategy to boost a solar cell’s efficiency is by stacking two different light-absorbing materials together into one device. This tandem method increases the spectrum of sunlight the solar cell can harvest. A common approach with tandem cells is to use a top cell made of perovskites to absorb higher-energy visible light and a bottom cell made of silicon for lower-energy infrared rays. Last year scientists unveiled the first perovskite-silicon tandem solar cells to pass the 30 percent efficiency threshold, and last month another group reported the same milestone.

In conventional materials research, scientists prepare a sample on a chip and then go through multiple steps to examine it using different instruments. Existing automation efforts “tend to emulate human workflows—we tend to process materials one parameter at a time,” says Aram Amassian, a materials scientist at North Carolina State University, in Raleigh.

However, modern genetics and pharmaceutical analysis often achieve high throughput by placing dozens of samples on each plate and examining them all at once. RoboMapper also follows this strategy, using printing techniques to miniaturize the material samples.

“We’ve benefited a lot from hardware interoperability with biology and chemistry, such as in liquid handling,” Amassian says. For RoboMapper, however, Amassian and his team had to develop new protocols for handling perovskite materials, as well as characterization experiments that differ from those found in chemistry automation. “One particular development we had to make is to make sure that characterization instruments can handle the high density of materials on a chip with automation. This required a little bit of engineering on both the hardware and software side.”

One key to saving time, energy, material, and money was to shrink the sample size by a factor of 1,000. “The print size is on the order of 50 to 150 [micrometers], while most other tools create samples on the order of centimeters,” Amassian says. “Typically, we print picoliter to nanoliter volumes while other platforms print or coat microliters.”
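
For scale, a quick unit check on the volumes Amassian mentions (the specific sample volumes below are illustrative assumptions, not measured values): moving from microliter-scale coatings to nanoliter-scale printed dots is roughly the thousandfold reduction described above.

```python
# 1 microliter = 1,000 nanoliters = 1,000,000 picoliters.
NL_PER_UL = 1_000
PL_PER_UL = 1_000_000

conventional_sample_uL = 1.0   # illustrative: a coated sample on the microliter scale
robomapper_sample_nL = 1.0     # illustrative: a printed dot on the nanoliter scale

reduction = (conventional_sample_uL * NL_PER_UL) / robomapper_sample_nL
print(f"Material per sample reduced roughly {reduction:,.0f}-fold")  # ~1,000-fold
```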

Perovskite Properties for Pennies

In the first tests of RoboMapper, the scientists analyzed 150 different perovskite compositions. In all, RoboMapper was 12 percent the cost, nine times as fast, and 18 times as energy efficient as other robotic platforms. And it was 2 percent the cost, 14 times as fast, and 26 times as energy efficient as manual labor.

“We set out to build a robot that can generate large material libraries so that we can build datasets for training AI models in the future,” Amassian says. Such an AI could then predict which perovskite structures will perform best.

The researchers focused on perovskites’ stability, which is a major challenge when it comes to tandem cells. Perovskites tend to degrade when exposed to light, losing the properties that made them desirable in the first place, Amassian explains.

The scientists analyzed perovskite structure, electronic properties, and stability in response to intense light using optical microscopy, microphotoluminescence spectroscopy mapping, and synchrotron-based wide-angle X-ray scattering mapping. This experimental data was then used to develop computational models that identified a specific composition that the researchers predicted would have the best combination of attributes.

“These models are now available for others to use,” Amassian says. He notes they are now in talks with leading tandem solar cell research groups.

Unexpectedly, the scientists found that RoboMapper’s greatest reduction in environmental impact came from improved energy efficiency during testing.

“We and others did not realize this, because electricity used by instruments in the lab is unseen, whereas materials and supplies are tangible,” Amassian says. “RoboMapper was designed in part to address this insidious problem by placing dozens of materials in the same measurement tools and significantly reducing the amount of time it needs to be powered on to collect data. We showed that tenfold reduction in carbon footprint and other negative environmental impacts can be achieved.”

In the future, “we will continue to search for newer and better perovskites,” Amassian says. “We’re also actively looking at organic solar-cell materials to find compositions that are stable for solar-energy applications. The ability to test dozens of compositions under intense simulated sunlight helps save tremendous time and energy.”

The scientists detailed their findings online 25 July in the journal Matter.



When Marc Raibert founded Boston Dynamics in 1992, he wasn’t even sure it was going to be a robotics company—he thought it might become a modeling and simulation company instead. Now, of course, Boston Dynamics is the authority in legged robots, with its Atlas biped and Spot quadruped. But as the company focuses more on commercializing its technology, Raibert has become more interested in pursuing the long-term vision of what robotics can be.

To that end, Raibert founded the Boston Dynamics AI Institute in August of 2022. Funded by Hyundai (the company also acquired Boston Dynamics in 2020), the Institute’s first few projects will focus on making robots useful outside the lab by teaching them to better understand the world around them.

At the 2023 IEEE International Conference on Robotics and Automation (ICRA) in London this past May, Raibert gave a keynote talk that discussed some of his specific goals, with an emphasis on developing practical, helpful capabilities in robots. For example, Raibert hopes to teach robots to watch humans perform tasks, understand what they’re seeing, and then do it themselves—or know when they don’t understand something, and how to ask questions to fill in those gaps. Another of Raibert’s goals is to teach robots to inspect equipment to figure out whether something is working—and if it’s not, to determine what’s wrong with it and make repairs. Raibert showed concept art at ICRA that included robots working in domestic environments such as kitchens, living rooms, and laundry rooms as well as industrial settings. “I look forward to having some demos of something like this happening at ICRA 2028 or 2029,” Raibert quipped.

Following his keynote, IEEE Spectrum spoke with Raibert, and he answered five questions about where he wants robotics to go next.

At the Institute, you’re starting to share your vision for the future of robotics more than you did at Boston Dynamics. Why is that?

Marc Raibert: At Boston Dynamics, I don’t think we talked about the vision. We just did the next thing, saw how it went, and then decided what to do after that. I was taught that when you wrote a paper or gave a presentation, you showed what you had accomplished. All that really mattered was the data in your paper. You could talk about what you want to do, but people talk about all kinds of things that way—the future is so cheap, and so variable. That’s not the same as showing what you did. And I took pride in showing what we actually did at Boston Dynamics.

But if you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision. So I’m starting to be a little more comfortable with doing that. Not to mention that at this point, we don’t have any actual results to show.

Right now, robots must be carefully trained to complete specific tasks. But Marc Raibert wants to give robots the ability to watch a human do a task, understand what’s happening, and then do the task themselves, whether it’s in a factory [top left and bottom] or in your home [top right and bottom]. Boston Dynamics AI Institute

The Institute will be putting a lot of effort into how robots can better manipulate objects. What’s the opportunity there?

Raibert: I think that for 50 years, people have been working on manipulation, and it hasn’t progressed enough. I’m not criticizing anybody, but I think that there’s been so much work on path planning, where path planning means how you move through open space. But that’s not where the action is. The action is when you’re in contact with things—we humans basically juggle with our hands when we’re manipulating, and I’ve seen very few things that look like that. It’s going to be hard, but maybe we can make progress on it. One idea is that going from static robot manipulation to dynamic can advance the field the way that going from static to dynamic advanced legged robots.

How are you going to make your vision happen?

Raibert: I don’t know any of the answers for how we’re going to do any of this! That’s the technical fearlessness—or maybe the technical foolishness. My long-term hope for the Institute is that most of the ideas don’t come from me, and that we succeed in hiring the kind of people who can have ideas that lead the field. We’re looking for people who are good at bracketing a problem, doing a quick pass at it (“quick” being maybe a year), seeing what sticks, and then taking another pass at it. And we’ll give them the resources they need to go after problems that way.

“If you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision.”

Are you concerned about how the public perception of robots, and especially of robots you have developed, is sometimes negative?

Raibert: The media can be over the top with stories about the fear of robots. I think that by and large, people really love robots. Or at least, a lot of people could love them, even though sometimes they’re afraid of them. But I think people just have to get to know robots, and at some point I’d like to open up an outreach center where people could interact with our robots in positive ways. We are actively working on that.

What do you find so interesting about dancing robots?

Raibert: I think there are a lot of opportunities for emotional expression by robots, and there’s a lot to be done that hasn’t been done. Right now, it’s labor-intensive to create these performances, and the robots are not perceiving anything. They’re just playing back the behaviors that we program. They should be listening to the music. They should be seeing who they’re dancing with, and coordinating with them. And I have to say, every time I think about that, I wonder if I’m getting soft because robots don’t have to be emotional, either on the giving side or on the receiving side. But somehow, it’s captivating.

Marc Raibert was a professor at Carnegie Mellon and MIT before founding Boston Dynamics in 1992. He now leads the Boston Dynamics AI Institute.

This article appears in the August 2023 print issue as “5 Questions for Marc Raibert.”



The lateral line system of zebrafish consists of the anterior lateral line, with neuromasts distributed on the head, and the posterior lateral line, with neuromasts distributed on the trunk. The sensory afferent neurons are contained in the anterior and posterior lateral line ganglia, respectively. So far, the vast majority of physiological and developmental studies have focused on the posterior lateral line, and studies of the anterior lateral line, especially of its physiology, are very rare. The anterior lateral line involves different neuromast patterning processes, a specific distribution of synapses, and a unique role in behavior. Here, we report our observations regarding the development of the lateral line and analyze the physiological responses of the anterior lateral line to mechanical and water jet stimuli. Sensing in the fish head may be crucial to avoid obstacles, catch prey, and orient in water current, especially in the absence of visual cues. Alongside the lateral line, the trigeminal system, with its fine nerve endings innervating the skin, could contribute to perceiving mechanosensory stimulation. We therefore compare the physiological responses of the lateral line afferent neurons with those of trigeminal neurons and with the responsiveness of auditory neurons. We show that anterior lateral line neurons are tuned to the velocity of mechanosensory ramp stimulation, while trigeminal neurons respond either only to mechanical step stimuli or to both fast ramp and step stimuli. Auditory neurons did not respond to mechanical or water jet stimuli. These results may prove essential in designing underwater robots and artificial lateral lines, with respect to the spectra of stimuli that the different mechanosensory systems in the larval head are tuned to, and they underline the importance and functionality of the anterior lateral line system in the larval fish head.
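
For readers interested in artificial lateral lines, the ramp-versus-step distinction in the abstract can be sketched with a few lines of code; the amplitudes, velocities, and sampling rate below are arbitrary placeholders, not the stimulus parameters used in the study.

```python
import numpy as np

def step_stimulus(amplitude_um: float, duration_s: float, fs_hz: float) -> np.ndarray:
    """A step: the deflection jumps to its target value and is held there."""
    return np.full(int(duration_s * fs_hz), amplitude_um)

def ramp_stimulus(amplitude_um: float, velocity_um_per_s: float,
                  hold_s: float, fs_hz: float) -> np.ndarray:
    """A ramp: the deflection rises at a fixed velocity, then is held at the target."""
    n_ramp = int((amplitude_um / velocity_um_per_s) * fs_hz)
    ramp = np.linspace(0.0, amplitude_um, n_ramp)
    hold = np.full(int(hold_s * fs_hz), amplitude_um)
    return np.concatenate([ramp, hold])

# Same final deflection delivered three ways (placeholder values):
fs = 1_000.0                                  # samples per second
step = step_stimulus(10.0, 1.0, fs)           # instantaneous onset
slow = ramp_stimulus(10.0, 5.0, 0.5, fs)      # 2.0 s rise time
fast = ramp_stimulus(10.0, 100.0, 0.5, fs)    # 0.1 s rise time
```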



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Two interesting things about this video: First, the “where’s the button” poke at 2:20, and second, the custom Spot-friendly wrench.

[ Boston Dynamics ]

This is one of the more interesting drone designs that I’ve seen recently, since it’s modular, and you can clip on wings and props. And somehow it just flies.

[ AIR Lab ]

This soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work. The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects.

[ UCSD ]

Thanks, Daniel!

Through this powerful collaboration with the U.S. Agency for International Development (USAID), we are proud to donate cutting-edge Skydio drones, complemented by 3D Scan technology and comprehensive professional training. These resources will aid the Office of the Prosecutor General to document the more than 115,000 instances of destroyed civilian infrastructure, and evidence of human rights abuses on frontline communities and liberated territories.

[ Skydio ]

Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting, it has remained an open problem. We present a deep learning pipeline consisting of a shape completion module that is based on a single depth image, and followed by a grasp predictor that is based on the predicted object shape.

[ DLR RM ]

This is a video announcing the opening of the MyoChallenge 23, part of the challenge track of the NeurIPS23 conference. This competition merges physiologically realistic musculoskeletal models and AI with the goal of creating controllers for locomotion and manipulation.

[ MyoChallenge ]

Thanks, Guillaume!

The new DJI Air 3 has a transmission range of 20 kilometers and a flight time of 46 minutes. Consumer drones have made a lot of progress in a pretty short time, haven’t they?

[ DJI ]

With [human driving’s track] record of nearly 43,000 deaths and 2.5 million injuries in the U.S. alone in 2021, we believe autonomous driving technology has the potential to save lives and improve mobility options for millions of people. The data to-date indicates that the Waymo Driver is reducing traffic injuries and fatalities in the places where we operate, and we aim to continue safely designing and deploying our Driver to help more people in more places.

Humans are bad drivers for sure, but according to expert Missy Cummings, as quoted by AP, “autonomous vehicles from Waymo, a spinoff of Google, are four times more likely than humans to crash.”

Watch Tanner Lecturers, Fei-Fei Li and Eric Horvitz, discuss the topics of AI and Human Values.

[ Stanford HAI ]

Tin Lun Lam writes, “in the last two months, we have organized a Lecture Series on Multi-robot Systems and invited eight world-renowned scholars to share their wisdom to help promote knowledge sharing and technological advancement in this field.” Here are two of the lectures, and you can find the other six at the link below.

[ Freeform Robotics ]

Thanks, Tin Lun!



The robust detection of GNSS non-line-of-sight (NLOS) signals is of vital importance for land- and close-to-land-based safe navigation applications. Using GNSS measurements affected by NLOS can lead to large, unbounded positioning errors and a loss of safety. Because of the complex signal conditions in urban environments, machine learning and artificial intelligence techniques have recently been identified as potential tools for classifying GNSS LOS/NLOS signals. The design of machine learning algorithms with GNSS features is an emerging field of research that must, however, be tackled carefully to avoid biased estimation results and to guarantee algorithms that generalize across different scenarios, receivers, antennas, and their specific installations and configurations. This work first provides new options to guarantee proper generalization of trained algorithms by pre-normalizing features with models extracted in open-sky (nominal) scenarios. The second main contribution is the design of a branched (or parallel) machine learning process that handles the intermittent presence of GNSS features in certain frequencies, making it possible to exploit measurements in all available frequencies rather than relying on a single frequency as current approaches in the literature do. Detection by means of logistic regression provides not only a binary LOS/NLOS decision but also an associated probability, which can be used in the future to weight specific measurements. The proposed branched logistic regression with pre-normalized multi-frequency features outperforms state-of-the-art algorithms, reaching 90% detection accuracy in the validation scenarios evaluated.
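
As a rough sketch of the branched approach described above, assuming scikit-learn and entirely synthetic per-frequency feature arrays (the frequencies, feature set, and open-sky normalization constants are placeholders): one logistic-regression branch is trained per GNSS frequency on pre-normalized features, and each branch outputs a LOS/NLOS probability that could later serve as a measurement weight.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic placeholder features per frequency (e.g. C/N0, elevation, residuals).
frequencies = ["L1", "L5"]
X = {f: rng.normal(size=(500, 3)) for f in frequencies}
y = {f: rng.integers(0, 2, size=500) for f in frequencies}   # 0 = LOS, 1 = NLOS

# Pre-normalization against open-sky (nominal) reference statistics.
open_sky_mean = {f: np.zeros(3) for f in frequencies}         # placeholder reference model
open_sky_std = {f: np.ones(3) for f in frequencies}

def normalize(f, features):
    return (features - open_sky_mean[f]) / open_sky_std[f]

# One branch per frequency, so measurements remain usable when a frequency is missing.
branches = {}
for f in frequencies:
    clf = LogisticRegression(max_iter=1000)
    clf.fit(normalize(f, X[f]), y[f])
    branches[f] = clf

# At run time, score whichever frequencies are present for a given satellite.
sample = {"L1": rng.normal(size=(1, 3))}    # L5 unavailable for this epoch
for f, feats in sample.items():
    p_nlos = branches[f].predict_proba(normalize(f, feats))[0, 1]
    print(f"{f}: P(NLOS) = {p_nlos:.2f}")
```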

Robots currently provide only a limited amount of information about their future movements to human collaborators. In human interaction, communication through gaze can be helpful by intuitively directing attention to specific targets. Whether and how this mechanism could benefit interaction with robots, and what a design of predictive robot eyes should look like in general, is not well understood. In a between-subjects design, four different types of eyes were therefore compared with regard to their attention-directing potential: a pair of arrows, human eyes, and two anthropomorphic robot eye designs. For this purpose, 39 subjects performed a novel, screen-based gaze cueing task in the laboratory. Participants’ attention was measured using manual responses and eye tracking. Information on the perception of the tested cues was provided through additional subjective measures. All eye models were overall easy to read and were able to direct participants’ attention. The anthropomorphic robot eyes were the most efficient at shifting participants’ attention, as revealed by faster manual and saccadic reaction times. In addition, a robot equipped with anthropomorphic eyes was perceived as being more competent. Abstract anthropomorphic robot eyes therefore seem to trigger a reflexive reallocation of attention. This points to a social and automatic processing of such artificial stimuli.

Deep-sea manganese nodules are abundant in the ocean, with high exploitation potential and commercial value, and have become mineral resources that coastal countries compete to develop. The pipeline-lifting mining system is the most promising deep-sea mining system at present, and the deep-sea mining vehicle is its core piece of equipment; mining quality and efficiency depend on the mining vehicle to a great extent. Based on the topographic and geomorphic characteristics of deep-sea manganese nodule fields on the ocean floor, this paper proposes a new deep-sea mining system built around an autonomous manganese nodule mining vehicle. A new mining method is proposed to match the seabed operating environment and functional requirements, and global traverse path planning for the autonomous mining vehicle is carried out on the basis of this method. An arc round-trip acquisition path planning method is put forward, and simulation results show that it effectively addresses the low efficiency of traversing acquisition and the need for obstacle avoidance.
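
The paper's arc round-trip method isn't detailed here, but a simple back-and-forth coverage sweep whose lanes are joined by semicircular arcs conveys the general idea; the lane count, lane length, spacing, and turn geometry below are invented for illustration and are not the authors' algorithm.

```python
import numpy as np

def arc_round_trip_path(n_lanes: int, lane_length_m: float,
                        lane_spacing_m: float, pts_per_arc: int = 20) -> np.ndarray:
    """Back-and-forth coverage lanes joined by semicircular turnaround arcs.

    Returns an (N, 2) array of x-y waypoints; all parameters are illustrative.
    """
    r = lane_spacing_m / 2.0
    pts = []
    for i in range(n_lanes):
        x = i * lane_spacing_m
        going_up = (i % 2 == 0)
        y_start, y_end = (0.0, lane_length_m) if going_up else (lane_length_m, 0.0)
        pts.append((x, y_start))
        pts.append((x, y_end))
        if i < n_lanes - 1:
            # Semicircular arc over the end of this lane into the start of the next one.
            cx, cy = x + r, y_end
            thetas = (np.linspace(np.pi, 0.0, pts_per_arc) if going_up
                      else np.linspace(np.pi, 2.0 * np.pi, pts_per_arc))
            pts.extend(zip(cx + r * np.cos(thetas), cy + r * np.sin(thetas)))
    return np.array(pts)

path = arc_round_trip_path(n_lanes=5, lane_length_m=100.0, lane_spacing_m=4.0)
print(path.shape)   # each row is an (x, y) waypoint
```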



It’s been just two years since Hangzhou, China–based Unitree introduced the Go1, a US $2,700 quadruped robot. And since then, the Go1 has had a huge influence over the small quadruped-research market, due to its unique combination of performance, accessibility, and being (as legged robots go) supercheap.

Unitree has just announced the Go2, a new version that manages to both be significantly better and super-duper-cheap—it’s faster and more agile and now even includes a lidar, but somehow costs just $1,600.

Okay, yes, some of the word choice in that video is slightly odd. But who cares, because that’s some very impressive, dynamic mobility at a shockingly low cost. The $1,600 base model, the Go2 Air, includes a chin-mounted 360- by 90-degree hemispherical lidar, which has a minimum sensing range of 0.05 meters for intelligent terrain navigation and obstacle avoidance. The Go2 can move at a brisk 2.5 meters per second with a 7-kilogram payload, and operates for up to 2 hours with an 8,000-milliamp-hour (mAh) battery. There’s even a graphical programming interface, if you have no idea what you’re doing but just want to mess around a little bit.

Like its predecessor, the Go2 is available in several different models. For $2,800, you get the Go2 Pro, with an additional kilo of payload capacity, an extra meter per second of speed, onboard compute, and 4G connectivity. It also comes with side-following, which is what will let the robot go for a jog alongside you. And if you need even more, the Go2 Edu (which you’ll have to contact Unitree about directly) boasts a peak speed of a blistering 5 m/s, has force sensors on its feet, and will run for up to 4 hours with a 15,000-mAh battery.

“Go2 was a huge project with many difficulties we had to overcome,” Unitree founder and CEO Xingxing Wang told IEEE Spectrum. “We have researched and developed almost every mechanical part and circuit board. Through continuously improving the design, we tried hard to improve its performance and quality as well as reduce costs, which required a lot of work and effort.”

We also asked Wang what has impressed him the most about how other people have used his robots. “We are very happy that many global institutions and companies use our quadruped robot in meaningful and innovative development,” he says. He points to a couple of his favorite examples, including CSIC using a Go1 as a robot guide dog that he hopes will have significant benefits for the visually impaired, and a recent paper in Science Robotics that uses a brain-inspired multimodal hybrid-neural network running on a Go1 for place recognition.

Lastly, we wanted to know whether all of this new footage of Go2 balancing on two legs means that Unitree might be taking an interest in bipeds sometime soon. “I think it’s cool that a quadruped robot can realize bipedal locomotion,” says Wang. “We may try to make a bipedal robot on the basis of a quadruped robot.” Yeah, sign us up for that.



As the automotive industry navigates a new era of self-driving cars, every second matters. Information from sensors and electronics must reach the main CPU as quickly as possible, but faster data rates impact signals. Coping with data loss is imperative for safety. Validating receiver operation in a car’s noisy environment in both ideal and stressed conditions improves in-vehicle network (IVN) performance. Delve into the automotive trends driving focus on receiver testing, understand the implications of not testing, and learn how to prepare for receiver testing and validate its performance at the physical layer in this white paper.

Download this free whitepaper now!



As the responses of chat dialogue systems have become more natural, the empathy skill of dialogue systems has become an important new issue. In text-based chat dialogue systems, the definition of empathy is not precise, and how to design the kind of utterance that improves the user’s impression of receiving empathy is not clear, since the main method used is to imitate utterances and dialogues that humans consider empathetic. In this study, we focus on the necessity of grasping an agent as an experienceable Other, which is considered the most important factor when empathy is performed by an agent, and we propose an utterance design that directly conveys, through text, the fact that the agent can experience things and feel empathy. Our system has an experience database, including the system’s pseudo-experiences and feelings, to show empathetic feelings. The system understands the user’s experiences and empathizes with the user on the basis of this experience database, in line with the dialogue content. After developing and evaluating several systems with different ways of conveying the aforementioned rationale, we found that conveying the rationale as a hearsay experience improved the user’s impression of receiving empathy more than conveying it as the system’s own experience. Moreover, an exhaustive evaluation shows that our empathetic utterance design using hearsay experience is effective in improving the user’s impression of the system’s cognitive empathy.
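
A minimal sketch of the hearsay-experience idea described above, using a toy in-memory experience database and simple keyword matching; the entries, matching rule, and phrasing template are all hypothetical, not the study's implementation.

```python
# Toy experience database: topic keyword -> (pseudo-experience, feeling).
EXPERIENCE_DB = {
    "exam": ("stayed up all night studying and still felt unprepared", "anxious"),
    "travel": ("missed a connecting train on a long trip", "frustrated"),
    "promotion": ("was recognized at work after a difficult project", "delighted"),
}

def empathize_hearsay(user_utterance: str) -> str:
    """Reply with empathy grounded in a hearsay-framed pseudo-experience."""
    for topic, (experience, feeling) in EXPERIENCE_DB.items():
        if topic in user_utterance.lower():
            # Hearsay framing: the experience is attributed to someone the agent
            # heard from, which the study found improved the impression of empathy.
            return (f"I heard from someone who {experience}, and they felt really {feeling}. "
                    f"It sounds like you might be feeling something similar.")
    return "That sounds like a lot to deal with. How are you feeling about it?"

print(empathize_hearsay("I have a big exam tomorrow and I'm not ready."))
```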

Optical colonoscopy is the gold standard procedure to detect colorectal cancer, the fourth most common cancer in the United Kingdom. As many as 22 to 28 percent of polyps can be missed during the procedure, a shortfall associated with interval cancer. A vision-based autonomous soft endorobot for colonoscopy could drastically improve the accuracy of the procedure by inspecting the colon more systematically with reduced discomfort. A three-dimensional understanding of the environment is essential for robot navigation and can also improve the adenoma detection rate. Monocular depth estimation with deep learning methods has progressed substantially, but collecting ground-truth depth maps remains a challenge, as no 3D camera can be fitted to a standard colonoscope. This work addresses the issue by using a self-supervised monocular depth estimation model that learns depth directly from video sequences through view synthesis. In addition, our model accommodates the wide field-of-view cameras typically used in colonoscopy and specific challenges such as deformable surfaces, specular lighting, non-Lambertian surfaces, and high occlusion. We performed a qualitative analysis on a synthetic data set, a quantitative examination of the colonoscopy training model, and an evaluation on real colonoscopy videos in near real time.
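
A minimal sketch of the view-synthesis idea behind self-supervised depth training, in PyTorch; the tensors, intrinsics, and pose are placeholders, and this omits the structural-similarity term, masking, and the wide field-of-view distortion handling the abstract mentions.

```python
import torch
import torch.nn.functional as F

def view_synthesis_loss(target, source, depth, K, T_target_to_source):
    """Photometric (L1) loss between the target frame and the source frame
    warped into the target view using predicted depth and relative pose.

    target, source: (B, 3, H, W); depth: (B, 1, H, W); K: (B, 3, 3);
    T_target_to_source: (B, 4, 4). Simplified sketch only.
    """
    B, _, H, W = target.shape
    device, dtype = target.device, target.dtype

    # Pixel grid in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(torch.arange(H, device=device, dtype=dtype),
                            torch.arange(W, device=device, dtype=dtype),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(1, 3, -1).expand(B, -1, -1)

    # Back-project to 3D points in the target camera, then move to the source frame.
    cam_points = torch.linalg.inv(K) @ pix * depth.reshape(B, 1, -1)
    cam_points_h = torch.cat([cam_points, torch.ones(B, 1, H * W, device=device, dtype=dtype)], dim=1)
    src_points = (T_target_to_source @ cam_points_h)[:, :3, :]

    # Project into the source image and normalize to [-1, 1] for grid_sample.
    proj = K @ src_points
    uv = proj[:, :2, :] / proj[:, 2:3, :].clamp(min=1e-6)
    u = 2.0 * uv[:, 0, :] / (W - 1) - 1.0
    v = 2.0 * uv[:, 1, :] / (H - 1) - 1.0
    grid = torch.stack([u, v], dim=-1).reshape(B, H, W, 2)

    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()
```

In a full training setup, a loss of this kind would be minimized jointly over a depth network and a pose network, with the warp applied between adjacent video frames.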

The development of soft robotic hand exoskeletons for rehabilitation has been well-reported in the literature, whereby the emphasis was placed on the development of soft actuators for flexion and extension. Little attention was focused on developing the glove interface and attachments of actuators to the hand. As these hand exoskeletons are largely developed for personnel with impaired hand function for rehabilitation, it may be tedious to aid the patients in donning and doffing the glove, given that patients usually have stiff fingers exhibiting high muscle tone. To address this issue, a hybrid securing actuator was developed and powered pneumatically to allow for rapid securing and release of a body segment. As a proof of concept, the actuator was further adapted into a self-securing glove mechanism and assembled into a complete self-securing soft robotic hand exoskeleton with the attachment of bidirectional actuators. Our validation tests show that the self-wearing soft robotic hand exoskeleton can easily conform and secure onto the human hand and assist with manipulation tasks.



Everybody likes watching robots fall over. We get it, it’s funny. And we here at IEEE Spectrum are as guilty as anyone of making it a thing: Our compilation of robots falling down at the DARPA Robotics Challenge eight years ago has several million views on YouTube. But a couple of months ago, Agility Robotics shared a video of one of its Digit robots collapsing while stacking boxes during the ProMat trade show, which went nuts across Twitter, TikTok, and Instagram. Agility eventually issued a statement to the Associated Press clarifying that Digit didn’t deactivate itself due to the nature of the work, which is how some viewers had interpreted the viral clip.

Agility isn’t the only robotics company to share its failures with an online audience. Boston Dynamics, developer of the Spot and Atlas robots, may have been the first company to be accused of “robot abuse” because of its videos, and the company frequently includes footage of its research robots being unsuccessful as well as successful on YouTube. And now that there are 1,100 Spots out in the world being useful, falls happen both more frequently and more visibly.

Even though falling robots aren’t a new thing, what may be a new(ish) thing are some technological advances that have changed the nature of falling. First, both Boston Dynamics and Agility Robotics have human-scale bipedal robots for which not falling seems pretty normal. This is a relatively recent development. Although a number of companies are working on humanoids, the Agility and Boston Dynamics humanoids are (as far as we are aware) the only ones that can routinely handle untethered dynamic walking.

“Sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do.”
—Jonathan Hurst, Agility Robotics

The other important advance is that these humanoid robots are usually able to fall without destroying themselves. During the DARPA Robotics Challenge in 2015, falling generally meant doom for the competitors, with one exception: Carnegie Mellon University’s CHIMP, which was built like a literal tank. Since then, roboticists have tried adding things like armor and airbags to keep a falling robot in one piece. But now, these robots can fall with minimal drama and get back up again. If they do suffer damage, they can be easily fixed.

And yet, even though falling has become much less of a big deal for the roboticists, it’s still a big deal for the general public, as these viral videos of robots falling down prove. We recently spoke with Agility Robotics’ Chief Robot Officer Jonathan Hurst and Head of Customer Experience Bambi Brewer, as well as Boston Dynamics CTO Aaron Saunders to understand why that is, and whether they think things are likely to change anytime soon.

Why do you think people react so strongly to seeing robots fall over, especially bipedal robots?

Jonathan Hurst: People post funny videos of pets or kids, making some expression or having a reaction that you can identify with. It’s even funnier when it’s a robot that wouldn’t typically do that. And so when Digit [at ProMat] seems to be just like, “I’m so tired of doing this work” and falls down, people are like, “I understand you, robot!” But [seeing robots behave that way] is going to become more common, and when people see this and it becomes just a regular part of their experience, the novelty will wear off.

Bambi Brewer: People who make robots spend a lot of time trying to present them at their best. The way robots move does seem very repetitive, very scripted. I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.

“People perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.”
—Aaron Saunders, Boston Dynamics

How different is falling for robots than for humans?

Hurst: The way I think about the robot right now is like a two-and-a-half-year-old child. They fall more often than adults do, and it’s not terribly concerning. Sometimes they skin their knee. And sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do. Physics is still true, though, and so it’s probably going to be on the same order of magnitude as how often people fall. It won’t be rare.

When you think about this ‘physics is true’ thing—that’s actually where robots will be able to have superhuman capabilities. A robot is going to be close to human strength and close to human speed, but you can take much bigger risks with a robot because you don’t really care that much if you break something.

Fundamentally, I don’t care if the robot breaks. I mean, I care a little bit, but I care a lot if any of our employees were to fall.

Do you think that humanoid robots falling in nonhuman ways might be part of why people react so strongly to these videos?

Aaron Saunders: We have a massive metal frame around the front of Atlas. It’s okay if it face-plants. It tucks its limbs in to protect them and other parts of the robot. A human would do the opposite—we put our limbs out and try to protect our heads. Robots can handle certain types of impacts and forces better than humans can. We have a lot of conversations around how people perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.

“I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.”
—Bambi Brewer, Agility Robotics

How normal is it for your robot to fall?

Saunders: Almost everything we do on Atlas is about pushing some limit. We don’t shy away from falling, because staying in a safe place means leaving a lot on the table in terms of understanding the performance of the machine and how to solve problems. In our development work, it falls all the time, both because we’re pushing it and because there’s very little risk or hazard—we’re not delivering Atlas out into the world.

On a long flat sidewalk, I don’t think Atlas would fall in a statistically relevant way. People think back to the video of robots falling all over the place at the DARPA Robotics Challenge, and that’s not the type of falling we worry about now.

For Spot, falling can be more of a risk, because it is out in the world. On a weekly basis, our internal fleet of Spots are walking about 2,000 kilometers, and we also have them in these test cells where they’re walking on rocks, on grates, over obstacles, and on slippery floors. We want to robustly test all of this stuff and try to drive those cases of falling down to their minimums.

“If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around them when they fall.”
—Jonathan Hurst, Agility Robotics

How big of a deal is it for your robot to fall?

Hurst: Digit was designed to fall. That’s one of the reasons that it has arms—to be able to survive a fall. When we were first designing the robot, we said, okay, at some point the robot’s going to fall, how can we protect it? We calculated how much padding we would need to minimize the acceleration on the electronic components. It turned out that we would have needed several inches of padding, and Digit would have ended up looking like the Michelin Man.

The only realistic way to have Digit safely decelerate was to have an appendage that’s going to stick out and absorb that fall. And where is the best place to locate that appendage? You get the same answer as you do when you think about inertial actuation and bimanual manipulation. Digit’s arms are where they are not because we’re trying to build a humanoid, but because we’re trying to solve locomotion and manipulation challenges and to make sure that we can catch the robot when it falls.
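
As a rough illustration of the padding arithmetic Hurst describes (the numbers below are illustrative assumptions, not Agility’s actual figures): a component falling from height h hits the ground at v = sqrt(2gh), and keeping its peak deceleration under a limit a_max requires a crush distance of at least d = v^2 / (2 a_max), assuming constant deceleration through the padding.

import math

G = 9.81  # gravitational acceleration, m/s^2

def min_padding_thickness(fall_height_m, max_decel_g):
    """Minimum crush distance (m) that keeps peak deceleration under max_decel_g,
    assuming constant deceleration through the padding."""
    impact_speed = math.sqrt(2 * G * fall_height_m)   # v = sqrt(2gh)
    return impact_speed ** 2 / (2 * max_decel_g * G)  # d = v^2 / (2 * a_max)

# Hypothetical figures: electronics mounted about 1.2 m up, rated for ~10 g of shock.
d = min_padding_thickness(fall_height_m=1.2, max_decel_g=10.0)
print(f"padding needed: {d * 100:.0f} cm ({d / 0.0254:.1f} in)")  # ~12 cm, roughly 5 inches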

Was there a point during the development of your robot where falling went from normal to unusual?

Saunders: The thing that really took us from worrying about normal walking to feeling pretty good about normal walking is when we pushed aggressively into things that went way beyond walking.

To jump and land successfully, we needed to develop control algorithms that could accommodate all of the mass and the dynamics of the robot. It was no longer about carefully picking where you put your foot for each step; it was about coordinating all of that moving mass in a really robust way. So when Atlas started jumping and doing parkour, it made walking easier too. A few weeks ago, we had a new team member go back and apply some of the latest control algorithms that we’re using for parkour to our standing algorithm. With those new algorithms we saw big improvements in the robot’s ability to handle disturbances from a stand—if somebody shoves the robot, this new controller is able to think and reason about all of its dynamics, resulting in massive gains in how Atlas reacts.

“We need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.”
—Bambi Brewer, Agility Robotics

At this point, how much is falling just an “oops,” and how much is it a learning opportunity?

Hurst: We’re always looking for bugs that we can iron out. Digit’s collapse at ProMat was one. In this scenario, there really should not have been an emergency stop.

Brewer: Falls are points at which somebody is filing a bug card, or looking through the logs. They’re trying to figure out what happened, and how to make sure it doesn’t happen again. At ProMat, there was something wrong with an encoder in the arm. It’s been updated now. It was a bug that hadn’t occurred before. Now if that happens, the robot’s arm will freeze, but the robot will remain upright.

Saunders: On Spot, I think there are relatively few learning opportunities these days. We know pretty well what Spot’s capable of, in what situations a fall might occur, what the robot is likely to do in those situations, and how it’s going to recover. We designed Spot to be able to fall robustly and not break, and to get up from falls. Obviously, there are some extreme cases—one of our industrial customers had a need for Spot to cross a soapy floor, which is about as close as you can get to walking on ice, a challenge for anything with legs. So our control team set up a slippery environment in our lab, using cooking oil on plastic, and then just started “robustifying.” They figured out how to detect slips and adapt the gait of the robot, and went from a situation where falling was regular to one where falling was infrequent.

For Atlas, generally the falling state happens after the part that we care about. What we’re learning there is what went wrong right before the fall. If we’re working on one of Atlas’s aerial tricks—say, something that we’ve never landed before—then of course we’re doing a ton of work to figure out why falls happen. But if we’re just walking around the lab, and there was some misstep, I don’t think people stress out too much, and we just stand it back up and reset it and go again.
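
The slip handling Saunders describes for Spot amounts to detecting that a planted foot is sliding and then trading speed for stability. A minimal sketch of that loop, with invented thresholds, parameter names, and a hypothetical controller interface rather than Boston Dynamics’ actual software, might look like this:

# Illustrative slip-aware gait adaptation; all names and numbers are assumptions.
SLIP_SPEED_THRESHOLD = 0.05  # m/s: a planted foot should be (nearly) stationary

def slip_aware_gait_update(controller, estimate_foot_speed, stance_feet):
    """One control tick: detect slipping stance feet, then pick gait parameters."""
    slipping = any(estimate_foot_speed(foot) > SLIP_SPEED_THRESHOLD for foot in stance_feet)
    if slipping:
        # Trade speed for stability: shorter, slower steps and a lower body height.
        controller.set_gait(stride_m=0.15, step_period_s=0.8, body_height_m=0.45)
    else:
        controller.set_gait(stride_m=0.30, step_period_s=0.5, body_height_m=0.52)
    return slipping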

“Robots should be able to fall. We should give them a break when they do.”
—Aaron Saunders, Boston Dynamics

We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time. Our robot falls a lot, and one of the things we decided a long time ago was that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.

If a robot knows that it’s about to fall, what can it do to protect itself, and protect people around it?

Hurst: There are strategies when you know you’re about to fall. If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around them when they fall.

Brewer: In addition to the robot falling safely, we need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.

Hurst: The other thing is to try to fall sooner rather than later. If you’re not sure whether you can stay balanced, you might end up taking a step to try to correct, and then another step, and then maybe you’re moving in a direction that’s not all that controlled. So when it starts to lose its balance, we can tell the robot, “Just fall. You’ll get back up.”

Saunders: We have these detections inside of our control system that trigger when the robot starts doing something that the controller didn’t ask it to do. Maybe the velocity is starting to do something unexpected, or the robot is at an angle it isn’t supposed to be at. If that makes us think that a fall might be happening, we’ll run a different controller to try to stop it from falling—Atlas might decide to swing its arms, or move its upper body, or throw its leg out. And if that fails, there’s another control layer for when the robot is really falling. That last layer sets the robot’s pose and joint stiffnesses to basically ensure that it will do minimal damage to itself and the world. How exactly we do this is different for each robot and for each type of fall. If you comb through videos of Atlas, you might see the robot tucking itself up into a little bit of a ball—that’s a shape and a set of joint stiffnesses that help it mitigate impacts, and also help protect things around it.

Sometimes, though, these falls happen because the robot catastrophically breaks. With Atlas, we definitely have instances where we break the foot off. And at that point, I don’t have good answers.
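
The layered response Saunders outlines (a trigger when the robot deviates from what the controller commanded, a recovery controller that tries to arrest the fall, and a last-resort layer that sets a protective pose and joint stiffnesses) maps naturally onto a small state machine. The sketch below is a hedged illustration with invented states, thresholds, and method names, not Atlas’s software:

from enum import Enum, auto

class FallState(Enum):
    NOMINAL = auto()     # tracking the commanded behavior
    RECOVERING = auto()  # deviation detected: try to arrest the fall
    FALLING = auto()     # recovery failed: minimize damage to robot and surroundings

# Illustrative thresholds (rad, rad/s); a real system would tune these per behavior.
TILT_TRIGGER = 0.35
TILT_UNRECOVERABLE = 0.80
RATE_TRIGGER = 1.5

def update_fall_state(state, tilt, tilt_rate):
    """Advance the fall-mitigation state machine by one control tick."""
    if state is FallState.NOMINAL and (tilt > TILT_TRIGGER or tilt_rate > RATE_TRIGGER):
        return FallState.RECOVERING          # hand control to the recovery layer
    if state is FallState.RECOVERING:
        if tilt > TILT_UNRECOVERABLE:
            return FallState.FALLING         # give up gracefully: tuck and soften
        if tilt < TILT_TRIGGER and tilt_rate < RATE_TRIGGER:
            return FallState.NOMINAL         # disturbance rejected
    return state

def act(state, robot):
    """Dispatch to the controller for the current layer (robot methods are hypothetical)."""
    if state is FallState.RECOVERING:
        robot.run_recovery()                 # swing arms, move the upper body, step out
    elif state is FallState.FALLING:
        robot.set_protective_pose()          # tuck limbs, lower joint stiffnesses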

The next time a video of a humanoid robot falling over goes viral, whether it’s your robot or someone else’s, what is one thing you’d like people watching that video to know?

Hurst: If Digit falls, I think it’d be great for people to know that the reaction from the engineers who built that robot would not be, “our robot fell over and we didn’t expect that!” It would just be a shrug.

Brewer: I’d like people to know that when a robot is actually out in the world doing real things, unexpected things are going to happen. You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.

Saunders: I think people should recognize that it’s normal for equipment to sometimes fail. Equipment can be fixed, equipment can be improved, and over time, equipment gets more and more reliable. And so, when people see these failures, it may be a situation that the robot has never experienced. They should know that we are gathering all that information and that we’re continuously improving and iterating, and that what they’re seeing now doesn’t represent the end state. It just represents where the technology is today.

I also think that there has to be some balance between our expectations for what robots can do, and the process for getting them to do it. People will come to me and they’ll want a robot that can do amazing things that robots don’t do yet, but they’re very nervous if a robot fails. If we want our robots to do amazing things and enrich our lives and be our tools in the workforce, we’re going to need to build those capabilities over time, because this is emerging technology, not established technology.

Robots should be able to fall. We should give them a break when they do. It’s okay if we laugh at them. But we should also work hard to make our products safe and reliable and things that we can trust, because if we don’t trust our robots, we won’t use them.
