Feed aggregator

When a natural disaster strikes, first responders must move quickly to search for survivors. To support the search-and-rescue efforts, one group of innovators in Europe has succeeded in harnessing the power of drones, AI, and smartphones, all in one novel combination.

Their idea is to use a single drone as a moving cellular base station, which can do large sweeps over disaster areas and locate survivors using signals from their phones. AI helps the drone methodically survey the area and even estimate the trajectory of survivors who are moving.

The team built its platform, called Search-And-Rescue DrOne based solution (SARDO), using off-the-shelf hardware and tested it in field experiments and simulations. They describe the results in a study published 13 January in IEEE Transactions on Mobile Computing.

“We built SARDO to provide first responders with an all-in-one victims localization system capable of working in the aftermath of a disaster without existing network infrastructure support,” explains Antonio Albanese, a Research Associate at NEC Laboratories Europe GmbH, which is headquartered in Heidelberg, Germany.

The point is that a natural disaster may knock out cell towers along with other infrastructure. SARDO, which is equipped with a lightweight cellular base station, is a mobile solution that could be deployed regardless of what infrastructure remains after a natural disaster.

To detect and map out the locations of victims, SARDO performs time-of-flight measurements (using the timing of signals emitted by the users’ phones to estimate distance). 

A machine learning algorithm is then applied to the time-of-flight measurements to calculate the positions of victims. The algorithm compensates for when signals are blocked by rubble.
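
To make the idea concrete, here is a minimal sketch of how ranges derived from time-of-flight measurements taken at several drone waypoints can be combined into a position estimate by least-squares multilateration. It is only an illustration: SARDO's actual machine-learning localization pipeline is described in the paper, and the waypoints, noise level, and round-trip timing assumption below are hypothetical.

```python
# Illustrative sketch only: SARDO's ML-based localization is described in the paper.
# This shows the basic idea of multilateration from time-of-flight ranges.
import numpy as np
from scipy.optimize import least_squares

C = 3e8  # speed of light, m/s

def ranges_from_tof(tof_seconds):
    """Convert round-trip time-of-flight measurements to distances in meters (assumption)."""
    return C * np.asarray(tof_seconds) / 2.0

def locate_phone(drone_positions, distances):
    """Least-squares estimate of a phone's 2D position from ranges measured at several waypoints."""
    drone_positions = np.asarray(drone_positions, dtype=float)

    def residuals(p):
        return np.linalg.norm(drone_positions - p, axis=1) - distances

    guess = drone_positions.mean(axis=0)
    return least_squares(residuals, guess).x

# Hypothetical example: four drone waypoints and noisy ranges to a phone at (40, 25) meters.
waypoints = [(0, 0), (100, 0), (100, 100), (0, 100)]
true_pos = np.array([40.0, 25.0])
dists = np.linalg.norm(np.array(waypoints) - true_pos, axis=1) + np.random.normal(0, 2, 4)
print(locate_phone(waypoints, dists))
```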

If a victim is on the move in the wake of a disaster, a second machine learning algorithm, tasked with estimating the person’s trajectory based on their current movement, kicks in—potentially helping first responders locate the person sooner.   

After sweeping an area, the drone is programmed to automatically maneuver closer to the position of a suspected victim to retrieve more accurate distance measurements. If too many errors are interfering with the drone’s ability to locate victims, it’s programmed to enlarge the scanning area.

In their study, Albanese and his colleagues tested SARDO in several field experiments without rubble, and used simulations to test the approach in a scenario where rubble interfered with some signals. In the field experiments, the drone was able to pinpoint the location of missing people to within a few tens of meters, requiring approximately three minutes to locate each victim (within a field roughly 200 meters squared). As would be expected, SARDO was less accurate when rubble was present or when the drone was flying at higher speeds or altitudes.

Albanese notes that one limitation of SARDO, as with any drone-based approach, is the battery life of the drone. But, he says, the energy consumption of the NEC team’s design remains relatively low.

The group is consulting the laboratory’s business experts on the possibility of commercializing this tech.  Says Albanese: “There is interest, especially from the public safety divisions, but still no final decision has been taken.”

In the meantime, SARDO may undergo further advances. “We plan to extend SARDO to emergency indoor localization so [it is] capable of working in any emergency scenario where buildings might not be accessible [to human rescuers],” says Albanese.

The purpose of this work is to optimize the rigid or compliant behavior of a new type of parallel-actuated robot architecture developed for exoskeleton robot applications. This is done in an effort to provide those who use the architecture with the means to maximize, minimize, or simply adjust its stiffness property so as to optimize it for particular tasks, such as augmented lifting or impact absorption. This research also provides the means to produce non-homogeneous stiffness properties for applications that may require non-homogeneous dynamic behavior. In this work, the new architecture is demonstrated in the form of a shoulder exoskeleton. An analytical stiffness model for the shoulder exoskeleton is created and validated experimentally. The model is then used, along with a method of bounded nonlinear multi-objective optimization, to configure the parallel substructures for desired rigidity, compliance, or non-homogeneous stiffness behavior. The stiffness model and its optimization can be applied beyond the shoulder to any embodiment of the new parallel architecture, including hip, wrist, and ankle robot applications. To exemplify this, we present the rigidity optimization for a theoretical hip exoskeleton.
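
As a rough illustration of the kind of bounded optimization the abstract describes, the sketch below fits substructure parameters to a target stiffness matrix with SciPy. The stiffness_model function, the target values, and the bounds are placeholders rather than the paper's analytical model, and the scalarized objective is a stand-in for the paper's multi-objective formulation.

```python
# Generic sketch of bounded stiffness matching; the paper's analytical stiffness model
# is not reproduced here, so stiffness_model is a diagonal placeholder.
import numpy as np
from scipy.optimize import minimize

def stiffness_model(params):
    """Placeholder mapping from substructure parameters to a 3x3 stiffness matrix."""
    k1, k2, k3 = params
    return np.diag([k1, k2, k3])  # a real model would couple the terms

K_target = np.diag([800.0, 400.0, 200.0])  # desired (possibly non-homogeneous) stiffness

def objective(params):
    # Scalarized stand-in for the multi-objective formulation in the paper.
    return np.linalg.norm(stiffness_model(params) - K_target, ord="fro")

bounds = [(50.0, 1000.0)] * 3  # hypothetical physical limits on each parameter
result = minimize(objective, x0=[500.0, 500.0, 500.0], bounds=bounds)
print(result.x)
```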

At a press conference this afternoon, NASA released a new video showing, in real-time and full color, the entire descent and landing of the Perseverance Mars rover. The video begins with the deployment of the parachute, and ends with the Skycrane cutting the rover free and flying away. It’s the most mind-blowing three minutes of video I have ever seen. 

Image: NASA/JPL. The cameras that recorded video during the Mars 2020 rover’s landing on Mars.

Some very quick context: during landing, multiple cameras were recording the event, and this video is a combination of these. No audio was recorded, so you’re hearing a feed from JPL mission control.

Here’s the video:

We’ll have a lot more on the Perseverance rover, but for now, we’re just going to let this video sink in.

[ Mars 2020 ]

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from any of the thousands of abandoned coal mines near the country’s surface. It’s also a threat to transport, such as road and rail. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and that still might not be enough to chart a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well-mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its space, but the drone was also able to plot its own course in an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.

The current pandemic has highlighted the need for rapid construction of structures to treat patients and ensure manufacturing of health care products such as vaccines. To achieve this, rapid transportation of construction materials from the staging area to the deposition site is needed. In the future, this could be achieved through automated construction sites that make use of robots. Toward this, in this paper a cable-driven parallel manipulator (CDPM) is designed and built to balance a highly unstable load, a ball-plate system. The system consists of eight cables attached to the end-effector plate that can be extended or retracted to actuate movement of the plate. The hardware for the system was designed and built utilizing modern manufacturing processes. A camera system was designed using image recognition to identify the ball pose on the plate. The hardware was used to inform the development of a control system consisting of a reinforcement-learning-trained neural network controller that outputs the desired platform response. A nested PID controller for each motor attached to each cable was used to realize the desired response. For the neural network controller, three different model structures were compared to assess the impact of varying model complexity. It was seen that less complex structures resulted in a slower, less flexible response, while more complex structures output a high-frequency oscillation of the actuation signal, resulting in an unresponsive system. It was concluded that the system showed promise for future development, with the potential to improve on the state of the art.
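
The abstract mentions a nested PID loop for each cable motor underneath the learned controller. As a rough sketch of that inner layer only (not the authors' implementation; the gains, time step, and cable-length values are hypothetical), one PID controller per cable might look like this:

```python
# Minimal discrete PID sketch for one cable motor; a generic illustration, not the
# authors' controller. Gains, time step, and the length values are hypothetical.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per cable; the outer (learned) controller would supply the desired
# cable lengths at each control step.
controllers = [PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01) for _ in range(8)]
desired_lengths = [1.00] * 8   # from the outer controller (hypothetical, meters)
measured_lengths = [1.02] * 8  # from motor encoders (hypothetical, meters)
commands = [c.update(d, m) for c, d, m in zip(controllers, desired_lengths, measured_lengths)]
print(commands)
```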

This paper proposes an underactuated gripper mechanism that grasps and pulls in different types of objects. These two movements are generated by only a single actuator, whereas two independent actuators are used in conventional grippers. To demonstrate this principle, we have developed two kinds of grippers with different driving systems: one is driven by a DC motor with planetary gear reducers and the other is driven by pneumatic actuators with branch tubes as a differential. The pulling-in mechanism is achieved by a belt-driven finger surface in the former and by a linear slider with an air cylinder in the latter. The motor-driven gripper with planetary gear reducers can pull up the object after grasping. However, the object tends to fall when placing because the gripper opens the fingers before pushing out the object during the reversed movement. In addition, the closing speed and the picking-up speed of the fingers are slow due to the high reduction gear. To solve these drawbacks, a new pneumatic gripper combining three kinds of valves (a speed control valve, a relief valve, and non-return valves) is proposed. The proposed pneumatic gripper is superior in the sense that it can perform pulling-up after grasping the object and opening the fingers after pushing out the object. In the present paper, a design methodology for the different underactuated grippers that can not only grasp but also pull up objects is discussed. Then, to examine the performance of the grippers, experiments were conducted using various objects with different rigidities, shapes, sizes, and masses, which may be potentially encountered in real applications.

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Generally, when people talk about soft robots, the robots are only mostly soft. There are some components that are very difficult to make soft, including pressure sources and the necessary electronics to direct that pressure between different soft actuators in a way that can be used for propulsion. What’s really cool about this robot is that researchers have managed to take a pressure source (either a single tether or an onboard CO2 cartridge) and direct it to four different legs, each with three different air chambers, using an oscillating three valve circuit made entirely of soft materials. 

Photo: UCSD. The pneumatic circuit that powers and controls the soft quadruped.

The inspiration for this can be found in biology—natural organisms, including quadrupeds, use nervous system components called central pattern generators (CPGs) to prompt repetitive motions with limbs that are used for walking, flying, and swimming. This is obviously more complicated in some organisms than in others, and is typically mediated by sensory feedback, but the underlying structure of a CPG is basically just a repeating circuit that drives muscles in sequence to produce a stable, continuous gait. In this case, we’ve got pneumatic muscles being driven in opposing pairs, resulting in a diagonal couplet gait, where diagonally opposed limbs rotate forwards and backwards at the same time.

Diagram: Science Robotics

(J) Pneumatic logic circuit for rhythmic leg motion. A constant positive pressure source (P+) applied to three inverter components causes a high-pressure state to propagate around the circuit, with a delay at each inverter. While the input to one inverter is high, the attached actuator (i.e., A1, A2, or A3) is inflated. This sequence of high-pressure states causes each pair of legs of the robot to rotate in a direction determined by the pneumatic connections. (K) By reversing the sequence of activation of the pneumatic oscillator circuit, the attached actuators inflate in a new sequence (A1, A3, and A2), causing (L) the legs of the robot to rotate in reverse. (M) Schematic bottom view of the robot with the directions of leg motions indicated for forward walking.

Diagram: Science Robotics

Each of the valves acts as an inverter by switching the normally closed half (top) to open and the normally open half (bottom) to closed.

The circuit itself is made up of three bistable pneumatic valves connected by tubing that acts as a delay by providing resistance to the gas moving through it that can be adjusted by altering the tube’s length and inner diameter. Within the circuit, the movement of the pressurized gas acts as both a source of energy and as a signal, since wherever the pressure is in the circuit is where the legs are moving. The simplest circuit uses only three valves, and can keep the robot walking in one single direction, but more valves can add more complex leg control options. For example, the researchers were able to use seven valves to tune the phase offset of the gait, and even just one additional valve (albeit of a slightly more complex design) could enable reversal of the system, causing the robot to walk backwards in response to input from a soft sensor. And with another complex valve, a manual (tethered) controller could be used for omnidirectional movement.
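
A toy way to picture the oscillator described above is as a single high-pressure state stepping around three inverter stages, dwelling at each stage for a delay set by the tubing. The sketch below is only an abstraction of that behavior; the dwell time and step counts are arbitrary illustrative values, and reversing the activation order stands in for the extra reversing valve.

```python
# Toy abstraction of the pneumatic oscillator: a single high-pressure state propagates
# around three inverter stages, dwelling at each for a tubing-dependent delay.
# Delay and step counts are illustrative values, not measurements from the paper.
DELAY_STEPS = 5   # dwell time per stage; set by tube length and diameter in hardware
N_STEPS = 30

def active_actuator(t, reverse=False):
    """Return which actuator (A1, A2, A3) is inflated at time step t."""
    stage = (t // DELAY_STEPS) % 3
    order = ("A1", "A3", "A2") if reverse else ("A1", "A2", "A3")
    return order[stage]

for t in range(N_STEPS):
    print(t, active_actuator(t))  # forward gait sequence: A1, A2, A3, A1, ...

# Reversing the activation order (the role of the extra valve described above)
# reverses the gait: active_actuator(t, reverse=True) yields A1, A3, A2, ...
```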

This work has some similarities to the rover that JPL is developing to explore Venus—that rover isn’t a soft robot, of course, but it operates under similar constraints in that it can’t rely on conventional electronic systems for autonomous navigation or control. It turns out that there are plenty of clever ways to use mechanical (or in this case, pneumatic) intelligence to make robots with relatively complex autonomous behaviors, meaning that in the future, soft (or soft-ish) robots could find valuable roles in situations where using a non-compliant system is not a good option.

For more on why we should be so excited about soft robots and just how soft a soft robot needs to be, we spoke with Michael Tolley, who runs the Bioinspired Robotics and Design Lab at UCSD, and Dylan Drotman, the paper’s first author.

IEEE Spectrum: What can soft robots do for us that more rigid robotic designs can’t?

Michael Tolley: At the very highest level, one of the fundamental assumptions of robotics is that you have rigid bodies connected at joints, and all your motion happens at these joints. That's a really nice approach because it makes the math easy, frankly, and it simplifies control. But when you look around us in nature, even though animals do have bones and joints, the way we interact with the world is much more complicated than that simple story. I’m interested in where we can take advantage of material properties in robotics. If you look at robots that have to operate in very unknown environments, I think you can build in some of the intelligence for how to deal with those environments into the body of the robot itself. And that’s the category this work really falls under—it's about navigating the world.

Dylan Drotman: Walking through confined spaces is a good example. With the rigid legged robot, you would have to completely change the way that the legs move to walk through a confined space, while if you have flexible legs, like the robot in our paper, you can use relatively simple control strategies to squeeze through an area you wouldn’t be able to get through with a rigid system. 

How smart can a soft robot get?

Drotman: Right now we have a sensor on the front that's connected through a fluidic transmission to a bistable valve that causes the robot to reverse. We could add other sensors around the robot to allow it to change direction whenever it runs into an obstacle to effectively make an electronics-free version of a Roomba.

Tolley: Stepping back a little bit from that, one could make an argument that we’re using basic memory elements to generate very basic signals. There’s nothing in principle that would stop someone from making a pneumatic computer—it’s just very complicated to make something that complex. I think you could build on this and do more intelligent decision making, but using this specific design and the components we’re using, it’s likely to be things that are more direct responses to the environment. 

How well would robots like these scale down?

Drotman: At the moment we’re manufacturing these components by hand, so the idea would be to make something more like a printed circuit board instead, and looking at how the channel sizes and the valve design would affect the actuation properties. We’ll also be coming up with new circuits, and different designs for the circuits themselves.

Tolley: Down to centimeter or millimeter scale, I don’t think you’d have fundamental fluid flow problems. I think you’re going to be limited more by system design constraints. You’ll have to be able to locomote while carrying around your pressure source, and possibly some other components that are also still rigid. When you start to talk about really small scales, though, it's not as clear to me that you really need an intrinsically soft robot. If you think about insects, their structural geometry can make them behave like they’re soft, but they’re not intrinsically soft.

Should we be thinking about soft robots and compliant robots in the same way, or are they fundamentally different?

Tolley: There’s certainly a connection between the two. You could have a compliant robot that behaves in a very similar way to an intrinsically soft robot, or a robot made of intrinsically soft materials. At that point, it comes down to design and manufacturing and practical limitations on what you can make. I think when you get down to small scales, the two sort of get connected. 

There was some interesting work several years ago on using explosions to power soft robots. Is that still a thing?

Tolley: One of the opportunities with soft robots is that with material compliance, you have the potential to store energy. I think there’s exciting potential there for rapid motion with a soft body. Combustion is one way of doing that with power coming from a chemical source all at once, but you could also use a relatively weak muscle that over time stores up energy in a soft body and then releases it. 

Is it realistic to expect complete softness from soft robots, or will they likely always have rigid components because they have to store or generate and move pressurized gas somehow?

Tolley: If you look in nature, you do have soft pumps like the heart, but although it’s soft, it’s still relatively stiff. Like, if you grab a heart, it’s not totally squishy. I haven’t done it, but I’d imagine. If you have a container that you’re pressurizing, it has to be stiff enough to not just blow up like a balloon. Certainly pneumatics or hydraulics are not the only way to go for soft actuators; there has been some really nice work on smart muscles and smart materials like hydraulic electrostatic (HASEL) actuators. They seem promising, but all of these actuators have challenges. We’ve chosen to stick with pressurized pneumatics in the near term; longer term, I think you’ll start to see more of these smart material actuators become more practical.

Personally, I don’t have any problem with soft robots having some rigid components. Most animals on land have some rigid components, but they can still take advantage of being soft, so it’s probably going to be a combination. But I do also like the vision of making an entirely soft, squishy thing.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

Hmm, did anything interesting happen in robotics yesterday?

Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:

[ Mars 2020 ]

PLEN hopes you had a happy Valentine's Day!

[ PLEN ]

Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.

[ Unitree ]

Thanks Xingxing!

Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile RoboSpy Iguana gets involved in a snot sneezing competition after the marine iguanas return from the sea.

[ Spy in the Wild ]

Tails, it turns out, are useful for almost everything.

[ DART Lab ]

Partnered with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Use of closed suction is recommended to minimise aerosol generated during this procedure. This robotic method avoids staff exposure to the virus to further protect the NHS.

[ Extend Robotics ]

Fotokite is a safe, practical way to do local surveillance with a drone.

I just wish they still had a consumer version :(

[ Fotokite ]

How to confuse fish.

[ Harvard ]

Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully-autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally-relevant environment.

[ Army ]

Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!

[ Flexiv ]

Thanks Yunfan!

An inchworm-inspired crawling robot (iCrawl) is a 5-DOF robot with two legs, each with an electromagnetic foot to crawl on metal pipe surfaces. The robot uses a passive foot-cap underneath each electromagnetic foot, enabling it to be a versatile pipe-crawler. The robot has the ability to crawl on metal pipes of various curvatures in horizontal and vertical directions. The robot can be used as a new robotic solution to assist close inspection outside pipelines, thus minimizing downtime in the oil and gas industry.

[ Paper ]

Thanks Poramate!

A short film about Robot Wars from Blender Magazine in 1995.

[ YouTube ]

While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.

[ ETHZ ]

Thanks Markus!

Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.

[ IFRR ]

Thanks Fan!

Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.

[ IFRR ]

At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia nor encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual inertial navigation, obstacle avoidance, and rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

[ UPenn ]

We report on a series of workshops with musicians and robotics engineers aimed to study how human and machine improvisation can be explored through interdisciplinary design research. In the first workshop, we posed two leading questions to participants. First, what can AI and robotics learn by how improvisers think about time, space, actions, and decisions? Second, how can improvisation and musical instruments be enhanced by AI and robotics? The workshop included sessions led by the musicians, which provided an overview of the theory and practice of musical improvisation. In other sessions, AI and robotics researchers introduced AI principles to the musicians. Two smaller follow-up workshops comprised of only engineering and information science students provided an opportunity to elaborate on the principles covered in the first workshop. The workshops revealed parallels and discrepancies in the conceptualization of improvisation between musicians and engineers. These thematic differences could inform considerations for future designers of improvising robots.

In this paper we present a surveillance system for early detection of escapers from a restricted area based on a new swarming mobility model called CROMM-MS (Chaotic Rössler Mobility Model for Multi-Swarms). CROMM-MS is designed for controlling the trajectories of heterogeneous multi-swarms of aerial, ground and marine unmanned vehicles with important features such as prioritising early detections and success rate. A new Competitive Coevolutionary Genetic Algorithm (CompCGA) is proposed to optimise the vehicles’ parameters and escapers’ evasion ability using a predator-prey approach. Our results show that CROMM-MS is not only viable for surveillance tasks but also competitive with state-of-the-art approaches.
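
For readers unfamiliar with the Rössler system that gives CROMM-MS its name, the sketch below integrates the standard Rössler equations and uses the (x, y) components as a chaotic patrol trajectory. The CROMM-MS model itself (multi-swarm coordination, prioritization, evasion) is defined in the paper; the parameters, step size, and waypoint mapping here are only illustrative.

```python
# Illustrative only: integrates the classic Rössler system and treats its (x, y)
# components as a chaotic 2D patrol trajectory. Not the CROMM-MS model itself.
import numpy as np

def rossler_step(state, a=0.2, b=0.2, c=5.7, dt=0.01):
    """One Euler step of dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c)."""
    x, y, z = state
    dx = -y - z
    dy = x + a * y
    dz = b + z * (x - c)
    return np.array([x + dx * dt, y + dy * dt, z + dz * dt])

state = np.array([1.0, 1.0, 1.0])
waypoints = []
for _ in range(20000):
    state = rossler_step(state)
    waypoints.append(state[:2])  # use (x, y) as the vehicle's next patrol waypoint

waypoints = np.array(waypoints)
print(waypoints.min(axis=0), waypoints.max(axis=0))  # rough extent of the patrol area
```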

They used to call it “Seven Minutes of Terror”—a NASA probe would slice into the atmosphere of Mars at more than 20,000 kilometers per hour; slow itself with a heat shield, parachute, and rocket engines; and somehow land intact on the surface, just six or seven minutes later, while its makers waited helplessly on Earth. The computer-animated landing videos NASA produced before previous Mars missions—in 2004, 2008, and 2012—became online sensations. “If any one thing doesn’t work just right,” said NASA engineer Tom Rivellini in the last one, “it’s game over.”

NASA is now trying again, with the Perseverance rover and the tiny Ingenuity drone bolted to its undercarriage. NASA will be live-streaming the landing (across many video and social media platforms as well as in a Spanish language feed and in an immersive, 360-degree view) beginning at 11:15 a.m. PST/2:15 p.m. EST/19:15 UTC on Thursday, 18 February 2021. 

While this year’s animated landing video is as dramatic as ever, the tone has changed. “The models and simulations of landing at Jezero crater have assessed the probability of landing safely to be above 99 percent,” says Swati Mohan, the guidance, navigation and controls operations lead for the mission.

There isn’t a trace of arrogance in her voice as she says this. She’s been working on this mission for five years, has teammates who were around for NASA’s first Mars rover in 1997, and knows what they’re up against. Yes, they say, 99 percent reliability is realistic. 

The biggest advance over past missions is a system called Terrain Relative Navigation—TRN for short. In essence, it gives the spacecraft a way to know precisely where it’s headed, so it can steer clear of hazards on the very jagged landscapes that scientists most want to explore. If all goes as planned, Perseverance will image the Martian surface in rapid sequence as it plows toward its landing site, and compare what it sees to onboard maps of the ground below. The onboard database is primarily based on high-resolution images from NASA’s Mars Reconnaissance Orbiter, which has been mapping the planet from an altitude of 250 kilometers since 2006. Its images have a resolution of 30 cm per pixel. 

“This is kind of along the same lines as what the Apollo astronauts did with people in the loop, back in the day. Those guys looked out the window,” says Allen Chen, the mission’s entry, descent, and landing lead. “For the first time here on Mars, we’re automating that.”

Illustration: NASA/JPL-Caltech. NASA’s Perseverance Mars mission follows a carefully choreographed sequence of steps, pictured here, that—with many engineers on the ground holding their breath—will hopefully end in the newest Mars rover ready to explore the red planet.

There will still be plenty of anxious controllers at NASA’s Jet Propulsion Laboratory in California. After all, the spacecraft will be on its own, about 209 million kilometers from Earth, far enough away that its radio signals will take more than 11 minutes to reach home. The ship should reach the surface four minutes before engineers even know it has entered the Martian atmosphere. “Landing on Mars is hard enough,” says Thomas Zurbuchen, NASA’s associate administrator for science missions. “It is not guaranteed that we will be successful.” 

But the new navigation technology makes a very risky landing possible. Jezero crater, which was probably once a lake at the end of a river delta, has been on scientists’ shortlist since the 1990s as a place to look for signs of past life on Mars. But engineers voted against it until this mission. Previous landers used radar, which Mohan likens to “closing your eyes and holding your hands out in front of you. You can use that to slow down and to stop. But with your eyes closed you can't really control where you're coming down.”

Everything happens fast as Perseverance comes in, following a long arcing path. Fewer than 90 seconds before scheduled touchdown, and about 2,100 meters above the Martian surface, the TRN system makes its calculations. Its rapid-fire imaging should by then have told it where it is relative to the ground below, and from that it can project its likely touchdown spot. If the ship is headed for a ridge, a crevice, or a dangerous outcropping of rock, the computer will send commands to eight downward-facing rocket engines to change the descent trajectory. 

In that final minute, as the spacecraft slows from 300 kilometers per hour to zero, the TRN system can shift the touchdown spot by up to 330 meters. The safe targets map in Perseverance’s memory is detailed enough, the team says, that the ship should be able to reach a suitable location for a safe landing. 
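
As a highly simplified picture of that last decision step, the sketch below compares a projected touchdown point against a list of safe sites and picks the nearest one within the quoted 330-meter divert capability. The real flight software's map matching and guidance are, of course, far more sophisticated, and all inputs here are hypothetical.

```python
# Illustrative sketch of the divert decision only; the actual TRN flight software
# is far more sophisticated. All positions below are hypothetical map-frame values.
import numpy as np

MAX_DIVERT_M = 330.0  # maximum touchdown shift quoted for the TRN system

def choose_landing_site(projected_xy, safe_sites_xy):
    """Pick the nearest safe site reachable within the divert capability,
    or return None if no safe site is close enough."""
    safe_sites_xy = np.asarray(safe_sites_xy, dtype=float)
    dists = np.linalg.norm(safe_sites_xy - projected_xy, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] <= MAX_DIVERT_M:
        return safe_sites_xy[nearest]
    return None  # in reality, guidance would have fallback behavior

projected = np.array([120.0, -40.0])                       # projected touchdown point, meters
safe_map = [(150.0, -10.0), (600.0, 300.0), (90.0, -80.0)]  # safe-targets map entries
print(choose_landing_site(projected, safe_map))
```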

“It’s able to thread the needle of all these different hazards to land in the safe spots in between these hazards,” says Mohan, “and by landing amongst the hazards it’s also landing amongst the scientific features of interest.”

Update as of 3:55 p.m. EST, 18 Feb. 2021: Perseverance has landed! 

“I’m safe on Mars. Perseverance will get you anywhere. #CountdownToMars” (NASA's Perseverance Mars Rover, @NASAPersevere, February 18, 2021)

This paper presents a novel approach to implement hierarchical, dense and dynamic reconstruction of 3D objects based on the VDB (Variational Dynamic B+ Trees) data structure for robotic applications. The scene reconstruction is done by the integration of depth images using the Truncated Signed Distance Field (TSDF). The proposed reconstruction method is based on dynamic trees in order to provide reconstruction results similar to the current state-of-the-art methods (i.e., complete volumes, hashing voxels and hierarchical volumes) in terms of execution time, but with a direct multi-level representation that remains real-time. This representation provides two major advantages: it is a hierarchical and unbounded space representation. The proposed method is optimally implemented for a GPU architecture, exploiting the parallel-processing capabilities of this hardware. A series of experiments is presented to prove the performance of this approach on a robot arm platform.
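
To make the TSDF integration step concrete, here is a minimal sketch of updating a single voxel from one depth image with a pinhole camera model. It is only an illustration: the paper's VDB-based hierarchical structure and GPU implementation are not reproduced, and the truncation distance, intrinsics, and depth values are hypothetical.

```python
# Minimal TSDF update for one voxel (illustrative; not the paper's VDB/GPU pipeline).
import numpy as np

TRUNCATION = 0.05  # truncation distance in meters (hypothetical)

def update_voxel(tsdf, weight, voxel_point_cam, depth_image, K):
    """Integrate one depth image into a single voxel, given the voxel's position in
    the camera frame and the pinhole intrinsics matrix K."""
    x, y, z = voxel_point_cam
    if z <= 0:
        return tsdf, weight
    u = int(round(K[0, 0] * x / z + K[0, 2]))
    v = int(round(K[1, 1] * y / z + K[1, 2]))
    if not (0 <= v < depth_image.shape[0] and 0 <= u < depth_image.shape[1]):
        return tsdf, weight
    sdf = depth_image[v, u] - z          # signed distance along the viewing ray
    if sdf < -TRUNCATION:
        return tsdf, weight              # voxel is hidden behind the observed surface
    d = min(1.0, sdf / TRUNCATION)       # truncate and normalize
    new_weight = weight + 1.0
    new_tsdf = (tsdf * weight + d) / new_weight  # running weighted average
    return new_tsdf, new_weight

# Example: one voxel 1 m in front of the camera, simple intrinsics, flat depth image.
K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 1.02)
print(update_voxel(0.0, 0.0, (0.0, 0.0, 1.0), depth, K))
```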

Developing high-strength continuum robots is challenging without compromising on the overall size of the robot, the complexity of the design, and the range of motion. In this work, we explore how the load capacity of continuum robots can be drastically improved through a combination of backbone design and convergent actuation path routing. We propose a rhombus-patterned backbone structure composed of thin-walled plates that can be easily fabricated via 3D printing and exhibits high shear and torsional stiffness while still allowing bending. We then explore the effect of combined parallel and converging actuation path routing and its influence on continuum robot strength. Experimentally determined compliance matrices are generated for straight, translation, and bending configurations for analysis and discussion. A robotic actuation platform is constructed to demonstrate the applicability of these design choices.
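
For readers unfamiliar with the term, a compliance matrix C maps an applied tip wrench w (forces and moments) to the resulting tip displacement, dx = C w. A minimal sketch of how such a matrix can be estimated from measured wrench/displacement pairs by least squares is shown below; the numbers are synthetic and illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
C_true = np.diag([0.5, 0.5, 0.1, 2.0, 2.0, 5.0]) * 1e-3     # made-up compliance values
W = rng.normal(size=(50, 6))                                 # 50 applied test wrenches
X = W @ C_true.T + 1e-5 * rng.normal(size=(50, 6))           # noisy measured displacements
Ct_est, *_ = np.linalg.lstsq(W, X, rcond=None)               # solves W @ C.T ~= X
C_est = Ct_est.T
print(np.round(C_est, 4))                                    # should recover C_true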

Tucked under the belly of the Perseverance rover that will be landing on Mars in just a few days is a little helicopter called Ingenuity. Its body is the size of a box of tissues, slung underneath a pair of 1.2m carbon fiber rotors on top of four spindly legs. It weighs just 1.8kg, but the importance of its mission is massive. If everything goes according to plan, Ingenuity will become the first aircraft to fly on Mars. 

In order for this to work, Ingenuity has to survive frigid temperatures, manage merciless power constraints, and attempt a series of 90-second flights while separated from Earth by 10 light-minutes, which means that real-time communication or control is impossible. To understand how NASA is making this happen, below is our conversation with Tim Canham, Mars Helicopter Operations Lead at NASA’s Jet Propulsion Laboratory (JPL).

It’s important to keep the Mars Helicopter mission in context, because this is a technology demonstration. The primary goal here is to fly on Mars, full stop. Ingenuity won’t be doing any of the same sort of science that the Perseverance rover is designed to do. If we’re lucky, the helicopter will take a couple of in-flight pictures, but that’s about it. The importance and the value of the mission is to show that flight on Mars is possible, and to collect data that will enable the next generation of Martian rotorcraft, which will be able to do more ambitious and exciting things. 

Here’s an animation from JPL showing the most complex mission that’s planned right now:

Ingenuity isn’t intended to do anything complicated because everything about the Mars helicopter itself is inherently complicated already. Flying a helicopter on Mars is incredibly challenging for a bunch of reasons, including the very thin atmosphere (just 1% the density of Earth’s), the power requirements, and the communications limitations. 
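
A back-of-the-envelope calculation shows why that 1 percent figure dominates the design. The numbers below use standard rotor scaling (thrust roughly proportional to air density times tip speed squared) and textbook values for Mars gravity; they are rough estimates, not JPL figures.

mass_kg = 1.8
g_earth, g_mars = 9.81, 3.71            # m/s^2
rho_ratio = 0.01                        # Martian surface air density relative to Earth, roughly
weight_mars_n = mass_kg * g_mars        # about 6.7 N of thrust needed to hover
thrust_ratio = g_mars / g_earth         # about 0.38 of the thrust needed on Earth
# With thrust proportional to density times tip speed squared and the same rotor,
# the tip speed must grow by sqrt(thrust_ratio / rho_ratio), roughly a factor of six.
tip_speed_factor = (thrust_ratio / rho_ratio) ** 0.5
print(round(weight_mars_n, 1), round(tip_speed_factor, 1))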

With all this in mind, getting Ingenuity to Mars in one piece and having it take off and land even once is a definite victory for NASA, JPL’s Tim Canham tells us. Canham helped develop the software architecture that runs Ingenuity. As the Ingenuity operations lead, he’s now focused on flight planning and coordinating with the Perseverance rover team. We spoke with Canham to get a better understanding of how Ingenuity will be relying on autonomy for its upcoming flights on Mars.

IEEE Spectrum: What can you tell us about Ingenuity’s hardware?

Tim Canham: Since Ingenuity is classified as a technology demo, JPL is willing to accept more risk. The main unmanned projects like rovers and deep space explorers are what’s called Class B missions, in which there are many people working on ruggedized hardware and software over many years. With a technology demo, JPL is willing to try new ways of doing things. So we essentially went out and used a lot of off-the-shelf consumer hardware. 

There are some avionics components that are very tough and radiation resistant, but much of the technology is commercial grade. The processor board that we used, for instance, is a Snapdragon 801, which is manufactured by Qualcomm. It’s essentially a cell phone class processor, and the board is very small. But ironically, because it’s relatively modern technology, it’s vastly more powerful than the processors that are flying on the rover. We actually have a couple of orders of magnitude more computing power than the rover does, because we need it. Our guidance loops are running at 500 Hz in order to maintain control in the atmosphere that we're flying in. And on top of that, we’re capturing images and analyzing features and tracking them from frame to frame at 30 Hz, and so there's some pretty serious computing power needed for that. And none of the avionics that NASA is currently flying are anywhere near powerful enough. In some cases we literally ordered parts from SparkFun [Electronics]. Our philosophy was, “this is commercial hardware, but we’ll test it, and if it works well, we’ll use it.”

Can you describe what sensors Ingenuity uses for navigation?

We use a cellphone-grade IMU, a laser altimeter (from SparkFun), and a downward-pointing VGA camera for monocular feature tracking. A few dozen features are compared frame to frame to track relative position to figure out direction and speed, which is how the helicopter navigates. It’s all done by estimates of position, as opposed to memorizing features or creating a map.
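
A rough sketch of that kind of frame-to-frame tracking is shown below. It is not Ingenuity’s flight software; OpenCV and the parameter values are stand-ins, and the idea is simply that the median pixel motion of tracked features, scaled by the laser altitude, gives a ground-relative velocity estimate.

import cv2
import numpy as np

def ground_velocity(prev_gray, curr_gray, altitude_m, focal_px, dt):
    # Track a few dozen corners from the previous downward-looking frame into the
    # current one, then convert the median pixel shift into meters per second.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50, qualityLevel=0.01, minDistance=10)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    if not ok.any():
        return None
    flow_px = np.median((nxt[ok] - pts[ok]).reshape(-1, 2), axis=0)   # median pixel motion
    # Pinhole model: ground displacement is roughly pixel shift * altitude / focal length.
    # The sign convention depends on how the camera is mounted.
    ground_disp_m = flow_px * altitude_m / focal_px
    return ground_disp_m / dt                                         # meters per second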

Photo: NASA/JPL-Caltech NASA’s Ingenuity Mars helicopter viewed from below, showing its laser altimeter and navigation camera.

We also have an inclinometer that we use to establish the tilt of the ground just during takeoff, and we have a cellphone-grade 13 megapixel color camera that isn’t used for navigation, but we’re going to try to take some nice pictures while we’re flying. It’s called the RTE, because everything has to have an acronym. There was an idea of putting hazard detection in the system early on, but we didn’t have the schedule to do that.

In what sense is the helicopter operating autonomously?

You can almost think of the helicopter like a traditional JPL spacecraft in some ways. It has a sequencing engine on board, and we write a set of sequences, a series of commands, and we upload that file to the helicopter and it executes those commands. We plan the guidance part of the flights on the ground in simulation as a series of waypoints, and those waypoints are the sequence of commands that we send to the guidance software. When we want the helicopter to fly, we tell it to go, and the guidance software takes over and executes taking off, traversing to the different waypoints, and then landing.

This means the flights are pre-planned very specifically. It’s not true autonomy, in the sense that we don’t give it goals and rules and it’s not doing any on-board high-level reasoning. It’s sort of half-way autonomy. The brute force way would be a human sitting there and flying it around with joysticks, and obviously we can’t do that on Mars. But there wasn’t time in the project to develop really detailed autonomy on the helicopter, so we tell it the flight plan ahead of time, and it executes a trajectory that’s been pre-planned for it. As it’s flying, it’s autonomously trying to make sure it stays on that trajectory in the presence of wind gusts or other things that may happen in that environment. But it’s really designed to follow a trajectory that we plan on the ground before it flies.

This isn’t necessarily an advanced autonomy proof of concept; something like telling it to “go take a picture of that rock” would be more advanced autonomy, in my view. This, by contrast, is really a scripted flight, and the primary goal is to prove that we can fly around on Mars successfully. There are future mission concepts that we’re working through now that would involve a bigger helicopter with much more autonomy on board that may be able to [achieve] that kind of advanced autonomy. But if you remember Mars Pathfinder, the very first rover that drove on Mars, it had a very basic mission: drive in a circle around the base station and try to take some pictures and samples of some rocks. So, as a technology demo, we’re trying to be modest about what we try to do the first time with the helicopter, too. 
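
For illustration only, a pre-planned flight of the kind Canham describes, a takeoff, a short traverse through a few waypoints, and a landing back at the start, might be written out as a command sequence along these lines. Every field name and value here is invented; it is not a JPL sequence format.

flight_plan = [
    {"cmd": "TAKEOFF", "altitude_m": 3.0},
    {"cmd": "WAYPOINT", "north_m": 0.0, "east_m": 0.0, "altitude_m": 3.0, "hold_s": 10.0},
    {"cmd": "WAYPOINT", "north_m": 5.0, "east_m": 0.0, "altitude_m": 3.0, "hold_s": 0.0},
    {"cmd": "WAYPOINT", "north_m": 0.0, "east_m": 0.0, "altitude_m": 3.0, "hold_s": 0.0},
    {"cmd": "LAND"},
]
for step in flight_plan:                 # the on-board sequencer would execute these in order
    print(step["cmd"], {k: v for k, v in step.items() if k != "cmd"})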

Is there any situation where something might cause the helicopter to decide to deviate from its pre-planned trajectory?

The guidance software is always making sure that all the sensors are healthy and producing good data. If a sensor goes wonky, the helicopter really has one response, which is to take the last propagated state and just try to land and then tell us what happened and wait for us to deal with it. The helicopter won’t try to continue its flight if a sensor fails. All three sensors that we use during flight are necessary to complete the flight because of how their data is fused together.
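
That fail-safe amounts to a simple rule, which can be restated in a few lines. The names and data structures below are invented for illustration; they are not the helicopter’s actual software.

def guidance_step(sensor_health, trajectory_cmd, last_propagated_state):
    # Fly the planned trajectory only while every flight-critical sensor is healthy;
    # otherwise land on the last propagated state estimate and report the fault.
    if all(sensor_health.values()):
        return {"action": "follow_trajectory", "command": trajectory_cmd}
    return {"action": "land", "state": last_propagated_state, "report": "sensor fault"}

# Example: the navigation camera drops out mid-flight.
print(guidance_step({"imu": True, "altimeter": True, "camera": False},
                    trajectory_cmd="waypoint_3", last_propagated_state=(1.2, 0.4, 2.9)))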

Illustration: NASA/JPL-Caltech An artist’s illustration of Ingenuity flying on Mars.

How will you decide where to fly?

We’ll be doing what we’re calling a site selection process, and that’s even starting now from orbital images of where we anticipate the rover is going to land. Orbital images are the coarse way of identifying potential sites, and then the rover will go to one of those sites and do a very extensive survey of the area. Based on the rockiness, the slope, and even how textured the area is for feature tracking, we’ll select a site for the helicopter to operate in. There are some tradeoffs, because the safest surface is one that’s featureless, with no rocks, but that’s also the worst surface to do feature tracking on, so we have to find a balance that might include a bunch of little rocks that make good features to track but no big rocks that might make it more difficult to land.

What kind of flights are you hoping the robot will make?

Because we’re trying this out for the first time, we have three main flights planned, and all three of them have the helicopter landing in the same spot that it took off from, because we know we’ll have a surveyed safe area. We have a limited 30 day window, and if we have the time, then we might try to land it in a different area that looks safe from a distance. But the first three canonical flights are all going to be takeoff, fly, and then come back and land in the same spot.

JPL has a history of building robots that are able to remain functional long after their primary mission is over. With only a 30 day mission, does that mean that barring some kind of accident, the rover will end up just driving away from a perfectly functional Mars helicopter?

Yeah, that’s the plan, because the rover has to get on with its primary mission. And it does consume resources to support us. And so they gave us this 30 day window, which we’re very grateful for of course, and then they’re moving on, whether we’re still working or not. Whatever wild and crazy stuff we want to do, we’ll have to do within our 30 days. We don’t actually have the final two flights planned yet. Depending on how quickly the first three go, we may have a week or so to try some more exotic things. But we’re really concentrating on those first three flights.

Our ultimate success criterion is a single flight, so if we get that first flight, we’re going to be doing high fives. The next two flights are going to be stretching that envelope a little bit. And then the final two flights are, hey, let’s see how adventurous we can get. We might fly off a hundred meters, or do a big circle or something like that. But the whole point is understanding how it flies, and that means doing our first flight and seeing how well it performs.

Let’s say everything goes great on your first four flights and you have one flight left. Would you rather try something really adventurous that might not work, or something a little safer that’s more likely to work but that wouldn’t teach you quite as much?

That’s a good question, and we’ll have to figure that out. If we have one flight left and they’re going to leave us behind anyway, maybe we could try something bold. But we haven’t really gotten that far yet. We’re really concentrating on those first three flights, and everything after that is a bonus.

Anything else you can share with us that engineers might find particularly interesting?

This is the first time we’ll be flying Linux on Mars. We’re actually running on a Linux operating system. The software framework that we’re using is one that we developed at JPL for cubesats and instruments, and we open-sourced it a few years ago. So, you can get the software framework that’s flying on the Mars helicopter and use it on your own project. It’s kind of an open-source victory, because we’re flying an open-source operating system, an open-source flight software framework, and commercial parts that you can buy off the shelf if you wanted to do this yourself someday. This is a new thing for JPL because they tend to like what’s very safe and proven, but a lot of people are very excited about it, and we’re really looking forward to doing it.

Background: Sawing of bone is an essential part of an autopsy procedure. An oscillating saw generates noise, fine infectious dust particles, and a risk of traumatic injury, all of which pose occupational hazards to autopsy workers, especially during the COVID-19 pandemic.

Objectives: The first goal of this study was to compare noise and bone-dust emissions from an oscillating saw and a robotic autopsy saw during an autopsy. The second goal was to evaluate the performance of the new robotic autopsy method during skull opening. The third goal was to encourage mortuary workers to adopt robotic technology during the autopsy procedure to protect themselves from occupational injuries as well as airborne infections.

Materials and Methods: The experiments involved a comparison of noise levels and aerosol production during skull cutting between the oscillating saw and the robotic autopsy saw.

Results: The results confirmed that noise production from the robotic autopsy saw was lower than that from the oscillating saw. Bone-dust levels produced by the robotic autopsy saw were higher than those produced by the oscillating saw, but not higher than the dust concentrations already present before the skull was opened.

Conclusions: The new robotic system might be an alternative for protecting healthcare workers against occupational injury. Further research might consider other health hazards present in the autopsy workplace and apply robot-assisted technology in autopsy procedures.
