IEEE Spectrum Automation

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

Shiny robotic cat toy blimp!

I am pretty sure this is Google Translate getting things wrong, but the About page mentions that the blimp will “take you to your destination after appearing in the death of God.”

[ NTT DoCoMo ] via [ RobotStart ]

If you have yet to see this real-time video of Perseverance landing on Mars, drop everything and watch it.

During the press conference, someone commented that this is the first time anyone on the team who designed and built this system has ever seen it in operation, since it could only be tested at the component scale on Earth. This landing system has blown my mind since Curiosity.

Here's a better look at where Percy ended up:

[ NASA ]

The fact that Digit can just walk up and down wet, slippery, muddy hills without breaking a sweat is (still) astonishing.

[ Agility Robotics ]

SkyMul wants drones to take over the task of tying rebar, which looks like just the sort of thing we'd rather robots be doing so that we don't have to:

The tech certainly looks promising, and SkyMul says that they're looking for some additional support to bring things to the pilot stage.

[ SkyMul ]

Thanks Eohan!

Flatcat is a pet-like, playful robot that reacts to touch. Flatcat feels everything exactly: Cuddle with it, romp around with it, or just watch it do weird things of its own accord. We are sure that flatcat will amaze you, like us, and caress your soul.

I don't totally understand it, but I want it anyway.

[ Flatcat ]

Thanks Oswald!

This is how I would have a romantic dinner date if I couldn't get together in person. Herman the UR3 and an OptiTrack system let me remotely make a romantic meal!

[ Dave's Armoury ]

Here, we propose a novel design of deformable propellers inspired by dragonfly wings. The structure of these propellers includes a flexible segment similar to the nodus on a dragonfly wing. This flexible segment can bend, twist and even fold upon collision, absorbing force upon impact and protecting the propeller from damage.

[ Paper ]

Thanks Van!

In the 1970s, the CIA created the world's first miniaturized unmanned aerial vehicle, or UAV, which was intended to be a clandestine listening device. The Insectothopter was never deployed operationally, but was still revolutionary for its time.

It may never have been deployed (not that they'll admit to, anyway), but it was definitely operational and could fly controllably.

[ CIA ]

Research labs are starting to get Digits, which means we're going to get a much better idea of what its limitations are.

[ Ohio State ]

This video shows the latest achievements for LOLA walking on undetected uneven terrain. The robot is technically blind, not using any camera-based or prior information on the terrain.

[ TUM ]

We define "robotic contact juggling" to be the purposeful control of the motion of a three-dimensional smooth object as it rolls freely on a motion-controlled robot manipulator, or “hand.” While specific examples of robotic contact juggling have been studied before, in this paper we provide the first general formulation and solution method for the case of an arbitrary smooth object in single-point rolling contact on an arbitrary smooth hand.

[ Paper ]

Thanks Fan!

A couple of new cobots from ABB, designed to work safely around humans.

[ ABB ]

Thanks Fan!

It's worth watching at least a little bit of Adam Savage testing Spot's new arm, because we get to see Spot try, fail, and eventually succeed at an autonomous door-opening behavior at the 10 minute mark.

[ Tested ]

SVR discusses diversity with guest speakers Dr. Michelle Johnson from the GRASP Lab at UPenn; Dr. Ariel Anders from Women in Robotics and first technical hire at Robust.ai; Alka Roy from The Responsible Innovation Project; and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics. The discussion here is moderated by Dr. Ken Goldberg—artist, roboticist and Director of the CITRIS People and Robots Lab—and Andra Keay from Silicon Valley Robotics.

[ SVR ]

RAS presents a Soft Robotics Debate on Bioinspired vs. Biohybrid Design.

In this debate, we will bring together experts in Bioinspiration and Biohybrid design to discuss the necessary steps to make more competent soft robots. We will try to answer whether bioinspired research should focus more on developing new bioinspired material and structures or on the integration of living and artificial structures in biohybrid designs.

[ RAS SoRo ]

IFRR presents a Colloquium on Human Robot Interaction.

Across many application domains, robots are expected to work in human environments, side by side with people. The users will vary substantially in background, training, physical and cognitive abilities, and readiness to adopt technology. Robotic products are expected to not only be intuitive, easy to use, and responsive to the needs and states of their users, but they must also be designed with these differences in mind, making human-robot interaction (HRI) a key area of research.

[ IFRR ]

Vijay Kumar, Nemirovsky Family Dean and Professor at Penn Engineering, gives an introduction to ENIAC day and David Patterson, Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley, speaks about the legacy of the ENIAC and its impact on computer architecture today. This video is comprised of lectures one and two of nine total lectures in the ENIAC Day series.

There are more interesting ENIAC videos at the link below, but we'll highlight this particular one, about the women of the ENIAC, also known as the First Programmers.

[ ENIAC Day ]

Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.

Whether it's because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?

Roboticists understand that robots, fundamentally, are tools. We build them, we program them, and even the autonomous ones are just following the instructions that we’ve coded into them. However, that same appearance of agency that makes robots so compelling means that it may not be clear to people without much experience with or exposure to real robots that a robot itself isn’t inherently good or bad—rather, as a tool, a robot is a reflection of its designers and users.

This can put robotics companies into a difficult position. When they sell a robot to someone, that person can, hypothetically, use the robot in any way they want. Of course, this is the case with every tool, but it’s the autonomous aspect that makes robots unique. I would argue that autonomy brings with it an implied association between a robot and its maker, or in this case, the company that develops and sells it. I’m not saying that this association is necessarily a reasonable one, but I think that it exists, even if that robot has been sold to someone else who has assumed full control over everything it does.

“All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon”  —Robert Playter, Boston Dynamics

Robotics companies are certainly aware of this, because many of them are very careful about who they sell their robots to, and very explicit about what they want their robots to be doing. But once a robot is out in the wild, as it were, how far should that responsibility extend? And realistically, how far can it extend? Should robotics companies be held accountable for what their robots do in the world, or should we accept that once a robot is sold to someone else, responsibility is transferred as well? And what can be done if a robot is being used in an irresponsible or unethical way that could have a negative impact on the robotics community?

For perspective on this, we contacted folks from three different robotics companies, each of which has experience selling distinctive mobile robots to commercial end users. We asked them the same five questions about the responsibility that robotics companies have regarding the robots that they sell, and here’s what they had to say:

Do you have any restrictions on what people can do with your robots? If so, what are they, and if not, why not?

Péter Fankhauser, CEO, ANYbotics:

We closely work together with our customers to make sure that our solution provides the right approach for their problem. Thereby, the target use case is clear from the beginning and we do not work with customers interested in using our robot ANYmal outside the intended target applications. Specifically, we strictly exclude any military or weaponized uses and since the foundation of ANYbotics it is close to our heart to make human work easier, safer, and more enjoyable.

Robert Playter, CEO, Boston Dynamics:

Yes, we have restrictions on what people can do with our robots, which are outlined in our Terms and Conditions of Sale. All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon. Spot, just like any product, must be used in compliance with the law. 

Ryan Gariepy, CTO, Clearpath Robotics:

We do have strict restrictions and KYC processes which are based primarily on Canadian export control regulations. They depend on the type of equipment sold as well as where it is going. More generally, we also will not sell or support a robot if we know that it will create an uncontrolled safety hazard or if we have reason to believe that the buyer is unqualified to use the product. And, as always, we do not support using our products for the development of fully autonomous weapons systems.

More broadly, if you sell someone a robot, why should they be restricted in what they can do with it?

Péter Fankhauser, ANYbotics: We see the robot less as a simple object but more as an artificial workforce. This implies to us that the usage is closely coupled with the transfer of the robot and both the customer and the provider agree what the robot is expected to do. This approach is supported by what we hear from our customers with an increasing interest to pay for the robots as a service or per use.

Robert Playter, Boston Dynamics: We’re offering a product for sale. We’re going to do the best we can to stop bad actors from using our technology for harm, but we don’t have the control to regulate every use. That said, we believe that our business will be best served if our technology is used for peaceful purposes—to work alongside people as trusted assistants and remove them from harm’s way. We do not want to see our technology used to cause harm or promote violence. Our restrictions are similar to those of other manufacturers or technology companies that take steps to reduce or eliminate the violent or unlawful use of their products. 

Ryan Gariepy, Clearpath Robotics: Assuming the organization doing the restricting is a private organization and the robot and its software is sold vs. leased or “managed,” there aren't strong legal reasons to restrict use. That being said, the manufacturer likewise has no obligation to continue supporting that specific robot or customer going forward. However, given that we are only at the very edge of how robots will reshape a great deal of society, it is in the best interest for the manufacturer and user to be honest with each other about their respective goals. Right now, you're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.

“If a robot is being used in a way that is irresponsible due to safety: intervene! If it’s unethical: speak up!” —Péter Fankhauser, ANYbotics

What can you realistically do to make sure that people who buy your robots use them in the ways that you intend?

Péter Fankhauser, ANYbotics: We maintain a close collaboration with our customers to ensure their success with our solution. So for us, we have refrained from technical solutions to block unintended use.

Robert Playter, Boston Dynamics: We vet our customers to make sure that their desired applications are things that Spot can support, and are in alignment with our Terms and Conditions of Sale. We’ve turned away customers whose applications aren’t a good match with our technology. If customers misuse our technology, we’re clear in our Terms of Sale that their violations may void our warranty and prevent their robots from being updated, serviced, repaired, or replaced. We may also repossess robots that are not purchased, but leased. Finally, we will refuse future sales to customers that violate our Terms of Sale.

Ryan Gariepy, Clearpath Robotics: We typically work with our clients ahead of the purchase to make sure their expectations match reality, in particular on aspects like safety, supervisory requirements, and usability. It's far worse to sell a robot that'll sit on a shelf or worse, cause harm, than to not sell a robot at all, so we prefer to reduce the risk of this situation in advance of receiving an order or shipping a robot.

How do you evaluate the merit of edge cases, for example if someone wants to use your robot in research or art that may push the boundaries of what you personally think is responsible or ethical?

Péter Fankhauser, ANYbotics: It’s about the dialog, understanding, and figuring out alternatives that work for all involved parties and the earlier you can have this dialog the better.

Robert Playter, Boston Dynamics: There’s a clear line between exploring robots in research and art, and using the robot for violent or illegal purposes. 

Ryan Gariepy, Clearpath Robotics: We have sold thousands of robots to hundreds of clients, and I do not recall the last situation that was not covered by a combination of export control and a general evaluation of the client's goals and expectations. I'm sure this will change as robots continue to drop in price and increase in flexibility and usability.

“You're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.” —Ryan Gariepy, Clearpath Robotics

What should roboticists do if we see a robot being used in a way that we feel is unethical or irresponsible?

Péter Fankhauser, ANYbotics: If it’s irresponsible due to safety: intervene! If it’s unethical: speak up!

Robert Playter, Boston Dynamics: We want robots to be beneficial for humanity, which includes the notion of not causing harm. As an industry, we think robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.

Ryan Gariepy, Clearpath Robotics: On a one off basis, they should speak to a combination of the user, the supplier or suppliers, the media, and, if safety is an immediate concern, regulatory or government agencies. If the situation in question risks becoming commonplace and is not being taken seriously, they should speak up more generally in appropriate forums—conferences, industry groups, standards bodies, and the like.

As more and more robots representing different capabilities become commercially available, these issues are likely to come up more frequently. The three companies we talked to certainly don’t represent every viewpoint, and we did reach out to other companies who declined to comment. But I would think (I would hope?) that everyone in the robotics community can agree that robots should be used in a way that makes people’s lives better. What “better” means in the context of art and research and even robots in the military may not always be easy to define, and inevitably there’ll be disagreement as to what is ethical and responsible, and what isn’t.

We’ll keep on talking about it, though, and do our best to help the robotics community to continue growing and evolving in a positive way. Let us know what you think in the comments.

At a press conference this afternoon, NASA released a new video showing, in real-time and full color, the entire descent and landing of the Perseverance Mars rover. The video begins with the deployment of the parachute, and ends with the Skycrane cutting the rover free and flying away. It’s the most mind-blowing three minutes of video I have ever seen. 

Image: NASA/JPL. The cameras that recorded video during the Mars 2020 rover’s landing on Mars.

Some very quick context: during landing, multiple cameras were recording the event, and this video is a combination of these. No audio was recorded, so you’re hearing a feed from JPL mission control.

Here’s the video:

We’ll have a lot more on the Perseverance rover, but for now, we’re just going to let this video sink in.

[ Mars 2020 ]

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from any of the thousands of abandoned coal mines near the country’s surface. It’s also a threat to transport, such as road and rail. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and that still might not be enough to chart a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well-mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its space, the drone was able to plot its own course in an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Generally, when people talk about soft robots, the robots are only mostly soft. There are some components that are very difficult to make soft, including pressure sources and the necessary electronics to direct that pressure between different soft actuators in a way that can be used for propulsion. What’s really cool about this robot is that researchers have managed to take a pressure source (either a single tether or an onboard CO2 cartridge) and direct it to four different legs, each with three different air chambers, using an oscillating three valve circuit made entirely of soft materials. 

Photo: UCSD. The pneumatic circuit that powers and controls the soft quadruped.

The inspiration for this can be found in biology—natural organisms, including quadrupeds, use nervous system components called central pattern generators (CPGs) to prompt repetitive motions with limbs that are used for walking, flying, and swimming. This is obviously more complicated in some organisms than in others, and is typically mediated by sensory feedback, but the underlying structure of a CPG is basically just a repeating circuit that drives muscles in sequence to produce a stable, continuous gait. In this case, we’ve got pneumatic muscles being driven in opposing pairs, resulting in a diagonal couplet gait, where diagonally opposed limbs rotate forwards and backwards at the same time.

Diagram: Science Robotics. (J) Pneumatic logic circuit for rhythmic leg motion. A constant positive pressure source (P+) applied to three inverter components causes a high-pressure state to propagate around the circuit, with a delay at each inverter. While the input to one inverter is high, the attached actuator (i.e., A1, A2, or A3) is inflated. This sequence of high-pressure states causes each pair of legs of the robot to rotate in a direction determined by the pneumatic connections. (K) By reversing the sequence of activation of the pneumatic oscillator circuit, the attached actuators inflate in a new sequence (A1, A3, and A2), causing (L) the legs of the robot to rotate in reverse. (M) Schematic bottom view of the robot with the directions of leg motions indicated for forward walking.

Diagram: Science Robotics. Each of the valves acts as an inverter by switching the normally closed half (top) to open and the normally open half (bottom) to closed.

The circuit itself is made up of three bistable pneumatic valves connected by tubing that acts as a delay by providing resistance to the gas moving through it that can be adjusted by altering the tube’s length and inner diameter. Within the circuit, the movement of the pressurized gas acts as both a source of energy and as a signal, since wherever the pressure is in the circuit is where the legs are moving. The simplest circuit uses only three valves, and can keep the robot walking in one single direction, but more valves can add more complex leg control options. For example, the researchers were able to use seven valves to tune the phase offset of the gait, and even just one additional valve (albeit of a slightly more complex design) could enable reversal of the system, causing the robot to walk backwards in response to input from a soft sensor. And with another complex valve, a manual (tethered) controller could be used for omnidirectional movement.
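
If you want a feel for how a ring of inverters turns a constant pressure source into a rhythmic gait signal, here's a minimal simulation sketch. To be clear, this is our own toy model, not code from the paper: each tube is treated as a simple FIFO delay line, each valve as an inverter, and the delay length is an arbitrary placeholder; the seeded high-pressure state is the one that then propagates around the ring, as in the circuit diagram above.

```python
# Toy model of the three-valve pneumatic oscillator (our own sketch, not code
# from the paper): each connecting tube is a FIFO delay line, each soft valve
# is an inverter, and actuator Ai inflates while the input to inverter i is
# high. The delay length is an arbitrary placeholder.

from collections import deque

TUBE_DELAY_STEPS = 5   # assumed transport delay per tube, in simulation steps
SIM_STEPS = 60

def simulate_ring_oscillator(n_valves=3, delay=TUBE_DELAY_STEPS, steps=SIM_STEPS):
    # tubes[i] carries valve i's output toward valve i+1 with a transport delay.
    tubes = [deque([0] * delay, maxlen=delay) for _ in range(n_valves)]
    tubes[0][0] = 1  # seed the high-pressure state that propagates around the ring
    history = []
    for _ in range(steps):
        delayed = [tubes[i][-1] for i in range(n_valves)]                # exits tube i
        inputs = [delayed[(i - 1) % n_valves] for i in range(n_valves)]  # into valve i
        outputs = [1 - x for x in inputs]                                # each valve inverts
        for i in range(n_valves):
            tubes[i].appendleft(outputs[i])
        history.append(inputs)  # actuator Ai is inflated while input i is high
    return history

if __name__ == "__main__":
    for t, state in enumerate(simulate_ring_oscillator()):
        inflated = [f"A{i + 1}" for i, s in enumerate(state) if s]
        print(f"t={t:02d}  inflated: " + (", ".join(inflated) if inflated else "none"))
```

Running it prints which actuators are inflated at each step; the high phase marches around the ring, which is exactly the kind of repeating sequence that drives the legs in a diagonal couplet gait.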

This work has some similarities to the rover that JPL is developing to explore Venus—that rover isn’t a soft robot, of course, but it operates under similar constraints in that it can’t rely on conventional electronic systems for autonomous navigation or control. It turns out that there are plenty of clever ways to use mechanical (or in this case, pneumatic) intelligence to make robots with relatively complex autonomous behaviors, meaning that in the future, soft (or soft-ish) robots could find valuable roles in situations where using a non-compliant system is not a good option.

For more on why we should be so excited about soft robots and just how soft a soft robot needs to be, we spoke with Michael Tolley, who runs the Bioinspired Robotics and Design Lab at UCSD, and Dylan Drotman, the paper’s first author.

IEEE Spectrum: What can soft robots do for us that more rigid robotic designs can’t?

Michael Tolley: At the very highest level, one of the fundamental assumptions of robotics is that you have rigid bodies connected at joints, and all your motion happens at these joints. That's a really nice approach because it makes the math easy, frankly, and it simplifies control. But when you look around us in nature, even though animals do have bones and joints, the way we interact with the world is much more complicated than that simple story. I’m interested in where we can take advantage of material properties in robotics. If you look at robots that have to operate in very unknown environments, I think you can build in some of the intelligence for how to deal with those environments into the body of the robot itself. And that’s the category this work really falls under—it's about navigating the world.

Dylan Drotman: Walking through confined spaces is a good example. With the rigid legged robot, you would have to completely change the way that the legs move to walk through a confined space, while if you have flexible legs, like the robot in our paper, you can use relatively simple control strategies to squeeze through an area you wouldn’t be able to get through with a rigid system. 

How smart can a soft robot get?

Drotman: Right now we have a sensor on the front that's connected through a fluidic transmission to a bistable valve that causes the robot to reverse. We could add other sensors around the robot to allow it to change direction whenever it runs into an obstacle to effectively make an electronics-free version of a Roomba.

Tolley: Stepping back a little bit from that, one could make an argument that we’re using basic memory elements to generate very basic signals. There’s nothing in principle that would stop someone from making a pneumatic computer—it’s just very complicated to make something that complex. I think you could build on this and do more intelligent decision making, but using this specific design and the components we’re using, it’s likely to be things that are more direct responses to the environment. 

How well would robots like these scale down?

Drotman: At the moment we’re manufacturing these components by hand, so the idea would be to make something more like a printed circuit board instead, and looking at how the channel sizes and the valve design would affect the actuation properties. We’ll also be coming up with new circuits, and different designs for the circuits themselves.

Tolley: Down to centimeter or millimeter scale, I don’t think you’d have fundamental fluid flow problems. I think you’re going to be limited more by system design constraints. You’ll have to be able to locomote while carrying around your pressure source, and possibly some other components that are also still rigid. When you start to talk about really small scales, though, it's not as clear to me that you really need an intrinsically soft robot. If you think about insects, their structural geometry can make them behave like they’re soft, but they’re not intrinsically soft.

Should we be thinking about soft robots and compliant robots in the same way, or are they fundamentally different?

Tolley: There’s certainly a connection between the two. You could have a compliant robot that behaves in a very similar way to an intrinsically soft robot, or a robot made of intrinsically soft materials. At that point, it comes down to design and manufacturing and practical limitations on what you can make. I think when you get down to small scales, the two sort of get connected. 

There was some interesting work several years ago on using explosions to power soft robots. Is that still a thing?

Tolley: One of the opportunities with soft robots is that with material compliance, you have the potential to store energy. I think there’s exciting potential there for rapid motion with a soft body. Combustion is one way of doing that with power coming from a chemical source all at once, but you could also use a relatively weak muscle that over time stores up energy in a soft body and then releases it. 

Is it realistic to expect complete softness from soft robots, or will they likely always have rigid components because they have to store or generate and move pressurized gas somehow?

Tolley: If you look in nature, you do have soft pumps like the heart, but although it’s soft, it’s still relatively stiff. Like, if you grab a heart, it’s not totally squishy. I haven’t done it, but I’d imagine. If you have a container that you’re pressurizing, it has to be stiff enough to not just blow up like a balloon. Certainly pneumatics or hydraulics are not the only way to go for soft actuators; there has been some really nice work on smart muscles and smart materials like hydraulic electrostatic (HASEL) actuators. They seem promising, but all of these actuators have challenges. We’ve chosen to stick with pressurized pneumatics in the near term; longer term, I think you’ll start to see more of these smart material actuators become more practical.

Personally, I don’t have any problem with soft robots having some rigid components. Most animals on land have some rigid components, but they can still take advantage of being soft, so it’s probably going to be a combination. But I do also like the vision of making an entirely soft, squishy thing.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

Hmm, did anything interesting happen in robotics yesterday?

Obviously, we're going to have tons more on the Mars Rover and Mars Helicopter over the next days, weeks, months, years, and (if JPL's track record has anything to say about it) decades. Meantime, here's what's going to happen over the next day or two:

[ Mars 2020 ]

PLEN hopes you had a happy Valentine's Day!

[ PLEN ]

Unitree dressed up a whole bunch of Laikago quadrupeds to take part in the 2021 Spring Festival Gala in China.

[ Unitree ]

Thanks Xingxing!

Marine iguanas compete for the best nesting sites on the Galapagos Islands. Meanwhile RoboSpy Iguana gets involved in a snot sneezing competition after the marine iguanas return from the sea.

[ Spy in the Wild ]

Tails, it turns out, are useful for almost everything.

[ DART Lab ]

Partnered with MD-TEC, this video demonstrates the use of teleoperated robotic arms and a virtual reality interface to perform closed suction for self-ventilating tracheostomy patients during the COVID-19 outbreak. Use of closed suction is recommended to minimise aerosol generated during this procedure. This robotic method avoids staff exposure to the virus to further protect the NHS.

[ Extend Robotics ]

Fotokite is a safe, practical way to do local surveillance with a drone.

I just wish they still had a consumer version :(

[ Fotokite ]

How to confuse fish.

[ Harvard ]

Army researchers recently expanded their research area for robotics to a site just north of Baltimore. Earlier this year, Army researchers performed the first fully-autonomous tests onsite using an unmanned ground vehicle test bed platform, which serves as the standard baseline configuration for multiple programmatic efforts within the laboratory. As a means to transition from simulation-based testing, the primary purpose of this test event was to capture relevant data in a live, operationally-relevant environment.

[ Army ]

Flexiv's new RIZON 10 robot hopes you had a happy Valentine's Day!

[ Flexiv ]

Thanks Yunfan!

An inchworm-inspired crawling robot (iCrawl) is a 5 DOF robot with two legs, each with an electromagnetic foot to crawl on metal pipe surfaces. The robot uses a passive foot-cap underneath an electromagnetic foot, enabling it to be a versatile pipe-crawler. The robot has the ability to crawl on metal pipes of various curvatures in horizontal and vertical directions. The robot can be used as a new robotic solution to assist close inspection outside the pipelines, thus minimizing downtime in the oil and gas industry.

[ Paper ]

Thanks Poramate!

A short film about Robot Wars from Blender Magazine in 1995.

[ YouTube ]

While modern cameras provide machines with a very well-developed sense of vision, robots still lack such a comprehensive solution for their sense of touch. The talk will present examples of why the sense of touch can prove crucial for a wide range of robotic applications, and a tech demo will introduce a novel sensing technology targeting the next generation of soft robotic skins. The prototype of the tactile sensor developed at ETH Zurich exploits the advances in camera technology to reconstruct the forces applied to a soft membrane. This technology has the potential to revolutionize robotic manipulation, human-robot interaction, and prosthetics.

[ ETHZ ]

Thanks Markus!

Quadrupedal robotics has reached a level of performance and maturity that enables some of the most advanced real-world applications with autonomous mobile robots. Driven by excellent research in academia and industry all around the world, a growing number of platforms with different skills target different applications and markets. We have invited a selection of experts with long-standing experience in this vibrant research area.

[ IFRR ]

Thanks Fan!

Since January 2020, more than 300 different robots in over 40 countries have been used to cope with some aspect of the impact of the coronavirus pandemic on society. The majority of these robots have been used to support clinical care and public safety, allowing responders to work safely and to handle the surge in infections. This panel will discuss how robots have been successfully used and what is needed, both in terms of fundamental research and policy, for robotics to be prepared for future emergencies.

[ IFRR ]

At Skydio, we ship autonomous robots that are flown at scale in complex, unknown environments every day. We’ve invested six years of R&D into handling extreme visual scenarios not typically considered by academia nor encountered by cars, ground robots, or AR applications. Drones are commonly in scenes with few or no semantic priors on the environment and must deftly navigate thin objects, extreme lighting, camera artifacts, motion blur, textureless surfaces, vibrations, dirt, smudges, and fog. These challenges are daunting for classical vision, because photometric signals are simply inconsistent. And yet, there is no ground truth for direct supervision of deep networks. We’ll take a detailed look at these issues and how we’ve tackled them to push the state of the art in visual inertial navigation, obstacle avoidance, and rapid trajectory planning. We will also cover the new capabilities on top of our core navigation engine to autonomously map complex scenes and capture all surfaces, by performing real-time 3D reconstruction across multiple flights.

[ UPenn ]

They used to call it “Seven Minutes of Terror”—a NASA probe would slice into the atmosphere of Mars at more than 20,000 kilometers per hour; slow itself with a heat shield, parachute, and rocket engines; and somehow land intact on the surface, just six or seven minutes later, while its makers waited helplessly on Earth. The computer-animated landing videos NASA produced before previous Mars missions—in 2004, 2008, and 2012—became online sensations. “If any one thing doesn’t work just right,” said NASA engineer Tom Rivellini in the last one, “it’s game over.”

NASA is now trying again, with the Perseverance rover and the tiny Ingenuity drone bolted to its undercarriage. NASA will be live-streaming the landing (across many video and social media platforms as well as in a Spanish language feed and in an immersive, 360-degree view) beginning at 11:15 a.m. PST/2:15 p.m. EST/19:15 UTC on Thursday, 18 February 2021. 

While this year’s animated landing video is as dramatic as ever, the tone has changed. “The models and simulations of landing at Jezero crater have assessed the probability of landing safely to be above 99 percent,” says Swati Mohan, the guidance, navigation and controls operations lead for the mission.

There isn’t a trace of arrogance in her voice as she says this. She’s been working on this mission for five years, has teammates who were around for NASA’s first Mars rover in 1997, and knows what they’re up against. Yes, they say, 99 percent reliability is realistic. 

The biggest advance over past missions is a system called Terrain Relative Navigation—TRN for short. In essence, it gives the spacecraft a way to know precisely where it’s headed, so it can steer clear of hazards on the very jagged landscapes that scientists most want to explore. If all goes as planned, Perseverance will image the Martian surface in rapid sequence as it plows toward its landing site, and compare what it sees to onboard maps of the ground below. The onboard database is primarily based on high-resolution images from NASA’s Mars Reconnaissance Orbiter, which has been mapping the planet from an altitude of 250 kilometers since 2006. Its images have a resolution of 30 cm per pixel. 
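
To make the map-matching idea a bit more concrete, here's a rough sketch of the core operation: correlating a descent image against an onboard orbital map to get a position fix. This is emphatically not NASA's Lander Vision System (which also has to handle attitude, scale, and lighting), just a normalized cross-correlation demo using OpenCV; the only number borrowed from the mission is the roughly 30 cm/pixel map resolution.

```python
# Illustrative sketch only, not NASA's Lander Vision System: estimate where the
# spacecraft is by matching a descent-camera image against an onboard orbital
# map using normalized cross-correlation. Assumes grayscale, roughly
# scale/rotation-aligned imagery; the real system corrects for attitude first.

import cv2
import numpy as np

MAP_RES_M_PER_PX = 0.30  # onboard map resolution, roughly 30 cm/pixel (MRO imagery)

def locate_on_map(descent_image: np.ndarray, onboard_map: np.ndarray):
    """Return the (x, y) map pixel where the descent image matches best, and the score."""
    result = cv2.matchTemplate(onboard_map, descent_image, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val

def divert_to_target_m(current_px, safe_target_px):
    """Offset in meters from the projected touchdown point to a mapped safe target."""
    dx = (safe_target_px[0] - current_px[0]) * MAP_RES_M_PER_PX
    dy = (safe_target_px[1] - current_px[1]) * MAP_RES_M_PER_PX
    return dx, dy

if __name__ == "__main__":
    # Synthetic demo: crop a patch out of a fake "orbital map" and find it again.
    rng = np.random.default_rng(0)
    onboard_map = rng.integers(0, 255, (1024, 1024), dtype=np.uint8)
    patch = onboard_map[400:528, 300:428].copy()
    loc, score = locate_on_map(patch, onboard_map)
    print(f"matched at {loc} (expected (300, 400)), score = {score:.2f}")
    print("divert needed:", divert_to_target_m(loc, (320, 440)), "meters")
```

Turning the resulting pixel offset into a divert command is then mostly a matter of scaling by the map resolution, which is what the second helper gestures at.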

“This is kind of along the same lines as what the Apollo astronauts did with people in the loop, back in the day. Those guys looked out the window,” says Allen Chen, the mission’s entry, descent, and landing lead. “For the first time here on Mars, we’re automating that.”

Illustration: NASA/JPL-Caltech. NASA’s Perseverance Mars mission follows a carefully choreographed sequence of steps, pictured here, that—with many engineers on the ground holding their breath—will hopefully end in the newest Mars rover ready to explore the red planet.

There will still be plenty of anxious controllers at NASA’s Jet Propulsion Laboratory in California. After all, the spacecraft will be on its own, about 209 million kilometers from Earth, far enough away that its radio signals will take more than 11 minutes to reach home. The ship should reach the surface four minutes before engineers even know it has entered the Martian atmosphere. “Landing on Mars is hard enough,” says Thomas Zurbuchen, NASA’s associate administrator for science missions. “It is not guaranteed that we will be successful.” 

But the new navigation technology makes a very risky landing possible. Jezero crater, which was probably once a lake at the end of a river delta, has been on scientists’ shortlist since the 1990s as a place to look for signs of past life on Mars. But engineers voted against it until this mission. Previous landers used radar, which Mohan likens to “closing your eyes and holding your hands out in front of you. You can use that to slow down and to stop. But with your eyes closed you can't really control where you're coming down.”

Everything happens fast as Perseverance comes in, following a long arcing path. Fewer than 90 seconds before scheduled touchdown, and about 2,100 meters above the Martian surface, the TRN system makes its calculations. Its rapid-fire imaging should by then have told it where it is relative to the ground below, and from that it can project its likely touchdown spot. If the ship is headed for a ridge, a crevice, or a dangerous outcropping of rock, the computer will send commands to eight downward-facing rocket engines to change the descent trajectory. 

In that final minute, as the spacecraft slows from 300 kilometers per hour to zero, the TRN system can shift the touchdown spot by up to 330 meters. The safe targets map in Perseverance’s memory is detailed enough, the team says, that the ship should be able to reach a suitable location for a safe landing. 

“It’s able to thread the needle of all these different hazards to land in the safe spots in between these hazards,” says Mohan, “and by landing amongst the hazards it’s also landing amongst the scientific features of interest.”

Update as of 3:55 p.m. EST, 18 Feb. 2021: Perseverance has landed! 

I’m safe on Mars. Perseverance will get you anywhere.

#CountdownToMars

— NASA's Perseverance Mars Rover (@NASAPersevere) February 18, 2021

Tucked under the belly of the Perseverance rover that will be landing on Mars in just a few days is a little helicopter called Ingenuity. Its body is the size of a box of tissues, slung underneath a pair of 1.2m carbon fiber rotors on top of four spindly legs. It weighs just 1.8kg, but the importance of its mission is massive. If everything goes according to plan, Ingenuity will become the first aircraft to fly on Mars. 

In order for this to work, Ingenuity has to survive frigid temperatures, manage merciless power constraints, and attempt a series of 90-second flights while separated from Earth by 10 light minutes, which means that real-time communication or control is impossible. To understand how NASA is making this happen, below is our conversation with Tim Canham, Mars Helicopter Operations Lead at NASA’s Jet Propulsion Laboratory (JPL).

It’s important to keep the Mars Helicopter mission in context, because this is a technology demonstration. The primary goal here is to fly on Mars, full stop. Ingenuity won’t be doing any of the same sort of science that the Perseverance rover is designed to do. If we’re lucky, the helicopter will take a couple of in-flight pictures, but that’s about it. The importance and the value of the mission is to show that flight on Mars is possible, and to collect data that will enable the next generation of Martian rotorcraft, which will be able to do more ambitious and exciting things. 

Here’s an animation from JPL showing the most complex mission that’s planned right now:

Ingenuity isn’t intended to do anything complicated because everything about the Mars helicopter itself is inherently complicated already. Flying a helicopter on Mars is incredibly challenging for a bunch of reasons, including the very thin atmosphere (just 1% the density of Earth’s), the power requirements, and the communications limitations. 

With all this in mind, getting Ingenuity to Mars in one piece and having it take off and land even once is a definite victory for NASA, JPL’s Tim Canham tells us. Canham helped develop the software architecture that runs Ingenuity. As the Ingenuity operations lead, he’s now focused on flight planning and coordinating with the Perseverance rover team. We spoke with Canham to get a better understanding of how Ingenuity will be relying on autonomy for its upcoming flights on Mars.

IEEE Spectrum: What can you tell us about Ingenuity’s hardware?

Tim Canham: Since Ingenuity is classified as a technology demo, JPL is willing to accept more risk. The main unmanned projects like rovers and deep space explorers are what’s called Class B missions, in which there are many people working on ruggedized hardware and software over many years. With a technology demo, JPL is willing to try new ways of doing things. So we essentially went out and used a lot of off-the-shelf consumer hardware. 

There are some avionics components that are very tough and radiation resistant, but much of the technology is commercial grade. The processor board that we used, for instance, is a Snapdragon 801, which is manufactured by Qualcomm. It’s essentially a cell phone class processor, and the board is very small. But ironically, because it’s relatively modern technology, it’s vastly more powerful than the processors that are flying on the rover. We actually have a couple of orders of magnitude more computing power than the rover does, because we need it. Our guidance loops are running at 500 Hz in order to maintain control in the atmosphere that we're flying in. And on top of that, we’re capturing images and analyzing features and tracking them from frame to frame at 30 Hz, and so there's some pretty serious computing power needed for that. And none of the avionics that NASA is currently flying are anywhere near powerful enough. In some cases we literally ordered parts from SparkFun [Electronics]. Our philosophy was, “this is commercial hardware, but we’ll test it, and if it works well, we’ll use it.”

Can you describe what sensors Ingenuity uses for navigation?

We use a cellphone-grade IMU, a laser altimeter (from SparkFun), and a downward-pointing VGA camera for monocular feature tracking. A few dozen features are compared frame to frame to track relative position to figure out direction and speed, which is how the helicopter navigates. It’s all done by estimates of position, as opposed to memorizing features or creating a map.

Photo: NASA/JPL-Caltech. NASA’s Ingenuity Mars helicopter viewed from below, showing its laser altimeter and navigation camera.

We also have an inclinometer that we use to establish the tilt of the ground just during takeoff, and we have a cellphone-grade 13 megapixel color camera that isn’t used for navigation, but we’re going to try to take some nice pictures while we’re flying. It’s called the RTE, because everything has to have an acronym. There was an idea of putting hazard detection in the system early on, but we didn’t have the schedule to do that.
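
To give a rough sense of what that frame-to-frame feature tracking looks like in practice, here's an illustrative sketch built from off-the-shelf OpenCV calls (Shi-Tomasi corners plus pyramidal Lucas-Kanade flow). The focal length, feature count, and the simple altitude-based scaling are placeholder assumptions of ours, not values from Ingenuity's flight software.

```python
# Illustrative sketch, not Ingenuity's flight code: track a few dozen features
# from a downward-pointing camera between frames and, combined with altitude,
# estimate horizontal velocity. FOCAL_PX and MAX_FEATURES are placeholders.

import cv2
import numpy as np

FOCAL_PX = 300.0    # assumed focal length of the nav camera, in pixels
MAX_FEATURES = 48   # "a few dozen features"

def ground_velocity(prev_frame, curr_frame, altitude_m, dt_s):
    """Estimate horizontal velocity (vx, vy) in m/s from two grayscale frames."""
    pts_prev = cv2.goodFeaturesToTrack(prev_frame, maxCorners=MAX_FEATURES,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None:
        return None  # no trackable texture (e.g., a featureless surface)
    pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev_frame, curr_frame,
                                                      pts_prev, None)
    good = status.ravel() == 1
    if good.sum() < 8:
        return None
    # Median pixel displacement of the surviving features between frames.
    flow_px = np.median(pts_curr[good] - pts_prev[good], axis=0).ravel()
    # Pinhole model: ground displacement ~= pixel displacement * altitude / focal length.
    ground_m = flow_px * altitude_m / FOCAL_PX
    return ground_m[0] / dt_s, ground_m[1] / dt_s

if __name__ == "__main__":
    # Synthetic check: shift a textured frame 3 pixels to the right.
    rng = np.random.default_rng(1)
    frame0 = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (5, 5), 0)
    frame1 = np.roll(frame0, shift=3, axis=1)
    print(ground_velocity(frame0, frame1, altitude_m=3.0, dt_s=1 / 30))
```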

In what sense is the helicopter operating autonomously?

You can almost think of the helicopter like a traditional JPL spacecraft in some ways. It has a sequencing engine on board, and we write a set of sequences, a series of commands, and we upload that file to the helicopter and it executes those commands. We plan the guidance part of the flights on the ground in simulation as a series of waypoints, and those waypoints are the sequence of commands that we send to the guidance software. When we want the helicopter to fly, we tell it to go, and the guidance software takes over and executes taking off, traversing to the different waypoints, and then landing.

This means the flights are pre-planned very specifically. It’s not true autonomy, in the sense that we don’t give it goals and rules and it’s not doing any on-board high-level reasoning. It’s sort of half-way autonomy. The brute force way would be a human sitting there and flying it around with joysticks, and obviously we can’t do that on Mars. But there wasn’t time in the project to develop really detailed autonomy on the helicopter, so we tell it the flight plan ahead of time, and it executes a trajectory that’s been pre-planned for it. As it’s flying, it’s autonomously trying to make sure it stays on that trajectory in the presence of wind gusts or other things that may happen in that environment. But it’s really designed to follow a trajectory that we plan on the ground before it flies.

This isn’t necessarily an advanced autonomy proof of concept—something like telling it to “go take a picture of that rock” would be more advanced autonomy, in my view. Whereas, this is really a scripted flight, the primary goal is to prove that we can fly around on Mars successfully. There are future mission concepts that we’re working through now that would involve a bigger helicopter with much more autonomy on board that may be able to [achieve] that kind of advanced autonomy. But if you remember Mars Pathfinder, the very first rover that drove on Mars, it had a very basic mission: Drive in a circle around the base station and try to take some pictures and samples of some rocks. So, as a technology demo, we’re trying to be modest about what we try to do the first time with the helicopter, too. 
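
As a toy illustration of that upload-and-execute pattern, here's what a pre-planned flight could look like in miniature: a list of commands planned on the ground, serialized, uplinked, and stepped through by an onboard sequencer. The command names, fields, and JSON format are all invented for this sketch and are not JPL's actual sequencing language.

```python
# Toy sketch of the pattern described above, not JPL's sequence language:
# the flight is planned on the ground as a list of commands, serialized,
# uplinked, and stepped through by an onboard sequencing engine. Command
# names, fields, and the JSON format are invented for this illustration.

import json

EXAMPLE_FLIGHT = [
    {"cmd": "TAKEOFF", "target_alt_m": 3.0},
    {"cmd": "WAYPOINT", "x_m": 0.0, "y_m": 0.0, "alt_m": 3.0, "hold_s": 5},
    {"cmd": "WAYPOINT", "x_m": 2.0, "y_m": 0.0, "alt_m": 3.0, "hold_s": 0},
    {"cmd": "WAYPOINT", "x_m": 0.0, "y_m": 0.0, "alt_m": 3.0, "hold_s": 0},
    {"cmd": "LAND"},  # land back at the surveyed takeoff spot
]

def execute_sequence(uplinked_file: str, guidance):
    """Onboard side: parse the uplinked file and hand each waypoint to the
    guidance software, which closes the loop against wind gusts itself."""
    for command in json.loads(uplinked_file):
        if command["cmd"] == "TAKEOFF":
            guidance.takeoff(command["target_alt_m"])
        elif command["cmd"] == "WAYPOINT":
            guidance.goto(command["x_m"], command["y_m"], command["alt_m"])
            guidance.hold(command.get("hold_s", 0))
        elif command["cmd"] == "LAND":
            guidance.land()

class PrintGuidance:
    """Minimal stand-in for the guidance layer so the sketch runs end to end."""
    def takeoff(self, alt): print(f"take off to {alt} m")
    def goto(self, x, y, alt): print(f"fly to ({x}, {y}) at {alt} m")
    def hold(self, seconds): print(f"hold for {seconds} s")
    def land(self): print("land")

if __name__ == "__main__":
    uplink = json.dumps(EXAMPLE_FLIGHT)        # ground side: plan, then uplink
    execute_sequence(uplink, PrintGuidance())  # helicopter side: execute
```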

Is there any situation where something might cause the helicopter to decide to deviate from its pre-planned trajectory?

The guidance software is always making sure that all the sensors are healthy and producing good data. If a sensor goes wonky, the helicopter really has one response, which is to take the last propagated state and just try to land and then tell us what happened and wait for us to deal with it. The helicopter won’t try to continue its flight if a sensor fails. All three sensors that we use during flight are necessary to complete the flight because of how their data is fused together.
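
In other words, the fault response amounts to a guard in the guidance loop. The sketch below is our own pseudocode-style illustration with invented names, not flight software; it just captures the land-and-report behavior Canham describes.

```python
# Our own sketch (invented names, not flight code) of the fault response
# described above: every guidance cycle checks sensor health, and on any
# failure the helicopter lands from its last propagated state and reports.

def guidance_cycle(sensor_health, last_propagated_state, guidance):
    """One iteration of an assumed 500 Hz guidance loop with the
    'land on sensor failure' behavior; `guidance` is a stand-in object."""
    failed = [name for name, healthy in sensor_health.items() if not healthy]
    if failed:
        # No partial modes: IMU, altimeter, and camera data are fused, so
        # losing any one of them means land now and wait for the ground team.
        guidance.land_from(last_propagated_state)
        guidance.log_fault(failed)
        return "LANDING"
    guidance.continue_trajectory()
    return "FLYING"

class _PrintGuidance:
    """Minimal stand-in so the sketch runs."""
    def land_from(self, state): print(f"landing from {state}")
    def log_fault(self, failed): print(f"sensor fault: {failed}")
    def continue_trajectory(self): print("tracking planned trajectory")

if __name__ == "__main__":
    g = _PrintGuidance()
    guidance_cycle({"imu": True, "altimeter": True, "camera": True}, (0.0, 0.0, 3.0), g)
    guidance_cycle({"imu": True, "altimeter": False, "camera": True}, (1.2, 0.0, 3.0), g)
```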

Illustration: NASA/JPL-Caltech. An artist’s illustration of Ingenuity flying on Mars.

How will you decide where to fly?

We’ll be doing what we’re calling a site selection process, and that’s even starting now from orbital images of where we anticipate the rover is going to land. Orbital images are the coarse way of identifying potential sites, and then the rover will go to one of those sites and do a very extensive survey of the area. Based on the rockiness, the slope, and even how textured the area is for feature tracking, we’ll select a site for the helicopter to operate in. There are some tradeoffs, because the safest surface is one that’s featureless, with no rocks, but that’s also the worst surface to do feature tracking on, so we have to find a balance that might include a bunch of little rocks that make good features to track but no big rocks that might make it more difficult to land.

What kind of flights are you hoping the robot will make?

Because we’re trying this out for the first time, we have three main flights planned, and all three of them have the helicopter landing in the same spot that it took off from, because we know we’ll have a surveyed safe area. We have a limited 30 day window, and if we have the time, then we might try to land it in a different area that looks safe from a distance. But the first three canonical flights are all going to be takeoff, fly, and then come back and land in the same spot.

JPL has a history of building robots that are able to remain functional long after their primary mission is over. With only a 30 day mission, does that mean that barring some kind of accident, the rover will end up just driving away from a perfectly functional Mars helicopter?

Yeah, that’s the plan, because the rover has to get on with its primary mission. And it does consume resources to support us. And so they gave us this 30 day window, which we’re very grateful for of course, and then they’re moving on, whether we’re still working or not. Whatever wild and crazy stuff we want to do, we’ll have to do within our 30 days. We don’t actually have the final two flights planned yet. Depending on how quickly the first three go, we may have a week or so to try some more exotic things. But we’re really concentrating on those first three flights.

Our ultimate success criterion is a single flight, so if we get that first flight, we’re going to be doing high fives. The next two flights are going to be stretching that envelope a little bit. And then the final two flights are, hey, let’s see how adventurous we can get. We might fly off a hundred meters, or do a big circle or something like that. But the whole point is understanding how it flies, and that means doing our first flight and seeing how well it performs.

Let’s say everything goes great on your first four flights and you have one flight left. Would you rather try something really adventurous that might not work, or something a little safer that’s more likely to work but that wouldn’t teach you quite as much?

That’s a good question, and we’ll have to figure that out. If we have one flight left and they’re going to leave us behind anyway, maybe we could try something bold. But we haven’t really gotten that far yet. We’re really concentrating on those first three flights, and everything after that is a bonus.

Anything else you can share with us that engineers might find particularly interesting?

This is the first time we’ll be flying Linux on Mars. We’re actually running on a Linux operating system. The software framework that we’re using is one that we developed at JPL for cubesats and instruments, and we open-sourced it a few years ago. So, you can get the software framework that’s flying on the Mars helicopter, and use it on your own project. It’s kind of an open-source victory, because we’re flying an open-source operating system and an open-source flight software framework and flying commercial parts that you can buy off the shelf if you wanted to do this yourself someday. This is a new thing for JPL because they tend to like what’s very safe and proven, but a lot of people are very excited about it, and we’re really looking forward to doing it.

It’s been very cool to watch 3D printers and laser cutters evolve into fairly common tools over the last decade-ish, finding useful niches across research, industry, and even with hobbyists at home. Capable as these fabricators are, they tend to be good at just one specific thing: making shapes out of polymer. Which is great! But we have all kinds of other techniques for making things that are even more useful, like by adding computers and actuators and stuff like that. You just can’t do that with your 3D printer or laser cutter, because it just does its one thing—which is too bad.

At CHI this year, researchers from MIT CSAIL are presenting LaserFactory, an integrated fabrication system that turns any laser cutter into a device that can (with just a little bit of assistance) build you an entire drone at a level of finish that means when the fabricator is done, the drone can actually fly itself off of the print bed. Sign me up.

There are a couple different components that make up LaserFactory. First, you’ve got a commercial laser cutter to do what commercial laser cutters do—in this case, cutting a quadrotor frame out of plastic. Second, you’ve got a hardware add-on, which is a whole bunch of other stuff that reversibly bolts onto the head of the laser cutter. The add-on includes a silver paste dispenser, a little suction gripper to do pick-and-place, and some actuators, solenoids, and a small vacuum pump. Once the laser has cut out the base structure, the silver paste is dispensed wherever you’d need either a conductive circuit trace or something glued to something else. Then, the suction gripper adds components one by one, moving them from a preloaded storage area into the fabrication area. The last step is to use the laser once more to zap the silver paste to thermally cure it, turning the traces conductive and also soldering components together. Those conductive traces do look a bit messy (the paste spreads out after being deposited), but adding another step of engraving small channels into the substrate can help keep it contained to sub-millimeter widths.

The really clever thing that’s not necessarily obvious from the video is that the hardware add-on is not communicating directly with the laser cutter head that it’s attached to. You’d think this would be an absolutely necessary step, because otherwise how does the add-on know where its location is and what it should be doing? But it does know these things, just indirectly, by using an accelerometer to track the movement of the fabrication head, and then converting specific movements that the fabrication head makes into instructions. To ‘program’ the hardware add-on, then, you just have to embed some custom movement instructions into your fabrication program (like a very specific little shimmy), which the hardware add-on will then read in order to trigger a specific function. This method is a clever one because it makes the hardware add-on more or less agnostic to the kind of fabricator that it’s working with. As long as your fabricator accepts custom movement instructions, this motion-based signaling technique means that it can control the hardware add-on.
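
To make that motion-based signaling a bit more concrete, here is a minimal sketch (in Python, with thresholds, window sizes, and the triggered action all invented for illustration rather than taken from the LaserFactory firmware) of how an add-on could watch its own accelerometer for a distinctive shimmy and treat it as a command:

```python
# Hypothetical sketch of motion-based signaling: the add-on watches its own
# accelerometer and treats a rapid back-and-forth "shimmy" on the X axis as a
# command trigger. Thresholds, window sizes, and the command are all invented
# for illustration; the real LaserFactory firmware may work differently.
from collections import deque

SHIMMY_THRESHOLD = 5.0   # m/s^2, acceleration magnitude that counts as a jerk
SHIMMY_FLIPS = 6         # sign changes needed to count as a deliberate shimmy
WINDOW = 50              # number of recent samples to examine

def make_shimmy_detector(on_trigger):
    samples = deque(maxlen=WINDOW)

    def feed(ax):
        """Call once per accelerometer sample (X axis only, for simplicity)."""
        samples.append(ax)
        strong = [s for s in samples if abs(s) > SHIMMY_THRESHOLD]
        flips = sum(1 for a, b in zip(strong, strong[1:]) if a * b < 0)
        if flips >= SHIMMY_FLIPS:
            samples.clear()      # avoid re-triggering on the same gesture
            on_trigger()

    return feed

# Example: wire the detector to a (hypothetical) dispensing routine.
feed = make_shimmy_detector(lambda: print("trigger: dispense silver paste"))
for ax in [0.1, 6.0, -6.2, 6.1, -5.9, 6.3, -6.0, 6.2, 0.2]:
    feed(ax)
```

The nice thing about this structure is that the add-on never needs a data connection to the fabricator; the motion itself is the communication channel.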

For more about what it would take to get this kind of thing working on a consumer 3D printer, and what the future may hold for personal fabricators, we spoke with first author Martin Nisser via email.

IEEE Spectrum: Besides laser cutters, what other fabricators could your system work with, and what modifications would be required?

Martin Nisser: Our motion-based signaling technique obviates the need to communicate with a particular fabrication platform by encoding the fabrication instructions into the fabrication file itself. For the laser cutter we used, this entailed embedding the instructions into a PDF. This feature makes our add-on agnostic to specific fabrication platforms so that it is portable not only between various laser cutter brands, but even 3D printers—by translating the fabrication instructions to G-code, we were able to deploy our system onto a 3D printer too.

The advantage of this is that our signaling technique can conceivably be used to deploy researchers’ custom hardware add-ons onto any machine that utilizes a 2-axis CNC platform, though modifications would be required first in hardware to physically connect it to different end effectors, and second in software to calibrate it to different accelerations. A drawback to this communications paradigm is that the communication is one-way: platform agnosticism means that the hardware add-on has no access to the fabrication file itself, and so relies on a form of dead reckoning which means it would be unable to self-correct in the event of an error.
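
For the other half of the pipeline, here is an equally hypothetical sketch of what it could look like to embed those commands in an ordinary fabrication file: the toolpath stays plain G-code, and each add-on command becomes a small signature wiggle. The command meanings, feed rates, and wiggle parameters below are all made up for illustration.

```python
# Hypothetical illustration of embedding add-on commands in an ordinary
# fabrication file: the toolpath is plain G-code, and each add-on command is
# encoded as a small signature wiggle that an accelerometer-based add-on can
# recognize. Feed rates, wiggle amplitude, and command meanings are invented.
def wiggle(x, y, amplitude=0.5, repeats=3, feed=3000):
    """Emit a rapid back-and-forth motion around (x, y) as a command marker."""
    lines = []
    for _ in range(repeats):
        lines.append(f"G1 X{x + amplitude:.3f} Y{y:.3f} F{feed}")
        lines.append(f"G1 X{x - amplitude:.3f} Y{y:.3f} F{feed}")
    lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed}")
    return lines

def trace_with_dispense(points):
    """Move along a circuit trace, signaling 'paste on/off' via wiggles."""
    gcode = ["G21 ; millimeters", "G90 ; absolute positioning"]
    x0, y0 = points[0]
    gcode.append(f"G0 X{x0:.3f} Y{y0:.3f}")
    gcode += wiggle(x0, y0)                 # add-on reads this as "paste on"
    for x, y in points[1:]:
        gcode.append(f"G1 X{x:.3f} Y{y:.3f} F1200")
    xe, ye = points[-1]
    gcode += wiggle(xe, ye)                 # add-on reads this as "paste off"
    return "\n".join(gcode)

print(trace_with_dispense([(10, 10), (40, 10), (40, 25)]))
```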

What level of skill or experience does it take to operate LaserFactory? Who do you hope could benefit most from the kind of device that you're developing?

Once LaserFactory is assembled and mounted onto a laser cutter, it requires no further intervention to operate. In the near term, this kind of one-stop-shop for fabrication would be beneficial for researchers, educators, product developers and makers looking to rapidly prototype functional devices such as wearables, robots and printed electronics. It is also a compelling solution for logistically challenging environments such as space, where the ability to create functional devices remotely, on-demand and without human intervention is paramount. 

More generally, users stand to benefit where they need to create physical prototypes for devices but may not have the skills to make them; people shouldn’t in the future be expected to have an engineering degree to build robots any more than they should have a computer science degree to install software.

In what ways could you potentially extend this system?

We hope to build on this technology by exploring how to create a fuller range of 3D geometries, perhaps by integrating traditional 3D printing into the process. In addition, we would like to chart the full design space of what we can make, for example by leveraging the full volumetric space of the laser cutter platform to create devices up to 1 [meter] or greater in length. Beyond the engineering, we are also thinking about how this kind of one-stop-shop for fabrication devices could be optimally integrated into today’s existing supply chains for manufacturing, and what challenges we may need to solve to allow for that to happen.

What kind of future do you envision for personal fabricators?

From Star Trek’s Replicators to Richie Rich’s wishing machine, the media has a long history of inspiring speculation about a machine capable of creating arbitrary “things” for people on demand. In research, the idea of a machine to make machines—such as von Neumann’s universal constructor—has also received serious attention, and researchers have in recent decades worked actively towards a long-term vision of being able to download a device file and have it fabricated at the push of a button. We hope that one day, fabricating custom hardware through personal fabrication machines will allow for as much personalization and be just as straightforward as downloading software is today.

Just before 4PM ET on February 18 (this Thursday), NASA’s Perseverance rover will attempt to land on Mars. Like its predecessor Curiosity, which has been exploring Mars since 2012, Perseverance is a semi-autonomous mobile science platform the size of a small car. It’s designed to spend years roving the red planet, looking for (among other things) any evidence of microbial life that may have thrived on Mars in the past.

This mission to Mars is arguably the most ambitious one ever launched, combining technically complex science objectives with borderline craziness that includes the launching of a small helicopter. Over the next two days, we’ll be taking an in-depth look at both that helicopter and how Perseverance will be leveraging autonomy to explore farther and faster than ever before, but for now, we’ll quickly go through all the basics about the Perseverance mission to bring you up to speed on everything that will happen later this week.

The Basics

Here’s a quick overview video of the Perseverance mission from JPL.

How is Perseverance different from Curiosity?

While the overall design of both rovers is very similar, including the radioisotope thermoelectric generator as a butt-mounted power source, Perseverance builds on the experience that JPL has with Curiosity, resulting in a larger, more durable, and more capable robot.

Image: NASA/JPL-Caltech Curiosity (left) and Perseverance (right) may look very similar, but there have been some significant design changes.

Perseverance is only a few centimeters larger than Curiosity, but is over 100kg heavier. Much of that extra chonk comes from a substantially heavier turret on the end of its robotic arm, which includes a coring drill. There are a bunch of other new science instruments as well, which we’ll get to in just a minute. Perseverance also has five more cameras than Curiosity (for a total of 23), and while its primary imaging camera is still only 2 megapixels, Perseverance’s version has a 28-100mm optical zoom. And for the first time, Perseverance will be taking along a couple of microphones so that we can hear what Mars sounds like.

Image: NASA/JPL A comparison between the wheels of Curiosity and Perseverance.

One of the problems that Curiosity has been having on Mars is wheel wear, and so Perseverance was designed with beefier wheels. Perseverance’s aluminum wheels are 1mm thicker, with a tread pattern that’s more resistant to wear caused by sharp rocks without sacrificing performance on sand. Gone, sadly, are the rectangular wheel cutouts that spell “JPL” in Morse code as Curiosity drives along. 

Perhaps the most significant difference between the two rovers in software is that Perseverance is much more autonomous than Curiosity. It’ll be able to plan its own driving paths, traveling farther every day. We’ll be covering this in more detail in a separate post.

Curiosity, by the way, is still making its way up Mount Sharp, and clocked its 3,000th day on Mars last month with a total distance traveled of over 24km.

What science will Perseverance be doing?

Straight from NASA:

The rover’s goal is to study the site in detail for its past conditions and seek signs of past life. Its mission is to identify and collect the most compelling rock core and soil samples, which a future mission could retrieve and bring back to Earth for more detailed study. Perseverance will also test technologies needed for the future human and robotic exploration of Mars.

Image: NASA/JPL Science instruments on board the Perseverance rover.

Perseverance is bringing seven science instruments to Mars, including:

  • Mastcam-Z: Color cameras capable of panoramic and stereoscopic imagery. Most of the pretty pictures of the surface of Mars that we see will probably come from these cameras.
  • SuperCam: A combination camera, rock-vaporizing laser, and spectrometer that can identify the composition of rocks and soils in areas that the rover’s arm can’t reach.
  • SHERLOC: A close range microscopic camera and spectrometer that Perseverance can move within just a few centimeters of a rock for a detailed analysis, specifically designed to detect organic molecules. SHERLOC will also be observing bits of spacesuit material to see how well they handle the Martian atmosphere over time.
  • PIXL: Another microscopic analysis tool which includes an X-ray fluorescence spectrometer to detect very small scale (like, grain of salt scale) changes in the composition and texture of rocks.
  • RIMFAX: Ground-penetrating radar that can detect water or ice as deep as 10 meters beneath the surface directly under the rover.
  • MEDA: A suite of sensors that measure temperature, pressure, humidity, wind speed and direction, and atmospheric dust characteristics.
  • MOXIE: MOXIE will try to convert Martian atmosphere (96% CO2) into useful oxygen, with carbon monoxide as a byproduct, via an electrolyzer heated to 800 degrees C, in a process that NASA says is a bit like a fuel cell running in reverse (the overall reaction is written out just after this list). Perseverance won’t be using the oxygen, but if the technology proves itself, humans may one day use it for breathable air and rocket fuel.
  • Sample Caching System: A huge chunk of Perseverance is devoted to taking samples of the Martian surface, analyzing them, and storing them. These samples will be sealed up and left on the surface, with the idea that in a decade or so, another robot will come along, scoop them up, put them into a rocket, and fire them back to Earth. That last part hasn’t really been figured out yet, but Perseverance will be taking the first step anyway.
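
For the chemically inclined, the MOXIE step boils down to splitting carbon dioxide; the overall reaction, with oxygen as the product of interest and carbon monoxide as the vented byproduct, is:

```latex
% Overall CO2-splitting reaction performed by MOXIE's electrolyzer:
% carbon dioxide in, oxygen (kept) and carbon monoxide (vented) out.
2\,\mathrm{CO_2} \;\longrightarrow\; 2\,\mathrm{CO} + \mathrm{O_2}
```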

And the last thing that Perseverance is carrying with it to Mars is an honest-to-goodness helicopter.

How does the Mars Helicopter work?

At some point after Perseverance has landed successfully, it’ll make its way to a nice flat area and deploy the Mars Helicopter, named Ingenuity. The rover will then find a safe vantage point from which to watch as Ingenuity makes five flights over a period of 30 days. The helicopter won’t be doing any science, since its primary objective is to prove that controlled autonomous flight is possible in the Martian atmosphere, but we should end up with some cool pictures.

We will also be describing Ingenuity in more detail in a future post.

Where on Mars is Perseverance landing?

The target is the bottom of Jezero crater, a 50km-diameter impact crater on the northwestern edge of a much larger impact basin called Isidis. In Slavic languages, “Jezero” means lake. Jezero was chosen because it looks (based on orbital imagery) like it was likely a huge lake in the ancient past, with rivers running into it and forming river deltas like we have here on Earth. Specifically, Perseverance is aiming to land close to the river delta in the image below:

Image: NASA/MSSS/USGS An oblique view of an ancient river delta in Jezero Crater.

The idea is that if there was life on Mars at one point, ancient lake beds and river deltas are the best place to look. 

How does the landing work?

It’s very similar to the way Curiosity landed back in 2012, including a massive parachute followed by a powered descent, and a final touchdown using a Skycrane system. The biggest improvement to the landing strategy is that for the first time, the powered descent stage will use visual localization to actively navigate to an ideal landing spot, hopefully bringing the rover closer to its target in a safe way. Here’s how everything is supposed to go:
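
To give a rough flavor of what that visual localization step means during descent (and this is a toy, not JPL’s actual terrain-relative navigation pipeline), the core idea is to register a descent-camera image against an onboard orbital map and then divert toward the nearest pre-vetted safe spot. Here is a minimal sketch using OpenCV template matching and made-up coordinates:

```python
# Illustrative-only sketch of the idea behind terrain-relative navigation:
# match what the descent camera sees against an onboard orbital map to get a
# position fix, then pick the closest entry from a list of pre-vetted safe
# landing targets. This is NOT JPL's algorithm; it's a toy using OpenCV
# template matching, a synthetic map, and invented coordinates.
import numpy as np
import cv2

def locate_in_map(descent_image, orbital_map):
    """Return the (x, y) pixel of the map that best matches the camera view."""
    result = cv2.matchTemplate(orbital_map, descent_image, cv2.TM_CCOEFF_NORMED)
    _, _, _, best_loc = cv2.minMaxLoc(result)
    h, w = descent_image.shape[:2]
    return best_loc[0] + w // 2, best_loc[1] + h // 2   # center of the match

def choose_landing_target(position, safe_targets):
    """Pick the pre-approved safe target closest to the current position fix."""
    position = np.array(position, dtype=float)
    return min(safe_targets, key=lambda t: np.linalg.norm(np.array(t) - position))

# Toy example: a crop of the synthetic map stands in for the camera view.
orbital_map = np.random.randint(0, 255, (800, 800), dtype=np.uint8)
descent_image = orbital_map[300:400, 500:600].copy()
fix = locate_in_map(descent_image, orbital_map)
target = choose_landing_target(fix, [(120, 700), (560, 340), (750, 90)])
print("position fix:", fix, "-> diverting to safe target:", target)
```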

How can I watch?

The official NASA live stream starts at 2:15pm ET on Thursday February 18.

What will actually happen on Thursday?

Most of the time, nothing will be happening, and we won’t be able to see any part of the landing happen in real time because of the signal delay between Mars and Earth. All that we can do is follow along with mission control at JPL as they receive a series of signals from Perseverance that (hopefully) confirm the successful execution of each phase of the landing. It’ll be stressful and (hopefully) exhilarating, but by the time a signal arrives at Earth, whatever it’s communicating will already have happened.
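
How long is that delay? It’s just the Earth-Mars distance divided by the speed of light, which at the time of landing works out to a bit over ten minutes each way. A quick back-of-the-envelope check, assuming a distance of roughly 200 million kilometers:

```python
# Back-of-the-envelope one-way signal delay between Mars and Earth.
# The distance is an assumption (roughly 200 million km around the time of
# the landing); the actual figure varies from ~55 to ~400 million km.
SPEED_OF_LIGHT_KM_S = 299_792.458
distance_km = 200e6

delay_s = distance_km / SPEED_OF_LIGHT_KM_S
print(f"one-way light time: {delay_s:.0f} s (~{delay_s / 60:.1f} minutes)")
# -> roughly 11 minutes, which is why the landing can only be followed
#    after the fact, signal by signal.
```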

JPL knows exactly what signals they should be receiving and when, so the most likely indication of something going wrong is just the lack of a signal when one is expected. This negative information won’t communicate much, and there are all kinds of non-catastrophic reasons why a signal may get missed. It could take minutes, hours, or days to determine what went wrong if there’s an anomaly.

If everything goes well, confirmation will be the rover sending a series of signals that says it’s successfully detached from the Skycrane and is stable and alive on the ground, at which point JPL mission control will go bananas (and so will we!). It may take some time (minutes to hours) for Perseverance to send back its first pictures from the surface, since the rover relies on communicating with other spacecraft in orbit to talk to Earth. And those pictures may not be all that great, since they’ll likely be taken with the rover’s hazard avoidance cameras that could have clear plastic protective covers still attached to them. Those glorious high resolution pictures of Mars that we’ve been enjoying from Curiosity will involve the rover unfolding its camera mast, and will take a bit longer to arrive. 

Once Perseverance is safe on Mars, the team at JPL won’t be in any immediate rush to start driving it around. If you’re watching the livestream, you can feel good hanging around for the first picture or two from the surface, but after that, things will likely slow way, way down.

After the landing, what’s the best way to follow along with the Perseverance Mission?

We’ll be following the Perseverance mission as closely as we can, and we’re looking forward to talking with JPL engineers to learn even more about rover autonomy and helicopter operations. For day to day updates, Perseverance herself is on Twitter, and the official NASA website is here.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference] RoboSoft 2021 – April 12-16, 2021 – [Online Conference] ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

It's winter in Oregon, so everything is damp, all the time. No problem for Digit!

Also the case for summer in Oregon.

[ Agility Robotics ]

While other organisms form collective flocks, schools, or swarms for such purposes as mating, predation, and protection, the Lumbriculus variegatus worms are unusual in their ability to braid themselves together to accomplish tasks that unconnected individuals cannot. A new study reported by researchers at the Georgia Institute of Technology describes how the worms self-organize to act as entangled “active matter,” creating surprising collective behaviors whose principles have been applied to help blobs of simple robots evolve their own locomotion.

No, this doesn't squick me out at all, why would it.

[ Georgia Tech ]

A few years ago, we wrote about Zhifeng Huang's jet-foot equipped bipedal robot, and he's been continuing to work on it to the point where it can now step over gaps that are an absolutely astonishing 147% of its leg length.

[ Paper ]

Thanks Zhifeng!

The Inception Drive is a novel, ultra-compact design for an Infinitely Variable Transmission (IVT) that uses nested-pulleys to adjust the gear ratio between input and output shafts. This video shows the first proof-of-concept prototype for a "Fully Balanced" design, where the spinning masses within the drive are completely balanced to reduce vibration, thereby allowing the drive to operate more efficiently and at higher speeds than achievable on an unbalanced design.

As shown in this video, the Inception Drive can change both the speed and direction of rotation of the output shaft while keeping the direction and speed of the input shaft constant. This ability to adjust speed and direction within such a compact package makes the Inception Drive a compelling choice for machine designers in a wide variety of fields, including robotics, automotive, and renewable-energy generation.

[ SRI ]

Robots with kinematic loops are known to have superior mechanical performance. However, due to these loops, their modeling and control is challenging, and prevents a more widespread use. In this paper, we describe a versatile Inverse Kinematics (IK) formulation for the retargeting of expressive motions onto mechanical systems with loops.

[ Disney Research ]

Watch Engineered Arts put together one of its Mesmer robots in a not at all uncanny way.

[ Engineered Arts ]

There's been a bunch of interesting research into vision-based tactile sensing recently; here's some from Van Ho at JAIST:

[ Paper ]

Thanks Van!

This is really more of an automated system than a robot, but these little levitating pucks are very very slick.

ACOPOS 6D is based on the principle of magnetic levitation: Shuttles with integrated permanent magnets float over the surface of electromagnetic motor segments. The modular motor segments are 240 x 240 millimeters in size and can be arranged freely in any shape. A variety of shuttle sizes carry payloads of 0.6 to 14 kilograms and reach speeds of up to 2 meters per second. They can move freely in two-dimensional space, rotate and tilt along three axes and offer precise control over the height of levitation. All together, that gives them six degrees of motion control freedom.

[ ACOPOS ]

Navigation and motion control of a robot to a destination are tasks that have historically been performed with the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots where obstacle collisions are fundamentally dangerous. However, because many soft robots have bodies that are low-inertia and compliant, obstacle contact is inherently safe. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.

[ CHARM Lab ]

The quadrotor experts at UZH have been really cranking it up recently.

Aerodynamic forces render accurate high-speed trajectory tracking with quadrotors extremely challenging. These complex aerodynamic effects become a significant disturbance at high speeds, introducing large positional tracking errors, and are extremely difficult to model. To fly at high speeds, feedback control must be able to account for these aerodynamic effects in real-time. This necessitates a modelling procedure that is both accurate and efficient to evaluate. Therefore, we present an approach to model aerodynamic effects using Gaussian Processes, which we incorporate into a Model Predictive Controller to achieve efficient and precise real-time feedback control, leading to up to 70% reduction in trajectory tracking error at high speeds. We verify our method by extensive comparison to a state-of-the-art linear drag model in synthetic and real-world experiments at speeds of up to 14m/s and accelerations beyond 4g.
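
As a rough illustration of the modeling step described above (not the authors’ actual pipeline, and with entirely synthetic data), you can fit a Gaussian Process to the residual disturbance as a function of speed and then query it from inside a predictive controller:

```python
# Toy illustration of learning aerodynamic residuals with a Gaussian Process,
# loosely in the spirit of the UZH work (their actual model, features, and MPC
# integration are more involved). All data here is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
speeds = rng.uniform(0, 14, size=(60, 1))                  # m/s
# Pretend the drag-like residual grows roughly with speed squared, plus noise.
residuals = 0.08 * speeds[:, 0] ** 2 + rng.normal(0, 0.2, 60)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(speeds, residuals)

# Inside a predictive controller, one would query the model at the planned
# speeds and subtract the predicted disturbance from the nominal dynamics.
query = np.array([[5.0], [10.0], [14.0]])
mean, std = gp.predict(query, return_std=True)
for v, m, s in zip(query[:, 0], mean, std):
    print(f"speed {v:4.1f} m/s -> predicted residual {m:5.2f} +/- {s:.2f}")
```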

[ Paper ]

I have not heard much from Harvest Automation over the last couple years and their website was last updated in 2016, but I guess they're selling robots in France, so that's good?

[ Harvest Automation ]

Last year, Clearpath Robotics introduced a ROS package for Spot which enables robotics developers to leverage ROS capabilities out-of-the-box. Here at OTTO Motors, we thought it would be a compelling test case to see just how easy it would be to integrate Spot into our test fleet of OTTO materials handling robots.

[ OTTO Motors ]

Video showcasing recent robotics activities at PRISMA Lab, coordinated by Prof. Bruno Siciliano, at Università di Napoli Federico II.

[ PRISMA Lab ]

Thanks Fan!

State estimation framework developed by the team CoSTAR for the DARPA Subterranean Challenge, where the team achieved 2nd and 1st places in the Tunnel and Urban circuits.

[ Paper ]

Highlights from the 2020 ROS Industrial conference.

[ ROS Industrial ]

Thanks Thilo!

Not robotics, but entertaining anyway. From the CHI 1995 Technical Video Program, "The Tablet Newspaper: a Vision for the Future."

[ CHI 1995 ]

This week's GRASP on Robotics seminar comes from Allison Okamura at Stanford, on “Wearable Haptic Devices for Ubiquitous Communication."

Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. We explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.

[ UPenn ]

Over the past few weeks, we’ve seen a couple of new robots from Hyundai Motor Group. This is a couple more robots than I think I’ve seen from Hyundai Motor Group, like, ever. We’re particularly interested in them right now mostly because Hyundai Motor Group is the new owner of Boston Dynamics, and so far, these robots represent one of the most explicit indications we’ve got about exactly what Hyundai Motor Group wants its robots to be doing.

We know it would be a mistake to read too much into these new announcements, but we can’t help reading something into them, right? So let’s take a look at what Hyundai Motor Group has been up to recently. This first robot is DAL-e, what HMG is calling an “Advanced Humanoid Robot.”

According to Hyundai, DAL-e is “designed to pioneer the future of automated customer services,” and is equipped with “state-of-the-art artificial intelligence technology for facial recognition as well as an automatic communication system based on a language-comprehension platform.” You’ll find it in car showrooms, but only in Seoul, for now.

We don’t normally write about robots like these because they tend not to represent much that’s especially new or interesting in terms of robotic technology, capabilities, or commercial potential. There’s certainly nothing wrong with DAL-e—it’s moderately cute and appears to be moderately functional. We’ve seen other platforms (like Pepper) take on similar roles, and our impression is that the long-term cost effectiveness of these greeter robots tends to be somewhat limited. And unless there’s some hidden functionality that we’re not aware of, this robot doesn’t really seem to be pushing the envelope, but we’d love to be wrong about that.

The other new robot, announced yesterday, is TIGER (Transforming Intelligent Ground Excursion Robot). It’s a bit more interesting, although you’ll have to skip ahead about 1:30 in the video to get to it.

We’ve talked about how adding wheels can make legged robots faster and more efficient, but I’m honestly not sure that it works as well going the other way, by adding legs to wheeled robots. Rather than adding a little complexity to get a multi-modal system that you can use much of the time, you’re adding a lot of complexity to get a multi-modal system that you’re only going to use sometimes.

You could argue, as perhaps Hyundai would, that the multi-modal system is critical to get TIGER to do what they want it to do, which seems to be primarily remote delivery. They mention operating in urban areas as well, where TIGER could use its legs to climb stairs, but I think it would be beaten by more traditional wheeled platforms, or even whegged platforms, that are almost as capable while being much simpler and cheaper. For remote delivery, though, legs might be a necessary feature.

That is, if you assume that using a ground-based system is really the best way to go.

The TIGER concept can be integrated with a drone to transport it from place to place, so why not just use the drone to make the remote delivery instead? I guess maybe if you’re dealing with a thick tree canopy, the drone could drop TIGER off in a clearing and the robot could drive to its destination, but now we’re talking about developing a very complex system for a very specific use case. Even though Hyundai has said that they’re going to attempt to commercialize TIGER over the next five years, I think it’ll be tricky for them to successfully do so.

The best part about these robots from Hyundai is that between the two of them, they suggest that the company is serious about developing commercial robots as well as willing to invest in something that seems a little crazy. And you know who else is both of those things? Boston Dynamics. To be clear, it’s almost certain that both of Hyundai’s robots were developed well before the company was even thinking about acquiring Boston Dynamics, so the real question is: Where do these two companies go from here?

Good as some drones are becoming at obstacle avoidance, accidents do still happen. And as far as robots go, drones are very much on the fragile side of things. Any sort of significant contact between a drone and almost anything else usually results in a catastrophic, out-of-control spin followed by a death plunge to the ground. Bad times. Bad, expensive times.

A few years ago, we saw some interesting research into software that can keep the most common drone form factor, the quadrotor, aloft and controllable even after the failure of one motor. The big caveat to that software was that it relied on GPS for state estimation, meaning that without a GPS signal, the drone is unable to get the information it needs to keep itself under control. In a paper recently accepted to RA-L, researchers at the University of Zurich report that they have developed a vision-based system that brings state estimation completely on-board. The upshot: potentially any drone with some software and a camera can keep itself safe even under the most challenging conditions.

A few years ago, we wrote about first author Sihao Sun’s work on high-speed controlled flight of a quadrotor with a non-functional motor. But that innovation relied on an external motion capture system. Since then, Sun has moved from TU Delft to Davide Scaramuzza’s lab at UZH, and it looks like he’s been able to combine his work on controlled spinning flight with the Robotics and Perception Group’s expertise in vision. Now, a downward-facing camera is all it takes for a spinning drone to remain stable and controllable:

Remember, this software isn’t just about guarding against motor failure, because drone motors themselves don’t just up and fail all that often. But motors and propellers do represent the most likely point of failure for any drone, usually because when you run into something, the damage to a motor or propeller is what causes the loss of control that ultimately crashes your drone.

Earlier solutions relied on GPS because the spinning drone needs a method of state estimation—that is, in order to be closed-loop controllable, the drone needs to have a reasonable understanding of what its position is and how that position is changing over time. GPS is an easy way to take care of this, but GPS is also an external system that doesn’t work everywhere. Having a state estimation system that’s completely internal to the drone itself is much more fail-safe, and Sun got his onboard system to work through visual feature tracking with a downward-facing camera, even as the drone spins at over 20 rad/s.
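
If you’re curious what visual feature tracking with a downward-facing camera looks like in its most basic form, here is a bare-bones sketch using OpenCV’s Lucas-Kanade tracker. The actual UZH estimator is far more sophisticated (and has to cope with the extreme rotation and fuse IMU data), but tracking ground features from frame to frame is the basic ingredient:

```python
# Bare-bones sketch of tracking ground features between consecutive frames of
# a downward-facing camera with Lucas-Kanade optical flow (OpenCV). The real
# UZH estimator fuses such measurements with inertial data in a full state
# estimator; this only shows the feature-tracking ingredient.
import numpy as np
import cv2

def track_features(prev_gray, curr_gray):
    """Detect corners in the previous frame and track them into the current one."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, 200, 0.01, 10)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), curr_pts[good].reshape(-1, 2)

# Synthetic example: the "ground" shifts by a few pixels between frames,
# standing in for camera motion.
frame0 = np.random.randint(0, 255, (240, 320), dtype=np.uint8)
frame1 = np.roll(frame0, shift=(3, 5), axis=(0, 1))
p0, p1 = track_features(frame0, frame1)
if len(p0):
    flow = (p1 - p0).mean(axis=0)
    print(f"tracked {len(p0)} features, mean image motion: {flow}")
```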

While the system works well enough with a regular downward-facing camera—something that many consumer drones are equipped with for stabilization purposes—replacing it with an event camera (you remember event cameras, right?) makes the performance even better, especially in low light. 

For more details on this, including what you’re supposed to do with a rapidly spinning partially disabled quadrotor (as well as what it’ll take to make this a standard feature on consumer hardware), we spoke with Sihao Sun via email.

IEEE Spectrum: What usually happens when a drone spinning this fast lands? Is there any way to do it safely?

Sihao Sun: Our experience shows that we can safely land the drone while it is spinning. When the range sensor measurements are lower than a threshold (around 10 cm, indicating that the drone is close to the ground), we switch off the rotors. During the landing procedure, despite the fast spinning motion, the thrust direction oscillates around the gravity vector, thus the drone touches the ground with its legs without damaging other components.

Can your system handle more than one motor failure?

Yes, the system can also handle the failure of two opposing rotors. However, if two adjacent rotors or more than two rotors fail, our method cannot save the quadrotor. Some research has shown that it is possible to control a quadrotor with only one remaining rotor. But the drone requires a very special inertial property, which is hard to satisfy in real applications.

How different is your system's performance from a similar system that relies on GPS, in a favorable environment?

In a favorable environment, our system outperforms those relying on GPS signals because it obtains better position estimates. Since a damaged quadrotor spins fast, the accelerometer readings are largely affected by centrifugal forces. When the GPS signal is lost or degraded, a drone relying on GPS needs to integrate these biased accelerometer measurements for position estimation, leading to large position estimation errors. Feeding these erroneous estimates to the flight controller can easily crash the drone.
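
Sun’s point about integrating biased accelerometer readings is easy to see with a little arithmetic: a constant acceleration error of b becomes a position error of 0.5 × b × t² after double integration, so even a modest centrifugal bias blows up quickly. With a made-up bias value:

```python
# Why integrating a biased accelerometer is so unforgiving: a constant
# acceleration error b turns into a position error of 0.5 * b * t**2.
# The bias value here is just an example, not a measured number.
bias = 0.5       # m/s^2 of un-modeled (e.g. centrifugal) acceleration error
for t in [1, 5, 10]:
    print(f"after {t:2d} s: position error ~ {0.5 * bias * t**2:6.1f} m")
# after 1 s: ~0.2 m, after 5 s: ~6.2 m, after 10 s: ~25 m -- easily enough
# to crash a drone that is relying on those estimates to stay level.
```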

When you say that your solution requires “only onboard sensors and computation,” are those requirements specialized, or would they be generally compatible with the current generation of recreational and commercial quadrotors?

We use an NVIDIA Jetson TX2 to run our solution, which includes two parts: the control algorithm and the vision-based state estimation algorithm. The control algorithm is lightweight; thus, we believe that it is compatible with the current generation of quadrotors. On the other hand, the vision-based state estimation requires relatively more computational resources, which may not be affordable for cheap recreational platforms. But this is not an issue for commercial quadrotors because many of them have more powerful processors than a TX2.

What else can event cameras be used for, in recreational or commercial applications?

Many drone applications can benefit from event cameras, especially those in high-speed or low-light conditions, such as autonomous drone racing, cave exploration, drone delivery during night time, etc. Event cameras also consume very little power, which is a significant advantage for energy-critical missions, such as planetary aerial vehicles for Mars explorations. Regarding space applications, we are currently collaborating with JPL to explore the use of event cameras to address the key limitations of standard cameras for the next Mars helicopter.

[ UZH RPG ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference] RoboSoft 2021 – April 12-16, 2021 – [Online Conference] ICRA 2021 – May 30-June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

Engineered Arts' latest Mesmer entertainment robot is Cleo. It sings, gesticulates, and even does impressions. 

[ Engineered Arts ]

I do not know what this thing is or what it's saying but Panasonic is going to be selling them and I will pay WHATEVER. IT. COSTS.

Slightly worrisome is that Google Translate persistently thinks that part of the description involves "sleeping and flatulence."

[ Panasonic ] via [ RobotStart ]

Spot Enterprise is here to help you safely ignore every alarm that goes off at work while you're snug at home in your jammies drinking cocoa.

That Spot needs a bath.

If you missed the launch event (with more on the arm), check it out here:

[ Boston Dynamics ]

PHASA-35, a 35m wingspan solar-electric aircraft, successfully completed its maiden flight in Australia in February 2020. Designed to operate unmanned in the stratosphere, above the weather and conventional air traffic, PHASA-35 offers a persistent and affordable alternative to satellites combined with the flexibility of an aircraft, which could be used for a range of valuable applications including forest fire detection and maritime surveillance.

[ BAE Systems ]

As part of the Army Research Lab’s (ARL) Robotics Collaborative Technology Alliance (RCTA), we are developing new planning and control algorithms for quadrupedal robots. The goal of our project is to equip the robot LLAMA, developed by NASA JPL, with the skills it needs to move at operational tempo over difficult terrain to keep up with a human squad. This requires innovative perception, planning, and control techniques to make the robot both precise in execution for navigating technical obstacles and robust enough to reject disturbances and recover from unknown errors.

[ IHMC ]

Watch what happens to this drone when it tries to install a bird diverter on a high voltage power line:

[ GRVC ]

Soldiers navigate a wide variety of terrains to successfully complete their missions. As human/agent teaming and artificial intelligence advance, the same flexibility will be required of robots to maneuver across diverse terrain and become effective combat teammates.

[ Army ]

The goal of the GRIFFIN project is to create something like a robotic bird, which almost certainly won't look like this concept rendering.

While I think this research is great, at what point is it in fact easier to just, you know, train an actual bird?

[ GRIFFIN ]

Paul Newman narrates this video from two decades ago, which is a pretty neat trick.

[ Oxford Robotics Institute ]

The first step towards a LEGO-based robotic McMuffin creator is cracking and separating eggs.

[ Astonishing Studios ] via [ BB ]

Some interesting soft robotics projects at the University of Southern Denmark.

[ SDU ]

Chong Liu introduces Creature_02, his final presentation for Hod Lipson's Robotics Studio course at Columbia.

[ Chong Liu ]

The world needs more robot blimps.

[ Lab INIT Robots ]

Finishing its duty early, the KR CYBERTECH nano uses this time to play basketball.

[ Kuka ]

senseFly has a new aerial surveying drone that they call "affordable," although they don't say what the price is.

[ senseFly ]

In summer 2020, several science teams from ETH Zurich took part in the "Art Safiental" in the mountains of Graubünden. After the scientists packed their hiking gear and their robots, their only mission was "over hill and dale to the summit." How difficult is it to reach the summit with a legged robot and an exoskeleton? What is the relationship between synesthetic dance and robotics? How will the hikers react to these projects?

[ Rienerschnitzel Films ]

Thanks Robert!

Karen Liu: How robots perceive the physical world. A specialist in computer animation expounds upon her rapidly evolving specialty, known as physics-based simulation, and how it is helping robots become more physically aware of the world around them.

[ Stanford ]

This week's UPenn GRASP On Robotics seminar is by Maria Chiara Carrozza from Scuola Superiore Sant’Anna, on "Biorobotics for Personal Assistance – Translational Research and Opportunities for Human-Centered Developments."

The seminar will focus on the opportunities and challenges offered by the digital transformation of healthcare, which was accelerated by the COVID-19 pandemic. In this framework, rehabilitation and social robotics can play a fundamental role as enabling technologies for providing innovative therapies and services to patients, even at home or in remote environments.

[ UPenn ]

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment. 

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?

Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:

This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.

IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?

Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs. 

We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.

When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?

All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks. 

One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.
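
To make that tap-an-object-and-let-the-robot-do-the-rest workflow concrete, here is a purely hypothetical sketch of how such a pipeline could be structured. None of these types or function names come from Boston Dynamics’ SDK; it is just the general shape of a semi-autonomous grasp request:

```python
# Purely hypothetical sketch of the tap-to-grasp flow described above. None of
# these names come from Boston Dynamics' SDK; DummyRobot just stubs out the
# steps so the overall structure is clear and the file runs as-is.
from dataclasses import dataclass

@dataclass
class TapRequest:
    pixel_x: int      # where the operator tapped on the tablet image
    pixel_y: int
    camera: str       # which camera the image came from

class DummyRobot:
    """Stand-in for a mobile manipulator; every method is a placeholder."""
    def deproject_pixel(self, camera, x, y):
        return (1.5, 0.2, 0.8)                      # fake 3D target, meters

    def walk_to_within_reach(self, target):
        print(f"walking toward {target}")

    def capture_gripper_depth(self):
        return "depth_map"                          # placeholder sensor data

    def plan_grasp(self, depth_map, target):
        return {"target": target, "approach": "top-down"}

    def execute_grasp(self, grasp):
        print(f"executing grasp: {grasp}")
        return True

def semi_autonomous_grasp(robot, tap: TapRequest):
    target = robot.deproject_pixel(tap.camera, tap.pixel_x, tap.pixel_y)
    robot.walk_to_within_reach(target)              # base positioning
    depth = robot.capture_gripper_depth()           # sense with gripper camera
    grasp = robot.plan_grasp(depth, target)         # plan on the depth map
    return robot.execute_grasp(grasp)               # operator can abort anytime

semi_autonomous_grasp(DummyRobot(), TapRequest(pixel_x=320, pixel_y=240, camera="front"))
```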

The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.

So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?

There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on. 

The video of Spot digging was pretty cool—how did that work?

That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.

The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks? 

A lot of the things, like “pick the stuff up in the room,” or ‘turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that— not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven't thought of.

Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.

Is Spot’s arm safe?

You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.

We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?

You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”

It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that. 

Photo: Boston Dynamics

There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down. 
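
Fiducial-based docking of this sort generally starts with detecting a known marker in a camera image and computing the robot’s pose relative to it. As a generic illustration (this is not Boston Dynamics’ code, and it assumes an AprilTag-style marker plus a reasonably recent opencv-contrib build), the detection step looks roughly like this:

```python
# Generic illustration of fiducial-marker detection with OpenCV's ArUco module
# (needs opencv-contrib-python, OpenCV >= 4.7 API). This is not Boston
# Dynamics' docking code; it just shows the usual first step: find the marker
# and its corner pixels, from which solvePnP could recover the relative pose.
import numpy as np
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# Synthetic test image: render marker id 3 onto a white background.
marker = cv2.aruco.generateImageMarker(dictionary, 3, 200)
image = np.full((400, 400), 255, dtype=np.uint8)
image[100:300, 100:300] = marker

corners, ids, _ = detector.detectMarkers(image)
print("detected fiducial ids:", None if ids is None else ids.ravel())
# With the corner pixels, camera intrinsics, and the marker's physical size,
# cv2.solvePnP yields the marker pose that a docking controller would servo on.
```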

During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.

The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.” 

Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into "Stair Mode" with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use Stair Mode, and also encouraged to send Spot up and down stairs with its "head" pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in Stair Mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in Stair Mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.

Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”

Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that. 

There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference] RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today's videos.

We’re proud to announce Starship Delivery Robots have now completed 1,000,000 autonomous deliveries around the world. We were unsure where the one millionth delivery was going to take place, as there are around 15-20 service areas open globally, all with robots doing deliveries every minute. In the end, it took place in Bowling Green, Ohio, to a student called Annika Keeton, who is a freshman studying pre-health Biology at BGSU. Annika is now part of Starship’s history!

[ Starship ]

I adore this little DIY walking robot. With modular feet and little dials to let you easily adjust the walking parameters, it’s an affordable kit that’s way more nuanced than most.

It's called Bakiwi, and it costs €95. A squee cover made from feathers or fur is an extra €17. Here's a more serious look at what it can do:

[ Bakiwi ]

Thanks Oswald!

Savva Morozov, an AeroAstro junior, works on autonomous navigation for the MIT mini cheetah robot and reflects on the value of a crowded Infinite Corridor.

[ MIT ]

The world's most advanced haptic feedback gloves just got a huge upgrade! HaptX Gloves DK2 achieves a level of realism that other haptic devices can't match. Whether you’re training your workforce, designing a new product, or controlling robots from a distance, HaptX Gloves make it feel real.

They’re the only gloves with true-contact haptics, with patented technology that displaces your skin the same way a real object would, providing 133 points of tactile feedback per hand for full palm and fingertip coverage. HaptX Gloves DK2 feature the industry’s most powerful force feedback, ~2X the strength of other force feedback gloves. They’re also the most accurate motion tracking gloves, with 30 tracked degrees of freedom, sub-millimeter precision, no perceivable latency, and no occlusion.

[ HaptX ]

Yardroid is an outdoor robot "guided by computer vision and artificial intelligence" that seems like it can do almost everything.

These are a lot of autonomous capabilities, but so far, we've only seen the video. So, best not to get too excited until we know more about how it works.

[ Yardroid ]

Thanks Dan!

Since as far as we know, Pepper can't spread COVID, it had a busy year.

I somehow missed seeing that chimpanzee magic show, but here it is:

[ Simon Pierro ] via [ SoftBank Robotics ]

In spite of the pandemic, Professor Hod Lipson’s Robotics Studio persevered and even thrived— learning to work on global teams, to develop protocols for sharing blueprints and code, and to test, evaluate, and refine their designs remotely. Equipped with a 3D printer and a kit of electronics prototyping equipment, our students engineered bipedal robots that were conceptualized, fabricated, programmed, and endlessly iterated around the globe in bedrooms, kitchens, backyards, and any other makeshift laboratory you can imagine.

[ Hod Lipson ]

Thanks Fan!

We all know how much quadrupeds love ice!

[ Ghost Robotics ]

We took the opportunity of the last storm to put the Warthog in the snow of Université Laval. Enjoy!

[ Norlab ]

They've got a long way to go, but autonomous indoor firefighting drones seem like a fantastic idea.

[ CTU ]

Individual manipulators are limited by their vertical total load capacity. This places a fundamental limit on the weight of loads that a single manipulator can move. Cooperative manipulation with two arms has the potential to increase the net weight capacity of the overall system. However, it is critical that proper load sharing takes place between the two arms. In this work, we outline a method that utilizes mechanical intelligence in the form of a whiffletree.

And your word of the day is whiffletree, which is "a mechanism to distribute force evenly through linkages."
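
The mechanics behind that definition are just lever arms: hang the payload from a bar that spans the two manipulators, and each arm’s share of the load is set by how far the attachment point sits from it. A tiny check with made-up numbers:

```python
# Static load split through a simple whiffletree bar: if the payload hangs a
# distance `a` from arm 1's end of the bar and `b` from arm 2's end, each arm
# carries a share inversely proportional to its distance (take moments about
# either end to see it). Numbers are made up for illustration.
def whiffletree_split(weight, a, b):
    f1 = weight * b / (a + b)   # force carried by arm 1
    f2 = weight * a / (a + b)   # force carried by arm 2
    return f1, f2

print(whiffletree_split(100.0, 0.3, 0.3))   # centered attachment: a 50/50 split
print(whiffletree_split(100.0, 0.2, 0.4))   # closer to arm 1: arm 1 takes two thirds
```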

[ DART Lab ]

Thanks Raymond!

Some highlights of robotic projects at FZI in 2020, all using ROS.

[ FZI ]

Thanks Fan!

iRobot CEO Colin Angle threatens my job by sharing some cool robots.

[ iRobot ]

A fascinating new talk from Henry Evans on robotic caregivers.

[ HRL ]

The ANA Avatar XPRIZE semifinals selection submission for Team AVATRINA. The setting is a mock clinic, with the patient sitting on a wheelchair and nurse having completed an initial intake. Avatar enters the room controlled by operator (Doctor). A rolling tray table with medical supplies (stethoscope, pulse oximeter, digital thermometer, oxygen mask, oxygen tube) is by the patient’s side. Demonstrates head tracking, stereo vision, fine manipulation, bimanual manipulation, safe impedance control, and navigation.

[ Team AVATRINA ]

This five-year-old talk from Mikell Taylor, who wrote for us a while back and is now at Amazon Robotics, is entitled "Nobody Cares About Your Robot." For better or worse, it really doesn't sound like it was written five years ago.

Robotics for the consumer market - Mikell Taylor from Scott Handsaker on Vimeo.

[ Mikell Taylor ]

Fall River Community Media presents this wonderful guy talking about his love of antique robot toys.

If you enjoy this kind of slow media, Fall River also has weekly Hot Dogs Cool Cats adoption profiles that are super relaxing to watch.

[ YouTube ]

Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.

Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), and the Wyss Institute for Biologically Inspired Engineering. The results could improve the performance and dependability of not just underwater robots, but other vehicles that require decentralized locomotion and organization, such as self-driving cars and robotic space exploration. 

The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots. 

“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the "Bluebot" robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer, and four controllable fins within a 3D-printed hull. The fish-lens cameras detect the LEDs of fellow swimmers, and a custom algorithm calculates distance, direction, and heading from those detections.

Based on that simple production and detection of LED light, the team proved that Blueswarm could self-organize behaviors, including aggregation, dispersal and circle formation—basically, swimming in a clockwise synchronization. Researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates. 
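
To make the dispersal-and-aggregation behavior concrete, here is a minimal sketch of the kind of per-robot decision rule described above. It is our own illustration, not the team's code: the NeighborSighting fields and the choose_action helper are invented stand-ins for Blueswarm's onboard perception and fin control.

    # Hypothetical sketch of a Blueswarm-style implicit-coordination rule (not the
    # authors' implementation). Every robot runs the same function on what its own
    # cameras report; there is no central controller.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class NeighborSighting:
        bearing_deg: float   # direction to a neighbor's blue LEDs
        distance_m: float    # estimated from the LED spacing in the camera image
        flashing: bool       # True if that neighbor is signaling a find

    def choose_action(sees_red_light: bool,
                      neighbors: List[NeighborSighting]) -> Tuple[str, Optional[float]]:
        """Return (behavior, heading in degrees or None) for one control step."""
        if sees_red_light:
            return ("signal", None)                   # flash own LEDs and hold position
        flashers = [n for n in neighbors if n.flashing]
        if flashers:                                  # a neighbor found it: aggregate
            return ("aggregate", flashers[0].bearing_deg)
        if neighbors:                                 # keep dispersing: swim away from the nearest neighbor
            nearest = min(neighbors, key=lambda n: n.distance_m)
            return ("disperse", (nearest.bearing_deg + 180.0) % 360.0)
        return ("explore", None)                      # nothing in view yet

    # Example: one neighbor is flashing, so this robot heads toward it.
    print(choose_action(False, [NeighborSighting(30.0, 1.2, True)]))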

“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wifi module to allow uploading new behaviors remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots,” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space. But a 3D environment like air or water posed a tougher challenge for sensing and movement.

In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics. 

“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

Berlinger adds the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to exploration of faraway planets, where poor latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions— avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.

“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.

The miniature robots could also work long hours in places that are inaccessible to humans and divers, or even to large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work their way into tiny places without disturbing fragile equipment or ecosystems. 

“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said.

Research into robotic sensing has, understandably I guess, been very human-centric. Most of us navigate and experience the world visually and in 3D, so robots tend to get covered with things like cameras and lidar. Touch is important to us, as is sound, so robots are getting pretty good with understanding tactile and auditory information, too. Smell, though? In most cases, smell doesn’t convey nearly as much information for us, so while it hasn’t exactly been ignored in robotics, it certainly isn’t the sensing modality of choice in most cases.

Part of the problem with smell sensing is that we just don't have a good way of doing it from a technical perspective. This has been a challenge for a long time, and it's why we either bribe or trick animals like dogs, rats, and vultures into being our sensing systems for airborne chemicals. If only they'd do exactly what we wanted them to do all the time, this would be fine, but they don't, so it's not. 

Until we get better at making chemical sensors, leveraging biology is the best we can do, and what would be ideal would be some sort of robot-animal hybrid cyborg thing. We’ve seen some attempts at remote controlled insects, but as it turns out, you can simplify things if you don’t use the entire insect, but instead just find a way to use its sensing system. Enter the Smellicopter.

There’s honestly not too much to say about the drone itself. It’s an open-source drone project called Crazyflie 2.0, with some additional off the shelf sensors for obstacle avoidance and stabilization. The interesting bits are a couple of passive fins that keep the drone pointed into the wind, and then the sensor, called an electroantennogram.

[Image: UW] The drone's sensor, called an electroantennogram, consists of a "single excised antenna" from a Manduca sexta hawkmoth and a custom signal-processing circuit.

To make one of these sensors, you just, uh, "harvest" an antenna from a live hawkmoth. Obligingly, the moth antenna is hollow, meaning that you can stick electrodes up it. Whenever the olfactory neurons in the antenna (which is still technically alive even though it's not attached to the moth anymore) encounter an odor that they're looking for, they produce an electrical signal that the electrodes pick up. Plug the other ends of the electrodes into a voltage amplifier and filter, run it through an analog-to-digital converter, and you've got a chemical sensor that weighs just 1.5 grams and consumes only 2.7 mW of power. It's significantly more sensitive than a conventional metal-oxide odor sensor, in a much smaller and more efficient form factor, making it ideal for drones. 
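
For a rough sense of what the digitized end of that chain looks like, here is a minimal sketch; it is our own illustration, not the UW signal-processing circuit, and the window size and threshold are invented numbers. It flags an odor hit when the sampled antenna voltage deflects well beyond its recent baseline.

    # Hypothetical sketch of turning the amplified, filtered, ADC-sampled
    # electroantennogram signal into a binary "odor detected" flag.

    from collections import deque

    def make_detector(window: int = 200, threshold_mv: float = 0.5):
        baseline = deque(maxlen=window)          # recent samples used as a noise baseline
        def detect(sample_mv: float) -> bool:
            ref = sum(baseline) / len(baseline) if baseline else sample_mv
            baseline.append(sample_mv)
            return abs(sample_mv - ref) > threshold_mv
        return detect

    # Example with synthetic samples: quiet baseline, then an odor-evoked deflection.
    detect = make_detector()
    samples = [0.02, -0.01, 0.03, 0.00, 1.40, 1.10]   # millivolts, invented numbers
    print([detect(s) for s in samples])               # -> [False, False, False, False, True, True]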

To localize an odor, the Smellicopter uses a simple bioinspired approach called crosswind casting, which involves moving laterally left and right and then forward when an odor is detected. Here’s how it works:

The vehicle takes off to a height of 40 cm and then hovers for ten seconds to allow it time to orient upwind. The smellicopter starts casting left and right crosswind. When a volatile chemical is detected, the smellicopter will surge 25 cm upwind, and then resume casting. As long as the wind direction is fairly consistent, this strategy will bring the insect or robot increasingly closer to a singular source with each surge.
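
Here is a minimal sketch of that cast-and-surge logic, written as a generator of wind-relative setpoints. This is our own illustration rather than the published Smellicopter controller; the 25 cm surge comes from the description above, while the 0.4 m cast width is an invented parameter.

    # Hypothetical sketch of cast-and-surge. Setpoints are (upwind_m, crosswind_m)
    # offsets relative to the wind direction that the passive fins keep the vehicle
    # pointed into.

    import itertools

    def cast_and_surge(odor_detected, cast_width_m=0.4, surge_m=0.25):
        """Yield (upwind_m, crosswind_m) setpoints, one per control step."""
        yield (0.0, 0.0)                        # hover after takeoff while orienting upwind
        for side in itertools.cycle((+1, -1)):  # alternate left/right crosswind casts
            if odor_detected():
                yield (surge_m, 0.0)            # odor hit: surge upwind, then resume casting
            else:
                yield (0.0, side * cast_width_m)

    # Example with a fake sensor that fires once, on the third query.
    hits = iter([False, False, True])
    planner = cast_and_surge(lambda: next(hits, False))
    print([next(planner) for _ in range(5)])
    # -> [(0.0, 0.0), (0.0, 0.4), (0.0, -0.4), (0.25, 0.0), (0.0, -0.4)]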

Since odors are airborne, they need a bit of a breeze to spread very far, and the Smellicopter won’t be able to detect them unless it’s downwind of the source. But, that’s just how odors work— even if you’re right next to the source, if the wind is blowing from you towards the source rather than the other way around, you might not catch a whiff of it.

There are a few other constraints to keep in mind with this sensor as well. First, rather than detecting something useful (like explosives), it’s going to detect the smells of pretty flowers, because moths like pretty flowers. Second, the antenna will literally go dead on you within a couple hours, since it only functions while its tissues are alive and metaphorically kicking. Interestingly, it may be possible to use CRISPR-based genetic modification to breed moths with antennae that do respond to useful smells, which would be a neat trick, and we asked the researchers—Melanie Anderson, a doctoral student of mechanical engineering at the University of Washington, in Seattle; Thomas Daniel, a UW professor of biology; and Sawyer Fuller, a UW assistant professor of mechanical engineering—about this, along with some other burning questions, via email. 

IEEE Spectrum, asking the important questions first: So who came up with "Smellicopter"?

Melanie Anderson: Tom Daniel coined the term "Smellicopter". Another runner up was "OdorRotor"! 

In general, how much better are moths at odor localization than robots?  

Melanie Anderson: Moths are excellent at odor detection and odor localization and need to be in order to find mates and food. Their antennae are much more sensitive and specialized than any portable man-made odor sensor. We can't ask the moths how exactly they search for odors so well, but being able to have the odor sensitivity of a moth on a flying platform is a big step in that direction.

Tom Daniel: Our best estimate is that they outperform robotic sensing by at least three orders of magnitude.

How does the localization behavior of the Smellicopter compare to that of a real moth? 

Anderson: The cast-and-surge odor search strategy is a simplified version of what we believe the moth (and many other odor searching animals) are doing. It is a reactive strategy that relies on the knowledge that if you detect odor, you can assume that the source is somewhere up-wind of you. When you detect odor, you simply move upwind, and when you lose the odor signal you cast in a cross-wind direction until you regain the signal. 

Can you elaborate on the potential for CRISPR to be able to engineer moths for the detection of specific chemicals?  

Anderson: CRISPR is already being used to modify the odor detection pathways in moth species. It is one of our future efforts to use this specifically to make the antennae sensitive to other chemicals of interest, such as the chemical scent of explosives. 

Sawyer Fuller: We think that one of the strengths of using a moth's antenna, in addition to its speed, is that it may provide a path to both high chemical specificity as well as high sensitivity. By expressing a preponderance of only one or a few chemosensors, we are anticipating that a moth antenna will give a strong response only to that chemical. There are several efforts underway in other research groups to make such specific, sensitive chemical detectors. Chemical sensing is an area where biology exceeds man-made systems in terms of efficiency, small size, and sensitivity. So that's why we think that the approach of trying to leverage biological machinery that already exists has some merit.

You mention that the antennae's lifespan can be extended for a few days with ice. How feasible do you think this technology is outside of a research context?

Anderson: The antennae can be stored in tiny vials in a standard refrigerator or just with an ice pack to extend their life to about a week. Additionally, the process for attaching the antenna to the electrical circuit is a teachable skill. It is definitely feasible outside of a research context.

Considering the trajectory that sensor development is on, how long do you think that this biological sensor system will outperform conventional alternatives?  

Anderson:  It's hard to speak toward what will happen in the future, but currently, the moth antenna still stands out among any commercially-available portable sensors.

There have been some experiments with cybernetic insects; what are the advantages and disadvantages of your approach, as opposed to (say) putting some sort of tracking system on a live moth?

Daniel: I was part of a cyber insect team a number of years ago.  The challenge of such research is that the animal has natural reactions to attempts to steer or control it.  

Anderson: While moths are better at odor tracking than robots currently, the advantage of the drone platform is that we have control over it. We can tell it to constrain the search to a certain area, and return after it finishes searching. 

What can you tell us about the health, happiness, and overall welfare of the moths in your experiments?

Anderson: The moths are cold anesthetized before the antennae are removed. They are then frozen so that they can be used for teaching purposes or in other research efforts. 

What are you working on next?

Daniel: The four big efforts are (1) CRISPR modification, (2) experiments aimed at improving the longevity of the antennal preparation, (3) improved measurements of antennal electrical responses to odors combined with machine learning to see if we can classify different odors, and (4) flight in outdoor environments.

Fuller: The moth's antenna sensor gives us a new ability to sense with a much shorter latency than was previously possible with similarly-sized sensors (e.g. semiconductor sensors). What exactly a robot agent should do to best take advantage of this is an open question. In particular, I think the speed may help it to zero in on plume sources in complex environments much more quickly. Think of places like indoor settings with flow down hallways that splits out at doorways, and in industrial settings festooned with pipes and equipment. We know that it is possible to search out and find odors in such scenarios, as anybody who has had to contend with an outbreak of fruit flies can attest. It is also known that these animals respond very quickly to sudden changes in odor that is present in such turbulent, patchy plumes. Since it is hard to reduce such plumes to a simple model, we think that machine learning may provide insights into how to best take advantage of the improved temporal plume information we now have available.

Tom Daniel also points out that the relative simplicity of this project (now that the UW researchers have it all figured out, that is) means that even high school students could potentially get involved in it, even if it’s on a ground robot rather than a drone. All the details are in the paper that was just published in Bioinspiration & Biomimetics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online] RoboSoft 2021 – April 12-16, 2021 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

A new parent STAR robot is presented. The parent robot has a tail on which the child robot can climb. By collaborating together, the two robots can reach locations that neither can reach on its own.

The parent robot can also supply the child robot with energy by recharging its batteries. The parent STAR can dispatch and retrieve the child STAR automatically (when aligned). The robots are fitted with sensors and controllers and have automatic capabilities, but make no decisions on their own.

[ Bio-Inspired and Medical Robotics Lab ]

How TRI trains its robots.

[ TRI ]

The only thing more satisfying than one SCARA robot is two SCARA robots working together.

[ Fanuc ]

I'm not sure that this is strictly robotics, but it's so cool that it's worth a watch anyway.

[ Shinoda & Makino Lab ]

Flying insects heavily rely on optical flow for visual navigation and flight control. Roboticists have endowed small flying robots with optical flow control as well, since it requires just a tiny vision sensor. However, when using optical flow, the robots run into two problems that insects appear to have overcome. Firstly, since optical flow only provides mixed information on distances and velocities, using it for control leads to oscillations when getting closer to obstacles. Secondly, since optical flow provides very little information on obstacles in the direction of motion, it is hardest to detect obstacles that the robot is actually going to collide with! We propose a solution to these problems by means of a learning process.

[ Nature ]

A new Guinness World Record was set on Friday in north China for the longest animation performed by 600 unmanned aerial vehicles (UAVs).

[ Xinhua ]

Translucency is prevalent in everyday scenes. As such, perception of transparent objects is essential for robots to perform manipulation. In this work, we propose LIT, a two-stage method for transparent object pose estimation using light-field sensing and photorealistic rendering.

[ University of Michigan ] via [ Fetch Robotics ]

This paper reports the technological progress and performance of team “CERBERUS” after participating in the Tunnel and Urban Circuits of the DARPA Subterranean Challenge.

And here's a video report on the SubT Urban Beta Course performance:

[ CERBERUS ]

Congrats to Energy Robotics on 2 million euros in seed funding!

[ Energy Robotics ]

Thanks Stefan!

In just 2 minutes, watch HEBI Robotics spend 23 minutes assembling a robot arm.

HEBI Robotics is hosting a webinar called 'Redefining the Robotic Arm' next week, which you can check out at the link below.

[ HEBI Robotics ]

Thanks Hardik!

Achieving versatile robot locomotion requires motor skills which can adapt to previously unseen situations. We propose a Multi-Expert Learning Architecture (MELA) that learns to generate adaptive skills from a group of representative expert skills. During training, MELA is first initialised by a distinct set of pre-trained experts, each in a separate deep neural network (DNN). Then by learning the combination of these DNNs using a Gating Neural Network (GNN), MELA can acquire more specialised experts and transitional skills across various locomotion modes.
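
For a rough picture of the gating idea, here is a minimal mixture-of-experts sketch. It is our own illustration rather than the MELA code: a random matrix stands in for the trained gating network, and toy lambdas stand in for the expert DNNs.

    # Hypothetical sketch of gated expert blending: a gating function maps the robot
    # state to softmax weights, and the motor command is the weighted blend of the
    # individual expert policies' outputs.

    import numpy as np

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def blended_action(state, experts, gating_weights):
        """`experts` are callables state -> action; `gating_weights` is a
        (num_experts, state_dim) matrix standing in for the gating network."""
        w = softmax(gating_weights @ state)            # one blending coefficient per expert
        actions = np.stack([policy(state) for policy in experts])
        return w @ actions                             # weighted sum of expert actions

    # Two toy "experts" (say, trot and recovery) acting on a 3-D state.
    rng = np.random.default_rng(0)
    experts = [lambda s: np.tanh(s), lambda s: -0.5 * s]
    print(blended_action(rng.normal(size=3), experts, rng.normal(size=(2, 3))))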

[ Paper ]

Since the dawn of history, advances in science and technology have pursued “power” and “accuracy.” Initially, “hardness” in machines and materials was sought for reliable operations. In our area of Science of Soft Robots, we have combined emerging academic fields aimed at “softness” to increase the exposure and collaboration of researchers in different fields.

[ Science of Soft Robots ]

A team from the Laboratory of Robotics and IoT for Smart Precision Agriculture and Forestry at INESC TEC - Technology and Science is creating a ROS stack solution using the Husky UGV for precision field crop agriculture.

[ Clearpath Robotics ]

Associate Professor Christopher J. Hasson in the Department of Physical Therapy is the director of the Neuromotor Systems Laboratory at Northeastern University. There he is working with a robotic arm to provide enhanced assistance to physical therapy patients while maintaining the intimate therapist-patient relationship.

[ Northeastern ]

Mobile Robotic telePresence (MRP) systems aim to support enhanced collaboration between remote and local members of a given setting. But MRP systems also put the remote user in positions where they frequently rely on the help of local partners. Getting or ‘recruiting’ such help can be done with various verbal and embodied actions ranging in explicitness. In this paper, we look at how such recruitment occurs in video data drawn from an experiment where pairs of participants (one local, one remote) performed a timed searching task.

[ Microsoft Research ]

A presentation [from Team COSTAR] for the American Geophysical Union annual fall meeting on the application of robotic multi-sensor 3D mapping for scientific exploration of caves. Lidar-based 3D maps are combined with visual/thermal/spectral/gas sensors to provide rich 3D context for scientific measurements.

[ COSTAR ]

The United States Federal Aviation Administration has been desperately trying to keep up with the proliferation of recreational and commercial drones. They haven’t been as successful as all of us might have wanted, but some progress is certainly being made, most recently with some new rules about flying drones at night and over people and vehicles, as well as the requirement for a remote-identification system for all drones.

Over the next few years, the FAA's drone rules are going to affect you even if you just fly a drone for fun in your backyard, so we'll take a detailed look at what changes are coming and how you can prepare.

The first thing to acknowledge is that the FAA, as an agency, is turning out to be a very poor communicator where drones are concerned. I’ve written about this before, but understanding exactly what you can and cannot do with a drone, and where you’re allowed to do it, is super frustrating and way more complicated than it needs to be. So if some of this seems confusing, it’s not you.

What kind of drone pilot am I?

Part of the problem is that the FAA has separated drone pilots into two categories that have rules that are sometimes different in ways that don’t always make sense. There are recreational pilots, who fly drones “strictly for recreational purposes,” and then there are commercial pilots, who fly drones to make money, for non-profit work, for journalism, for education, or really for anything that has a goal besides fun.

Recreational pilots are allowed to fly under safety guidelines from a “community-based organization” like the Academy of Model Aeronautics (AMA), while commercial pilots have to fly under the rules found in Part 107 of the Federal Aviation Regulations. So, while the Part 107 rules have, for example, prohibited flying at night without a waiver from the FAA, the FAA also says that recreational flyers can fly at night as long as the drone “has lighting that allows you to know its location and orientation at all times.” Go figure.

What are the current rules for recreational and commercial pilots?

You can find the current rules for both on the FAA's website.

What are the new drone rules that the FAA announced?

Late last year, the FAA released what it called in a press release “Two Much-Anticipated Drone Rules to Advance Safety and Innovation in the United States.”

The first update is for Part 107 pilots, and covers operations over people, over vehicles, and at night. Until now, Part 107 pilots have needed to apply to the FAA for waivers to do any of these things; under the update, you no longer need a waiver, as long as you follow the new rules.

The second new rule is about how drones identify themselves in flight, called Remote ID, and applies to everybody flying a drone, even if it’s just for fun. If you’re a recreational pilot, you can skip down to the part about Remote ID, which will affect you.

Can I fly at night?

Yup. The new rule allows for night flying with a properly lit up drone (“anti-collision lights that can be seen for 3 statute miles and have a flash rate sufficient to avoid a collision”). The rule also helpfully notes that these lights must be turned on.

This applies to Part 107 pilots only, and as we noted above, whether recreational fliers can fly at night isn’t as clear as it should be. And Part 107 pilots who want to take advantage of this new rule will need to take an updated knowledge test, which the FAA will provide more information on within the next few months.

Can I fly over moving vehicles?

Generally, yes, if you're a Part 107 pilot. You can fly over moving vehicles as long as you're just transiting over them, rather than maintaining sustained flight over them. If you want to maintain sustained flight, you can do that too, although in that case everyone in the vehicle needs to know that there's a drone around, and the vehicle has to be within an access-controlled area.

Vehicles, as far as the FAA is concerned, include anything where a person is moving more quickly than they'd be able to on foot, because this rule exists to try to mitigate the likelihood of a wayward drone hitting someone at a higher speed. Vehicles therefore include skateboards, rollerblades, bicycles, roller coasters, boats, and so on.

Is my drone allowed to fly over people?

Part 107 pilots are now allowed to fly over people in some circumstances, under restrictions that change depending on how big and scary your drone is. The FAA has separated drones into four risk categories, based on how much damage they could do to a human they come into contact with; a rough code sketch of how these thresholds sort a drone into a category follows the list below.

  • Category 1: A Category 1 drone represents “a low risk of injury” to humans and therefore weighs 0.55 pounds (0.25 kg) or less including everything attached to the drone from takeoff to landing. Furthermore, a Category 1 drone cannot have “any exposed rotating parts that would lacerate human skin,” and whatever kind of protection that implies must not fall outside the weight limit. If your drone meets both of these criteria, there’s no need to do anything else about it.
  • Category 2: A Category 2 drone is the next step up, and since we’re now out of the “low risk of injury” category, the FAA will require a declaration of compliance from “anyone who designs, produces, or modifies a small unmanned aircraft” in this category. For Category 2, this declaration has to show that the drone “must not be capable of causing an injury to a human being that is more severe than an injury caused by a transfer of 11 ft-lbs of kinetic energy from a rigid object,” and the declaration must be approved by the FAA. Category 2 drones must also incorporate the same kind of laceration protection as Category 1, although one of the more interesting comments on the ruling came from Skydio, which asked whether a software-based safety system that could protect against skin laceration would be acceptable. The FAA said that’s fine, as long as it can be demonstrated to be effective through some as-yet unspecified process. 
  • Category 3: A Category 3 drone is just the same as Category 2, except bigger and/or faster, and it “must not be capable of causing an injury to a human being that is more severe than an injury caused by a transfer of 25 ft-lbs of kinetic energy from a rigid object.” Laceration protection is also required.
  • Category 4: If you think your drone is safe to operate over people but it doesn’t fit into one of the categories above, you can apply to the FAA for an airworthiness certificate, which (if approved) will let you fly over people with your drone (sometimes) without applying for a waiver.
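
Here is the rough code sketch mentioned above. It is our own illustration, not FAA guidance, and it only captures the weight and kinetic-energy thresholds plus the no-lacerating-parts condition in simplified form.

    # Illustrative only, not FAA guidance: a simplified sort of a drone into the
    # operations-over-people categories described above, from its takeoff weight
    # (everything attached, in pounds) and the kinetic-energy injury limit (ft-lbs)
    # from its FAA-accepted declaration of compliance.

    from typing import Optional

    def over_people_category(weight_lb: float,
                             declared_ft_lbs: Optional[float],
                             lacerating_rotating_parts: bool) -> str:
        if lacerating_rotating_parts:
            return "Category 4 (apply for an airworthiness certificate)"
        if weight_lb <= 0.55:
            return "Category 1"      # low risk by weight alone, no declaration needed
        if declared_ft_lbs is not None and declared_ft_lbs <= 11:
            return "Category 2"      # requires an FAA-accepted declaration of compliance
        if declared_ft_lbs is not None and declared_ft_lbs <= 25:
            return "Category 3"
        return "Category 4 (apply for an airworthiness certificate)"

    # A 249 g (about 0.549 lb) drone with shrouded props lands in Category 1.
    print(over_people_category(0.549, None, lacerating_rotating_parts=False))
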
Great, so can I fly over people whenever I want?

To fly over people, you must be flying under Part 107, your drone must be in one of the four categories above, and you’ll need to follow these specific rules on outdoor flight over people. Note that the FAA defines “sustained flight over an open-air assembly” as “hovering above the heads of persons gathered in an open-air assembly, flying back and forth over an open-air assembly, or circling above the assembly in such a way that the small unmanned aircraft remains above some part of the assembly.”

  • Category 1: Sustained flight over groups of people outdoors is allowed as long as your drone is Remote ID compliant. We’ll get to the Remote ID stuff in a bit.
  • Category 2: Sustained flight over groups of people outdoors is allowed as long as your drone is Remote ID compliant. The big difference between Category 1 and Category 2 is that with a Category 1 drone, you can make your own prop guards or whatever and weigh it, and as long as it’s under 0.55 pound, you’re good to go. Category 2 drones have to go through a certification process with the FAA. If you buy a drone, the manufacturer will likely have done this already. If you build a drone, you’ll have to do it yourself.
  • Category 3: No sustained flight over groups of people. You also can’t fly a Category 3 drone over even a single person, unless it’s either a restricted area where anyone inside has been notified that a drone may be flying over them, or the people the drone is flying over are somehow protected (like under a shelter of some kind). Remote ID is also required.
  • Category 4: There’s a process, but you’ll need to talk with the FAA.
What if I want to do stuff that isn’t covered under these new rules?

Part 107 pilots can still apply to the FAA for waivers, just like before.

I fly recreationally and don’t have my Part 107. Can I fly at night, over moving vehicles, or over people?

Definitely not over people or vehicles. Maybe at night, but honestly, best not to do that either?

What’s Remote ID?

The FAA describes Remote ID as being like a digital license plate for your drone. If you’re following the rules, you’re currently required to register your drone (unless it’s very small) and then make that registration number visible on the drone somewhere.

This isn’t particularly useful if you’re someone on the ground trying to identify a drone flying overhead, so the FAA is instead requiring that all drones broadcast a unique identifying number whenever they’re airborne.

Does my drone have Remote ID?

Most likely not. This is a brand new requirement.

What drones will be required to broadcast Remote ID?

Every drone that weighs more than 0.55 pounds (0.25 kg). Drones weighing less than that may be required to have Remote ID if they’re being flown under Part 107.

If you have a drone that weighs under 0.55 pounds and fly recreationally, then lucky you, you don’t have to worry about Remote ID.

What kind of broadcast signal is Remote ID?

The FAA only says that drones “must be designed to maximize the range at which the broadcast can be received,” but it’ll be different for each drone. The target seems to be 400 feet, which is what the FAA figures maximum line of sight distance to be. There was some discussion about making network identification an option (like, if your drone can talk to the Internet somehow, it doesn’t have to broadcast directly), but the FAA thought that would be too complicated. 

What information will Remote ID be sending out?
  • An identifying number for your drone
  • The location of your drone (latitude, longitude, and altitude)
  • How fast your drone is moving
  • Your location (the location of the drone’s controller)
  • A status identifier that says whether your drone is experiencing an emergency
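
In other words, the broadcast amounts to a small structured message. Here is a minimal sketch of those fields as a data structure; the field names, types, and serialization are our own illustration, since the rule specifies what must be broadcast, not the message format.

    # Hypothetical sketch of a Remote ID broadcast payload, per the list above.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class RemoteIDMessage:
        drone_id: str            # the drone's unique identifying number
        drone_lat: float         # drone position: latitude, longitude, altitude
        drone_lon: float
        drone_alt_m: float
        ground_speed_mps: float  # how fast the drone is moving
        operator_lat: float      # location of the drone's controller
        operator_lon: float
        emergency: bool          # status flag: is the drone experiencing an emergency?

    # Example payload as it might look just before broadcast (made-up values).
    msg = RemoteIDMessage("FA-1234567", 47.653, -122.308, 85.0, 6.2, 47.652, -122.309, False)
    print(json.dumps(asdict(msg)))
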
Who can access the Remote ID broadcast?

According to the FAA: “Most personal wireless devices within range of the broadcast.” In other words, anyone with interest and a mobile phone will be able to locate both nearby drones and the GPS coordinates of whoever is piloting them.

Only the FAA will be able to correlate the drone’s ID number with your personal information, although they’ll share with law enforcement if requested.

Can I turn Remote ID off?

Part of the Remote ID specification is that the user should not have the ability to disable it, and if you somehow manage to anyway, the drone should then refuse to take off.

When do I actually have to start worrying about Remote ID?

September 2023. You’ve got some time!

What are drone manufacturers going to do?

Manufacturers have 18 months to start integrating Remote ID into their products.

What happens to my old drone when the Remote ID requirement kicks in?

The good news is that, at least in some cases, it sounds like even the current generation of drones will be able to meet Remote ID requirements. As one example, we spoke with Brendan Groves, head of policy and regulatory affairs at Skydio, about what Skydio's plans are for Remote ID going forward, and he made us feel a little better, saying they are tracking this issue closely and that they are “committed to making Skydio 2s in use now compliant with the new rule before the deadline.”

Of course, different drone makers will have different answers, so if you own a drone, you should ask the manufacturer for more information.

What if my drone isn’t going to get updated for Remote ID?

Remote ID doesn't have to be directly integrated into your drone, and the FAA expects that add-on Remote ID broadcast modules will be available.

Can I make my own module?

Sure, but the FAA has to approve it.

Remote ID sucks and I won’t do it! What are my options?

The FAA will partner with educational and research institutions and community-based organizations to establish defined areas in which drones can fly in line of sight only without Remote ID enabled. 

Is there an upside to any of this?

Besides the obvious impact on safety and security, Remote ID will be particularly important for drones that have a significant amount of autonomy. According to the FAA, Remote ID is critical to enabling advanced autonomous operations—like routine flights beyond visual-line-of-sight—by providing airspace awareness.

Where can I find more details?

Executive summaries are here and here, and the full rules are available through the FAA’s website here.
