IEEE Spectrum Automation


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks, oftentimes for many patients and with little time to spare. Personal care robots that brush your hair could provide substantial help and relief.

While the hardware setup looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team modeled entangled soft fiber bundles as sets of entwined double helices (think classic DNA strands). This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems.

[ MIT CSAIL ]

Sometimes CIA​ needs to get creative when collecting intelligence. Charlie, for instance, is a robotic catfish that collects water samples. While never used operationally, the unmanned underwater vehicle (UUV) fish was created to study aquatic robot technology.

[ CIA ]

It's really just a giant drone, even if it happens to be powered by explosions.

[ SpaceX ]

Somatic's robot will clean your bathrooms for 40 hours a week and will cost you just $1,000 a month. It looks like it works quite well, as long as your bathrooms are the normal level of gross as opposed to, you know, super gross.

[ Somatic ]

NASA’s Ingenuity Mars Helicopter successfully completed a fourth, more challenging flight on the Red Planet on April 30, 2021. Flight Test No. 4 aimed for a longer flight time, longer distance, and more image capturing to begin to demonstrate its ability to serve as a scout on Mars. Ingenuity climbed to an altitude of 16 feet (5 meters) before flying south and back for an 872-foot (266-meter) round trip. In total, Ingenuity was in the air for 117 seconds, another set of records for the helicopter.

[ Ingenuity ]

The Perseverance rover is all new and shiny, but let's not forget about Curiosity, still hard at work over in Gale crater.

NASA’s Curiosity Mars rover took this 360-degree panorama while atop “Mont Mercou,” a rock formation that offered a view into Gale Crater below. The panorama is stitched together from 132 individual images taken on April 15, 2021, the 3,090th Martian day, or sol, of the mission. The panorama has been white-balanced so that the colors of the rock materials resemble how they would appear under daytime lighting conditions on Earth. Images of the sky and rover hardware were not included in this terrain mosaic.

[ MSL ]

Happy Star Wars Day from Quanser!

[ Quanser ]

Thanks Arman!

Lingkang Zhang's 12 DOF Raspberry Pi-powered quadruped robot, Yuki Mini, is complete!

Adorable, right? It runs ROS and the hardware is open source as well.

[ Yuki Mini ]

Thanks Lingkang!

Honda and AutoX have been operating a fully autonomous taxi service, with no safety driver, in China for a couple of months now.

If you thought SF was hard, well, I feel like this is even harder.

[ AutoX ]

This is the kind of drone delivery that I can get behind.

[ WeRobotics ]

The Horizon 2020 EU-funded PRO-ACT project aims to develop and demonstrate cooperation and manipulation capabilities among three robots for assembling an in-situ resource utilisation (ISRU) plant. PRO-ACT will show how robot working agents, or RWAs, can work together collaboratively to achieve a common goal.

[ Pro-Act ]

Thanks Fan!

This brief quadruped simulation video, from Jerry Pratt at IHMC, dates back to 2003 (!).

[ IHMC ]

Extend Robotics' vision is to extend human capability beyond physical presence​. We build affordable robotic arms capable of remote operation from anywhere in the world, using cloud-based teleoperation software​.

[ Extend Robotics ]

Meet Maria Vittoria Minniti, robotics engineer and PhD student at NCCR Digital Fabrication and ETH Zurich. Maria Vittoria makes it possible for simple robots to do complicated things.

[ NCCR Women ]

Thanks Fan!

iCub has been around for 10 years now, and it's almost like it hasn't gotten any taller! This IFRR Robotics Global Colloquium celebrates the past decade of iCub.

[ iCub ]

This CMU RI Seminar is by Cynthia Sung from UPenn, on Dynamical Robots via Origami-Inspired Design.

Origami-inspired engineering produces structures with high strength-to-weight ratios and simultaneously lower manufacturing complexity. This reliable, customizable, cheap fabrication and component assembly technology is ideal for robotics applications in remote, rapid deployment scenarios that require platforms to be quickly produced, reconfigured, and deployed. Unfortunately, most examples of folded robots are appropriate only for small-scale, low-load applications. In this talk, I will discuss efforts in my group to expand origami-inspired engineering to robots with the ability to withstand and exert large loads and to execute dynamic behaviors.

[ CMU RI ]

How can feminist methodologies and approaches be applied and be transformative when developing AI and ADM systems? How can AI innovation and social systems innovation be catalyzed concomitantly to create a positive movement for social change larger than the sum of the data science or social science parts? How can we produce actionable research that will lead to the profound changes needed—from scratch—in the processes to produce AI? In this seminar, 2020 CCSRE Race and Technology Practitioner Fellow Renata Avila discusses ideas and experiences from different disciplines that could help draft a blueprint for a better modeled digital future.

[ CMU RI ]

From what I’ve seen of humanoid robotics, there’s a fairly substantial divide between what folks in the research space traditionally call robotics, and something like animatronics, which tends to be much more character-driven.

There’s plenty of technology embodied in animatronic robotics, but usually under some fairly significant constraints—like, they’re not autonomously interactive, or they’re stapled to the floor and tethered for power, things like that. And there are reasons for doing it this way: namely, dynamic untethered humanoid robots are already super hard, so why would anyone stress themselves out even more by trying to make them into an interactive character at the same time? That would be crazy!

At Walt Disney Imagineering, which is apparently full of crazy people, they’ve spent the last three years working on Project Kiwi: a dynamic untethered humanoid robot that’s an interactive character at the same time. We asked them (among other things) just how they managed to stuff all of the stuff they needed to stuff into that costume, and how they expect to enable children (of all ages) to interact with the robot safely.

Project Kiwi is an untethered bipedal humanoid robot that Disney Imagineering designed not just to walk without falling over, but to walk without falling over with some character. At about 0.75 meters tall, Kiwi is a bit bigger than a NAO and a bit smaller than an iCub, and it’s just about completely self-contained, with the tether you see in the video being used for control rather than for power. Kiwi can manage 45 minutes of operating time, which is pretty impressive considering its size and the fact that it incorporates a staggering 50 degrees of freedom, a requirement for lifelike motion.

This version of the robot is just a prototype, and it sounds like there’s plenty to do in terms of hardware optimization to improve efficiency and add sensing and interactivity. The most surprising thing to me is that this is not a stage robot: Disney does plan to have some future version of Kiwi wandering around and interacting directly with park guests, and I’m sure you can imagine how that’s likely to go. Interaction at this level, where there’s a substantial risk of small children tackling your robot with a vicious high-speed hug, could be a uniquely Disney problem for a robot with this level of sophistication. And it’s one of the reasons they needed to build their own robot—when Universal Studios decided to try out a Steampunk Spot, for example, they had to put a fence plus a row of potted plants between it and any potential hugs, because Spot is very much not a hug-safe robot.  

So how the heck do you design a humanoid robot from scratch with personality and safe human interaction in mind? We asked Scott LaValley, Project Kiwi lead, who came to Disney Imagineering by way of Boston Dynamics and some of our favorite robots ever (including RHex, PETMAN, and Atlas), to explain how they pulled it off.

IEEE Spectrum: What are some of the constraints of Disney’s use case that meant you had to develop your own platform from the ground up?

Scott LaValley: First and foremost, we had to consider the packaging constraints. Our robot was always intended to serve as a bipedal character platform capable of taking on the role of a variety of our small-size characters. While we can sometimes take artistic liberties, for the most part, the electromechanical design had to fit within a minimal character profile to allow the robot to be fully themed with shells, skin, and costuming. When determining the scope of the project, a high-performance biped that matched our size constraints just did not exist. 

Equally important was the ability to move with style and personality, or the "emotion of motion." To really capture a specific character performance, a robotic platform must be capable of motions that range from fast and expressive to extremely slow and nuanced. In our case, this required developing custom high-speed actuators with the necessary torque density to be packaged into the mechanical structure. Each actuator is also equipped with a mechanical clutch and inline torque sensor to support low-stiffness control for compliant interactions and reduced vibration. 

Designing custom hardware also allowed us to include additional joints that are uncommon in humanoid robots. For example, the clavicle and shoulder alone include five degrees of freedom to support a shrug function and an extended configuration space for more natural gestures. We were also able to integrate onboard computing to support interactive behaviors.

What compromises were required to make sure that your robot was not only functional, but also capable of becoming an expressive character?

As mentioned previously, we face serious challenges in terms of packaging and component selection due to the small size and character profile. This has led to a few compromises on the design side. For example, we currently rely on rigid-flex circuit boards to fit our electronics onto the available surface area of our parts without additional cables or connectors. Unfortunately, these boards are harder to design and manufacture than standard rigid boards, increasing complexity, cost, and build time. We might also consider increasing the size of the hip and knee actuators if they no longer needed to fit within a themed costume.

Designing a reliable walking robot is in itself a significant challenge, but adding style and personality to each motion is a new layer of complexity. From a software perspective, we spend a significant amount of time developing motion planning and animation tools that allow animators to author stylized gaits, gestures, and expressions for physical characters. Unfortunately, unlike on-screen characters, we do not have the option to bend the laws of physics and must validate each motion through simulation. As a result, we are currently limited to stylized walking and dancing on mostly flat ground, but we hope to be skipping up stairs in the future!

Of course, there is always more that can be done to better match the performance you would expect from a character. We are excited about some things we have in the pipeline, including a next generation lower body and an improved locomotion planner.
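As a concrete illustration of that validation step, a cheap first pass is often just differentiating the animator-authored joint trajectory and checking it against actuator limits before running full physics. The sketch below shows that idea in Python; the limits, sampling rate, and trajectory format are illustrative assumptions, not Disney's actual pipeline.

```python
import numpy as np

def validate_motion(t, q, vel_limit=6.0, acc_limit=40.0):
    """t: (N,) timestamps in seconds; q: (N, DOF) joint angles in radians.
    Returns a list of human-readable violations; an empty list means the clip passes."""
    qd = np.gradient(q, t, axis=0)    # joint velocities
    qdd = np.gradient(qd, t, axis=0)  # joint accelerations
    violations = []
    for joint in range(q.shape[1]):
        if np.abs(qd[:, joint]).max() > vel_limit:
            violations.append(f"joint {joint}: velocity limit exceeded")
        if np.abs(qdd[:, joint]).max() > acc_limit:
            violations.append(f"joint {joint}: acceleration limit exceeded")
    return violations

# Example: a 2-second, 2-DOF gesture sampled at 100 Hz passes both checks.
t = np.linspace(0.0, 2.0, 200)
q = np.stack([0.5 * np.sin(2 * np.pi * t), 0.2 * np.cos(np.pi * t)], axis=1)
print(validate_motion(t, q))  # [] -> worth trying in full simulation
```

Anything that passes a cheap check like this would still go through full dynamic simulation before reaching the robot.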

How are you going to make this robot safe for guests to be around?

First let us say, we take safety extremely seriously, and it is a top priority for any Disney experience. Ultimately, we do intend to allow interactions with guests of all ages, but it will take a measured process to get there. Proper safety evaluation is a big part of productizing any Research & Development project, and we plan to conduct playtests with our Imagineers, cast members and guests along the way. Their feedback will help determine exactly what an experience with a robotic character will look like once implemented.

From a design standpoint, we believe that small characters are the safest type of biped for human-robot interaction due to their reduced weight and low center of mass. We are also employing compliant control strategies to ensure that the robot’s actuators are torque-limited and backdrivable. Perception and behavior design may also play a key role, but in the end, we will rely on proper show design to permit a safe level of interaction as the technology evolves.
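For readers curious what "torque-limited and backdrivable" looks like in software, here is a minimal joint-level impedance loop of the general kind LaValley describes: an outer spring-damper law with a hard torque clamp, closed around an inline torque measurement. The gains, limits, and interface are illustrative assumptions, not Disney's controller.

```python
from dataclasses import dataclass

@dataclass
class CompliantJoint:
    k_p: float = 8.0      # low stiffness (N*m/rad): soft spring toward the target pose
    k_d: float = 0.4      # damping (N*m*s/rad)
    tau_max: float = 3.0  # hard torque clamp (N*m) keeps the joint backdrivable
    k_tau: float = 5.0    # inner-loop gain on the inline torque sensor reading

    def motor_command(self, q_des, q, qd_des, qd, tau_measured):
        # Outer impedance law: behave like a gentle spring-damper around the target.
        tau_des = self.k_p * (q_des - q) + self.k_d * (qd_des - qd)
        # Clamp so the joint can never push harder than tau_max.
        tau_des = max(-self.tau_max, min(self.tau_max, tau_des))
        # Inner loop: drive the measured joint torque toward the clamped target.
        return self.k_tau * (tau_des - tau_measured)

# Example: the joint is pushed 0.2 rad off its target and responds gently.
joint = CompliantJoint()
cmd = joint.motor_command(q_des=0.0, q=0.2, qd_des=0.0, qd=0.0, tau_measured=0.0)
print(cmd)  # a modest corrective command rather than a rigid, high-torque snap back
```

The low stiffness and the torque clamp are what keep a joint soft if, say, a guest grabs the robot's arm mid-gesture.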

What do you think other roboticists working on legged systems could learn from Project Kiwi?

We are often inspired by other roboticists working on legged systems ourselves but would be happy to share some lessons learned. Remember that robotics is fundamentally interdisciplinary, and a good team typically consists of a mix of hardware and software engineers in close collaboration. In our experience, however, artists and animators play an equally valuable role in bringing a new vision to life. We often pull in ideas from the character animation and game development world, and while robotic characters are far more constrained than their virtual counterparts, we are solving many of the same problems. Another tip is to leverage motion studies (either through animation, motion capture, and/or simulation tools) early in the design process to generate performance-driven requirements for any new robot.

Now that Project Kiwi has de-stealthed, I hope the Disney Imagineering folks will be able to be a little more open with all of the sweet goo inside of the fuzzy skin of this metaphor that has stopped making sense. Meeting a new humanoid robot is always exciting, and the approach here (with its technical capability combined with an emphasis on character and interaction) is totally unique. And if they need anyone to test Kiwi’s huggability, I volunteer! You know, for science.

The Ingenuity Mars Helicopter has been doing an amazing job flying on Mars. Over the last several weeks it has far surpassed its original goal of proving that flight on Mars was simply possible, and is now showing how such flights are not only practical but also useful.

To that end, NASA has decided that the little helicopter deserves to not freeze to death quite so soon, and the agency has extended its mission for at least another month, giving it the opportunity to scout a new landing site to keep up with Perseverance as the rover starts its own science mission.

Some quick context: the Mars Helicopter mission was originally scheduled to last 30 days, and we’re currently a few weeks into that. The helicopter has flown successfully four times; the most recent flight was on April 30, and was a 266 meter round-trip at 5 meters altitude that took 117 seconds. Everything has worked nearly flawlessly, with (as far as we know) the only hiccup being a minor software bug that has a small chance of preventing the helicopter from entering flight mode. This bug has kicked in once, but JPL just tried doing the flight again, and then everything was fine. 

In a press conference last week, NASA characterized Ingenuity’s technical performance as “exceeding all expectations,” and the helicopter met all of its technical goals (and then some) earlier than anyone expected. Originally, that wouldn’t have made a difference, and Perseverance would have driven off and left Ingenuity behind no matter how well it was performing. But some things have changed, allowing Ingenuity to transition from a tech demo into an extended operational demo, as Jennifer Trosper, Perseverance deputy project manager, explained:

“We had not originally planned to do this operational demo with the helicopter, but two things have happened that have enabled us to do it. The first thing is that originally, we thought that we’d be driving away from the location that we landed at, but the [Perseverance] science team is actually really interested in getting initial samples from this region that we’re in right now. Another thing that happened is that the helicopter is operating in a fantastic way. The communications link is overperforming, and even if we move farther away, we believe that the rover and the helicopter will still have strong communications, and we’ll be able to continue the operational demo.”

The communications link was one of the original reasons why Ingenuity’s mission was going to be capped at 30 days. It’s a little bit counter-intuitive, but it turns out that the helicopter simply cannot keep up with the rover, which Ingenuity relies on for communication with Earth. Ingenuity is obviously faster in flight, but once you factor in recharge time, if the rover is driving a substantial distance, the helicopter would not be able to stay within communications range.

And there’s another issue with the communications link: as a tech demo, Ingenuity’s communication system wasn’t tested to make sure that it can’t be disrupted by electronic interference generated by other bits and pieces of the Perseverance rover. Consequently, Ingenuity’s 30-day mission was planned such that when the helicopter was in the air, Perseverance was perfectly stationary. This is why we don’t have video where Perseverance pans its cameras to follow the helicopter—using those actuators might have disrupted the communications link.

Going forward, Perseverance will be the priority, not Ingenuity. The helicopter will have to do its best to stay in contact with the rover as it starts focusing on its own science mission. Ingenuity will have to stay in range (within a kilometer or so) and communicate when it can, even if the rover is busy doing other stuff. This extended mission will initially last 30 more days, and if it turns out that Ingenuity can’t do what it needs to do without needing more from Perseverance, well, that’ll be the end of the Mars helicopter mission. Even best case, it sounds like we won’t be getting any more pictures of Ingenuity in flight, since planning that kind of stuff took up a lot of the rover’s time. 

With all that in mind, here’s what NASA says we should be expecting:

“With short drives expected for Perseverance in the near term, Ingenuity may execute flights that land near the rover’s current location or its next anticipated parking spot. The helicopter can use these opportunities to perform aerial observations of rover science targets, potential rover routes, and inaccessible features while also capturing stereo images for digital elevation maps. The lessons learned from these efforts will provide significant benefit to future mission planners. These scouting flights are a bonus and not a requirement for Perseverance to complete its science mission.

The cadence of flights during Ingenuity’s operations demonstration phase will slow from once every few days to about once every two or three weeks, and the forays will be scheduled to avoid interfering with Perseverance’s science operations. The team will assess flight operations after 30 sols and will complete flight operations no later than the end of August.”

Specifically, Ingenuity spent its recent Flight 4 scouting for a new airfield to land at, and Flight 5 will be the first flight of this new operations phase: the helicopter will attempt to land at that new airfield, a spot it has never touched down on before, about 60 meters south of its current position on Mars. NASA expects that there might be one or two flights after this, but nobody’s quite sure how it’s going to go, and NASA wasn’t willing to speculate about what’ll happen longer term.

It’s important to remember that all of this is happening in the context of Ingenuity being a 30-day tech demo. The hardware on the helicopter was designed with that length of time in mind, and not a multi-month mission. NASA said during their press conference that the landing gear is probably good for at least 100 landings, and the solar panel and sun angle will be able to meet energy requirements for at least a few months. The expectation is that with enough day/night thermal cycles, a solder joint will snap, rendering Ingenuity inoperable in some way. Nobody knows when that’ll happen, but again, this is a piece of hardware designed to function for 30 days, and despite JPL’s legacy of ridiculously long-lived robotic explorers, we should adjust our expectations accordingly. MiMi Aung, Mars Helicopter Project Manager, has it exactly right when she says that “we will be celebrating each day that Ingenuity survives and operates beyond that original window.” We’re just glad that there will be more to celebrate going forward.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Ascend is a smart knee orthosis designed to improve mobility and relieve knee pain. The customized, lightweight, and comfortable design reduces burden on the knee and intuitively adjusts support as needed. Ascend provides a safe and non-surgical solution for patients with osteoarthritis, knee instability, and/or weak quadriceps.

Each one of these is custom-built, and you can pre-order one now.

[ Roam Robotics ]

Ingenuity’s third flight achieved a longer flight time and more sideways movement than previously attempted. During the 80-second flight, the helicopter climbed to 16 feet (5 meters) and flew 164 feet (50 meters) downrange and back, for a total distance of 328 feet (100 meters). The third flight test took place at “Wright Brothers Field” in Jezero Crater, Mars, on April 25, 2021.

[ NASA ]

This right here, the future of remote work.

The robot will run you about $3,000 USD.

[ VStone ] via [ Robotstart ]

Texas-based aerospace robotics company, Wilder Systems, enhanced their existing automation capabilities to aid in the fight against COVID-19. Their recent development of a robotic testing system is both increasing capacity for COVID-19 testing and delivering faster results to individuals. The system conducts saliva-based PCR tests, which is considered the gold standard for COVID testing. Based on a protocol developed by Yale and authorized by the FDA, the system does not need additional approvals. This flexible, modular system can run up to 2,000 test samples per day, and can be deployed anywhere where standard electric power is available.

[ ARM Institute ]

Tests show that people do not like being nearly hit by drones.

But seriously, this research has resulted in some useful potential lessons for deploying drones in areas where they have a chance of interacting with humans.

[ Paper ]

The Ingenuity helicopter made history on April 19, 2021, with the first powered, controlled flight of an aircraft on another planet. How do engineers talk to a helicopter all the way out on Mars? We’ll hear about it from Nacer Chahat of NASA’s Jet Propulsion Laboratory, who worked on the helicopter’s antenna and telecommunication system.

[ NASA ]

A team of scientists from the Max Planck Institute for Intelligent Systems has developed a system with which they can fabricate miniature robots building block by building block, which function exactly as required.

[ Max Planck Institute ]

Well this was inevitable, wasn't it?

The pilot regained control and the drone was fine, though.

[ PetaPixel ]

NASA’s Ingenuity Mars Helicopter takes off and lands in this video captured on April 25, 2021, by Mastcam-Z, an imager aboard NASA’s Perseverance Mars rover. As expected, the helicopter flew out of its field of vision while completing a flight plan that took it 164 feet (50 meters) downrange of the landing spot. Keep watching, the helicopter will return to stick the landing. Top speed for today's flight was about 2 meters per second, or about 4.5 miles-per-hour.

[ NASA ]

U.S. Naval Research Laboratory engineers recently demonstrated Hybrid Tiger, an electric unmanned aerial vehicle (UAV) with multi-day endurance flight capability, at Aberdeen Proving Grounds, Maryland.

[ NRL ]

This week's CMU RI Seminar is by Avik De from Ghost Robotics, on “Design and control of insect-scale bees and dog-scale quadrupeds.”

Did you watch the Q&A? If not, you should watch the Q&A.

[ CMU ]

Autonomous quadrotors will soon play a major role in search-and-rescue, delivery, and inspection missions, where a fast response is crucial. However, their speed and maneuverability are still far from those of birds and human pilots. What does it take to make drones navigate as well as, or even better than, human pilots?

[ GRASP Lab ]

With the current pandemic accelerating the revolution of AI in healthcare, where is the industry heading in the next 5-10 years? What are the key challenges and most exciting opportunities? These questions will be answered by HAI’s Co-Director, Fei-Fei Li and the Founder of DeepLearning.AI, Andrew Ng in this fireside chat virtual event.

[ Stanford HAI ]

Autonomous robots have the potential to serve as versatile caregivers that improve quality of life for millions of people with disabilities worldwide. Yet, physical robotic assistance presents several challenges, including risks associated with physical human-robot interaction, difficulty sensing the human body, and a lack of tools for benchmarking and training physically assistive robots. In this talk, I will present techniques towards addressing each of these core challenges in robotic caregiving.

[ GRASP Lab ]

What does it take to empower persons with disabilities, and why is educating ourselves on this topic the first step towards better inclusion? Why is developing assistive technologies for people with disabilities important in order to contribute to their integration in society? How do we implement the policies and actions required to enable everyone to live their lives fully? ETH Zurich and the Global Shapers Zurich Hub invited to an online dialogue on the topic “For a World without Barriers-Removing Obstacles in Daily Life for People with Disabilities.”

[ Cybathlon ]

Drone autonomy is getting more and more impressive, but we’re starting to get to the point where it’s getting significantly more difficult to improve on existing capabilities. Companies like Skydio are selling (for cheap!) commercial drones that have no problem dynamically path planning around obstacles at high speeds while tracking you, which is pretty amazing, and they can also autonomously create 3D maps of structures. In both of these cases, there’s a human indirectly in the loop, either saying “follow me” or “map this specific thing.” In other words, the level of autonomous flight is very high, but there’s still some reliance on a human for high-level planning. Which, for what Skydio is doing, is totally fine and the right way to do it.

Exyn, a drone company with roots in the GRASP Lab at the University of Pennsylvania, has been developing drones for inspections of large unstructured spaces like mines. This is an incredibly challenging environment, being GPS-denied, dark, dusty, and dangerous, to name just a few of the challenges. While Exyn’s lidar-equipped drones have been autonomous for a while now, they’re now able to operate without any high-level planning from a human at all. At this level of autonomy, which Exyn calls Level 4A, the operator simply defines a volume for the drone to map, and then from takeoff to landing, the drone will methodically explore the entire space and generate a high resolution map all by itself, even if it goes far beyond communications range to do so.

Let’s be specific about what “Level 4A” autonomy means, because until now, there haven’t really been established autonomy levels for drones. And the reason that there are autonomy levels for drones all of a sudden is because Exyn just went ahead and invented some. To be fair, Exyn took inspiration from the SAE autonomy levels, so there is certainly some precedent here, but it’s still worth keeping in mind that this whole system is for the moment just something that Exyn came up with by themselves and applied to their own system. They did put a bunch of thought into it, at least, and you can read a whitepaper on the whole thing here.

Graphic: Exyn

A couple things about exactly what Exyn is doing: Their drone, which carries lights, a GoPro, some huge computing power, an even huger battery, and a rotating Velodyne lidar, is able to operate completely independently of a human operator or really any kind of external inputs at all. No GPS, no base station, no communications, no prior understanding of the space, nothing. You tell the drone where you want it to map, and it’ll take off and then decide on its own where and how to explore the space that it’s in, building up an obscenely high resolution lidar map as it goes and continuously expanding that map until it runs out of unexplored areas, at which point it’ll follow the map back home and land itself. “When we’re executing the exploration,” Exyn CTO Jason Derenick tells us, “what we’re doing is finding the boundary between the visible and explored space, and the unknown space. We then compute viewpoint candidates, which are locations along that boundary where we can infer how much potential information our sensors can gain, and then the system selects the one with the most opportunity for seeing as much of the environment as possible.”
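What Derenick describes is essentially frontier-based exploration with next-best-view selection, a standard pattern in the mapping literature. The toy sketch below shows the general idea on a 2D occupancy grid; the grid representation, sensor model, and scoring are simplifying assumptions for illustration, not Exyn’s actual planner.

```python
import numpy as np

# Toy frontier-based exploration step on a 2D occupancy grid.
# Cell values: 0 = known free, 1 = known occupied, -1 = unknown.
# A "frontier" cell is known-free and touches unknown space; the next
# viewpoint is the frontier cell whose neighborhood promises the most
# unknown cells within sensor range -- a crude information-gain score.

def frontier_cells(grid):
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue
            neighbors = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbors == -1).any():
                frontiers.append((r, c))
    return frontiers

def expected_gain(grid, cell, sensor_range=3):
    r, c = cell
    window = grid[max(r - sensor_range, 0):r + sensor_range + 1,
                  max(c - sensor_range, 0):c + sensor_range + 1]
    return int((window == -1).sum())  # how much unknown space the sensor could reveal

def next_viewpoint(grid):
    frontiers = frontier_cells(grid)
    if not frontiers:
        return None  # fully explored: time to fly home and land
    return max(frontiers, key=lambda cell: expected_gain(grid, cell))

# Small example: a mostly unknown map with one explored corner.
grid = -np.ones((10, 10), dtype=int)
grid[:3, :3] = 0
print(next_viewpoint(grid))  # (2, 2): the frontier facing the most unknown space
```

Exyn’s version does this in 3D with a lidar sensor model and real information-gain estimates, but the loop is the same shape: find the boundary, score candidate viewpoints, fly to the best one, and repeat until no frontier remains.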

Flying at up to 2 m/s, Exyn’s drone can explore 16 million cubic meters in a single flight (about nine football stadiums worth of volume), and if the area you want it to explore is larger than that, it can go back out for more rounds after a battery swap.

It’s important to understand, though, what the limitations of this drone’s autonomy are. We’re told that it can sense things like power lines, although probably not something narrow like fishing wire. Which so far hasn’t been a problem, because it’s an example of a “pathological” obstacle—something that is not normal, and would typically only be encountered if it was placed there specifically to screw you up. Dynamic obstacles (like humans or vehicles) moving at walking speed are also fine. Dust can be tricky at times, although the drone can identify excessive amounts of dust in the air, and it’ll wait a bit for the dust to settle before updating its map.

Photo: Exyn

The commercial applications of a totally hands-off system that’s able to autonomously generate detailed lidar maps of unconstrained spaces in near real-time are pretty clear. But what we’re most excited about are the potential search and rescue use cases, especially when Exyn starts to get multiple drones working together collaboratively. You can imagine a situation in which you need to find a lost person in a cave or a mine, and you unload a handful of drones at the entrance, tell them “go explore until you find a human,” and then just let them do their thing.

To make this happen, though, Exyn will need to add an additional level of understanding to their system, which is something they’re working on now, says Derenick. This means both understanding what objects are and reasoning about them, which could mean what an object represents in a more abstract sense as well as how things like dynamic obstacles may move. Autonomous cars have to do this routinely, but for a drone with severe size and power constraints it’s a much bigger challenge, though one that I’m pretty sure Exyn will figure out.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Within the last four days, Ingenuity has flown twice (!) on Mars.

This is an enhanced video showing some of the dust that the helicopter kicked up as it took off:

Data is still incoming for the second flight, but we know that it went well, at least:

[ NASA ]

Can someone who knows a lot about HRI please explain to me why I'm absolutely fascinated by Flatcat?

You can now back Flatcat on Kickstarter for a vaguely distressing $1,200.

[ Flatcat ]

Digit navigates a novel indoor environment without pre-mapping or markers, with dynamic obstacle avoidance. Waypoints are defined relative to the global reference frame determined at power-on. No bins were harmed in filming.

[ Agility Robotics ]

The Yellow Drum Machine popped up on YouTube again this week for some reason. And it's still one of my favorite robots of all time.

[ Robotshop ]

This video shows results of high-speed autonomous flight through trees in a forest. Path planning uses a trajectory library with pre-established correspondences for collision checking. Decisions are made in 0.2-0.3 ms, enabling flight at speeds of 10 m/s. No prior map is used.

[ Near Earth ]
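For context on the trajectory-library approach in the clip above: the idea is to precompute a small set of motion primitives offline so that the online step reduces to cheap collision checks plus a selection rule. The sketch below is a minimal 2D illustration under assumed primitive shapes and obstacle format; it is not Near Earth Autonomy's code.

```python
import numpy as np

# Trajectory-library collision checking, in miniature.
# Offline: sample a small library of forward arcs as (N, 2) waypoint arrays.
# Online: given the latest obstacle points, discard any arc that passes
# within a safety radius of an obstacle and pick the straightest survivor.

def build_library(num_arcs=7, length=10.0, points_per_arc=20):
    library = []
    for curvature in np.linspace(-0.15, 0.15, num_arcs):
        s = np.linspace(0.0, length, points_per_arc)
        x = s
        y = 0.5 * curvature * s**2   # simple parabolic arc approximation
        library.append(np.stack([x, y], axis=1))
    return library

def collision_free(arc, obstacles, radius=1.0):
    if len(obstacles) == 0:
        return True
    # Pairwise distances between arc waypoints and obstacle points.
    d = np.linalg.norm(arc[:, None, :] - obstacles[None, :, :], axis=2)
    return bool(d.min() > radius)

def select_trajectory(library, obstacles):
    survivors = [arc for arc in library if collision_free(arc, obstacles)]
    if not survivors:
        return None  # nothing safe: trigger a stop or climb maneuver
    # Prefer the arc that deviates least from straight ahead.
    return min(survivors, key=lambda arc: abs(arc[-1, 1]))

library = build_library()
obstacles = np.array([[6.0, 0.2]])  # a "tree" slightly right of center
chosen = select_trajectory(library, obstacles)
print(chosen[-1])  # endpoint of the gentlest arc that still clears the obstacle
```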

We present ManipulaTHOR, a framework that facilitates visual manipulation of objects using a robotic arm. Our framework is built upon a physics engine and enables realistic interactions with objects while navigating through scenes and performing tasks.

[ Allen Institute ]

Well this is certainly one of the more unusual multirotor configurations I've ever seen.

[ KAIST ]

Thailand’s Mahidol University and the Institute of Molecular Biosciences chose ABB's YuMi cobot & IRB 1100 robot to work together to fast-track Covid-19 vaccine development. The robots quickly perform repetitive tasks such as unscrewing vials and transporting them to test stations, protecting human workers from injury or harm.

[ ABB ]

Skydio's 3D scan functionality is getting more and more impressive.

[ Skydio ]

With more than 50 service locations across Europe, Stadler Service is focused on increasing train availability, reliability, and safety. ANYbotics is partnering with Stadler Service to explore the potential of mobile robots to increase the efficiency and quality of routine inspection and maintenance of rolling stock.

[ ANYbotics ]

Inspection engineers at Kiwa Inspecta used the Elios 2 to inspect a huge decommissioned oil cavern. The inspection would have required six months and a million Euros if conducted manually but with the Elios 2 it was completed in just a few days at a significantly lower cost.

[ Flyability ]

RightHand Robotics builds a data-driven intelligent piece-picking platform, providing flexible and scalable automation for predictable order fulfillment. RightPick™ 3 is the newest generation of our award-winning autonomous, industrial robot system.

[ RightHand Robotics ]

NASA's Unmanned Aircraft Systems Traffic Management project, or UTM, is working to safely integrate drones into low-altitude airspace. In 2019, the project completed its final phase of flight tests. The research results are being transferred to the Federal Aviation Administration, who will continue development of the UTM system and implement it over time.

[ NASA ]

At the Multi-Robot Planning and Control lab, our research vision is to build multi-robot systems that are capable of acting competently in the real world. We study, develop and combine automated planning, coordination, and control methods to achieve this capability. We find that some of the most interesting basic research questions derive from the problem features and constraints imposed by real-world applications. This video illustrates some of these research questions.

[ Örebro ]

Thanks Fan!

The University of Texas at Austin’s Cockrell School of Engineering and College of Natural Sciences are partnering on life-changing research in artificial intelligence and robotics—ensuring that UT continues to lead the way in launching tomorrow’s technologies.

[ UT Robotics ]

Thanks Fan!

Over the past ten years various robotics and remote technologies have been introduced at Fukushima sites for such tasks as inspection, rubble removal, and sampling showing success and revealing challenges. Successful decommissioning will rely on the development of highly reliable robotic technologies that can be deployed rapidly and efficiently into the sites. The discussion will focus on the decommissioning challenges and robotic technologies that have been used in Fukushima. The panel will conclude with the lessons learned from Fukushima’s past 10-year experience and how robotics must prepare to be ready to respond in the event of future disasters.

[ IFRR ]

Over the last few weeks, we’ve posted several articles about the next generation of warehouse manipulation robots designed to handle the non-stop stream of boxes that provide the foundation for modern ecommerce. But once these robots take boxes out of the back of a trailer or off of a pallet, there are yet more robots ready to autonomously continue the flow through a warehouse or distribution center. One of the beefiest of these autonomous mobile robots is the OTTO 1500, which is called the OTTO 1500 because (you guessed it) it can handle 1500 kg of cargo. Plus another 400kg of cargo, for a total of 1900 kg of cargo. Yeah, I don’t get it either. Anyway, it’s undergone a major update, which is a good excuse for us to ask OTTO CTO Ryan Gariepy some questions about it.

The earlier version, also named OTTO 1500, has over a million hours of real-world operation, which is impressive. Even more impressive is being able to move that much stuff that quickly without being a huge safety hazard in warehouse environments full of unpredictable humans. Although, that might become less of a problem over time, as other robots take over some of the tasks that humans have been doing. OTTO Motors and Clearpath Robotics have an ongoing partnership with Boston Dynamics, and we fully expect to see these AMRs hauling boxes for Stretch in the near future.

For a bit more, we spoke with OTTO CTO Ryan Gariepy via email.

IEEE Spectrum: What are the major differences between today’s OTTO 1500 and the one introduced six years ago, and why did you decide to make those changes?

Ryan Gariepy: Six years isn’t a long shelf life for an industrial product, but it’s a lifetime in the software world. We took the original OTTO 1500 and stripped it down to the chassis and drivetrain, and re-built it with more modern components (embedded controller, state-of-the-art sensors, next-generation lithium batteries, and more). But the biggest difference is in how we’ve integrated our autonomous software and our industrial safety systems. Our systems are safe throughout the entirety of the vehicle dynamics envelope from straight line motion to aggressive turning at speed in tight spaces. It corners at 2m/s and has 60% more throughput. No “simple rectangular” footprints here! On top of this, the entire customization, development, and validation process is done in a way which respects that our integration partners need to be able to take advantage of these capabilities themselves without needing to become experts in vehicle dynamics. 

As for “why now,” we’ve always known that an ecosystem of new sensors and controllers was going to emerge as the world caught on to the potential of heavy-load AMRs. We wanted to give the industry some time to settle out—making sure we had reliable and low-cost 3D sensors, for example, or industrial grade fanless computers which can still mount a reasonable GPU, or modular battery systems which are now built-in view of new certifications requirements. And, possibly most importantly, partners who see the promise of the market enough to accommodate our feedback in their product roadmaps.

How has the reception differed from the original introduction of the OTTO 1500 and the new version?
 
That’s like asking the difference between the public reception to the introduction of the first iPod in 2001 and the first iPhone in 2007. When we introduced our first AMR, very few people had even heard of them, let alone purchased one before. We spent a great deal of time educating the market on the basic functionality of an AMR: What it is and how it works kind of stuff. Today’s buyers are way more sophisticated, experienced, and approach automation from a more strategic perspective. What was once a tactical purchase to plug a hole is now part of a larger automation initiative. And while the next generation of AMRs closely resemble the original models from the outside, the software functionality and integration capabilities are night and day.

What’s the most valuable lesson you’ve learned?

We knew that our customers needed incredible uptime: 365 days, 24/7 for 10 years is the typical expectation. Some of our competitors have AMRs working in facilities where they can go offline for a few minutes or a few hours without any significant repercussions to the workflow. That’s not the case with our customers, where any stoppage at any point means everything shuts down. And, of course, Murphy’s law all but guarantees that it shuts down at 4:00 a.m. on Saturday, Japan Standard Time. So the humbling lesson wasn’t knowing that our customers wanted maintenance service levels with virtually no down time, the humbling part was the degree of difficulty in building out a service organization as rapidly as we rolled out customer deployments. Every customer in a new geography needed a local service infrastructure as well. Finally, service doesn’t mean anything without spare parts availability, which brings with it customs and shipping challenges. And, of course, as a Canadian company, we need to build all of that international service and logistics infrastructure right from the beginning. Fortunately, the groundwork we’d laid with Clearpath Robotics served as a good foundation for this.

How were you able to develop a new product with COVID restrictions in place?

We knew we couldn’t take an entire OTTO 1500 and ship it to every engineer’s home that needed to work on one, so we came up with the next best thing. We call it a ‘wall-bot’ and it’s basically a deconstructed 1500 that our engineers can roll into their garage. We were pleasantly surprised with how effective this was, though it might be the heaviest dev kit in the robot world. 

Also don’t forget that much of robotics is software driven. Our software development life cycle had already had a strong focus on Gazebo-based simulation for years due to it being unfeasible to give every in-office developer a multi-ton loaded robot to play with, and we’d already had a redundant VPN setup for the office. Finally, we’ve always been a remote-work-friendly culture ever since we started adopting telepresence robots and default-on videoconferencing in the pre-OTTO days. In retrospect, it seems like the largest area of improvement for us for the future is how quickly we could get people good home office setups while amid a pandemic.

Kate Darling is an expert on human robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives. 

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant. 

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.

iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks Robovie!

Earlier today, at about 11am Mars time, the Ingenuity Mars Helicopter successfully completed its very first flight on Mars. The little helicopter, which is about the size of a box of tissues, did exactly what it was supposed to do, ascending vertically to 3 meters, hovering for 30 seconds, pivoting towards the Perseverance rover, and then landing again, for a total flight time of about 40 seconds.

With this flight, Ingenuity’s mission is officially a success, opening up the skies of Mars to autonomous robots that can explore farther, faster than ever before.

What data has the helicopter sent back to Earth so far?

The first data products to make it back confirmed that Ingenuity is safe and healthy, which was the most important thing. As far as the actual flight went, the helicopter initially sent back confirmations of each of its flight phases, including an altimeter plot, showing that it started its mission on the ground, ascended, hovered, descended, and ended its flight in good enough shape to transmit back to Earth via Perseverance as a relay. 

Screenshot: NASA TV Data showing the flight trajectory from Ingenuity.
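If you’re curious how a phase-by-phase confirmation like that can be pulled out of an altitude time series, here’s a rough sketch in Python. It’s purely illustrative, with made-up thresholds and a made-up profile, and has nothing to do with JPL’s actual telemetry pipeline:

# Hypothetical sketch: labeling flight phases from an altimeter time series.
# The thresholds and the profile below are invented, not JPL telemetry.

def label_phases(samples, climb_rate=0.2, ground_altitude=0.5):
    """samples: time-ordered list of (time_s, altitude_m) tuples."""
    phases = []
    for (t0, a0), (t1, a1) in zip(samples, samples[1:]):
        rate = (a1 - a0) / (t1 - t0)      # vertical speed, m/s
        if rate > climb_rate:
            phase = "ascent"
        elif rate < -climb_rate:
            phase = "descent"
        elif a1 > ground_altitude:
            phase = "hover"
        else:
            phase = "on ground"
        phases.append((t1, phase))
    return phases

# A profile loosely shaped like the first flight: climb to ~3 m, hover, land.
profile = [(0, 0.0), (3, 3.0), (18, 3.0), (33, 3.0), (36, 0.0), (40, 0.0)]
for t, phase in label_phases(profile):
    print(f"t = {t:>2} s: {phase}")

A real pipeline obviously checks far more than altitude, but an altitude trace alone is enough to confirm the basic ascend, hover, descend sequence.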

We’ve also seen the first picture from Ingenuity’s downward-facing navigation camera, along with a few frames of animation from Perseverance showing the flight itself.

Screen Capture: NASA TV Ingenuity’s first flight as seen from the Perseverance rover, about 100 meters away. These are still frames that are stitched together to make a video, which is why the flight looks short.

When will there be more pictures and video?

More data should be arriving back at Earth over the course of the day today.

Wait, wasn’t this supposed to have happened a week ago?

The first flight attempt was originally scheduled for April 12, but on April 9, a high-speed spin test revealed a command sequencing issue that JPL needed some extra time to diagnose and fix. The workaround JPL chose worked about 85 percent of the time and failed safely when it didn’t, which was good enough for today’s attempt.

What does Ingenuity do next?

The clock is ticking on Ingenuity’s 30 day mission window, so there will be a lot more happening over the next few weeks. Here’s JPL’s tentative plan for the next several flights:

Flight Test No. 2 could be expanded to include climbing to 16 feet (5 meters) and then flying horizontally for a few feet (meters), flying horizontally back to descend, and landing within the airfield. Total flight time could be up to 90 seconds. Images from the helicopter’s navigation camera will later be used by project team members on Earth to evaluate the helicopter’s navigation performance.

If the second experimental test flight is a success, the goals of Flight Test No. 3 could be expanded to test the helicopter’s ability to fly farther and faster–up to 160 feet (50 meters) from the airfield and then return. Total flight time could be up to 90 seconds.

If the project timeline allows for Flight Tests No. 4 and 5, the goals and flight plans will be based on data returned from the first three tests. The flights could further explore Ingenuity’s aerial capabilities, including flying at a time of day where higher winds are expected and traveling farther downrange with more changes in altitude, heading, and airspeed.

Photo: NASA/JPL-Caltech/ASU A photo of Ingenuity taken by Perseverance after the helicopter's pre-flight rotor spin test.

[ Mars 2020 ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event] RoboCup 2021 – June 22-28, 2021 – [Online Event] DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

Researchers from the Biorobotics Lab in the School of Computer Science’s Robotics Institute at Carnegie Mellon University tested the hardened underwater modular robot snake (HUMRS) last month in the pool, diving the robot through underwater hoops, showing off its precise and smooth swimming, and demonstrating its ease of control.

The robot's modular design allows it to adapt to different tasks, whether squeezing through tight spaces under rubble, climbing up a tree or slithering around a corner underwater. For the underwater robot snake, the team used existing watertight modules that allow the robot to operate in bad conditions. They then added new modules containing the turbines and thrusters needed to maneuver the robot underwater.

[ CMU ]

Robots are learning how not to fall over after stepping on your foot and kicking you in the shin.

[ B-Human ]

Like boot prints on the Moon, NASA's OSIRIS-REx spacecraft left its mark on asteroid Bennu. Now, new images—taken during the spacecraft's final fly-over on April 7, 2021—reveal the aftermath of the historic Touch-and-Go (TAG) sample acquisition event from Oct. 20, 2020.

[ NASA ]

In recognition of National Robotics Week, Conan O'Brien thanks one of the robots that works for him.

[ YouTube ]

The latest from Wandercraft's self-balancing Atalante exo.

[ Wandercraft ]

Stocking supermarket shelves is one of those things that's much more difficult than it looks for robots, involving in-hand manipulation, motion planning, vision, and tactile sensing. Easy for humans, but robots are getting better.

[ Article ]

Thanks Marco!

Draganfly drone spraying Varigard disinfectant at the Smoothie King stadium. Our drone sanitization spraying technology is up to 100% more efficient and effective than conventional manual spray sterilization processes.

[ Draganfly ]

Baubot is a mobile construction robot that can do pretty much everything, apparently.

I’m pretty skeptical of robots like these, especially ones that bill themselves as platforms that can be monetized by third-party developers. From what we've seen, the most successful robots instead focus on doing one thing very well.

[ Baubot ]

In this demo, a remote operator sends an unmanned ground vehicle on an autonomous inspection mission via Clearpath’s web-based Outdoor Navigation Software.

[ Clearpath ]

Aurora’s Odysseus aircraft is a high-altitude pseudo-satellite that can change how we use the sky. At a fraction of the cost of a satellite and powered by the sun, Odysseus offers vast new possibilities for those who need to stay connected and informed.

[ Aurora ]

This video from 1999 discusses the soccer robot research activities at Carnegie Mellon University. CMUnited, the team of robots developed by Manuela Veloso and her students, won the small-size competition in both 1997 and 1998.

[ CMU ]

Thanks Fan!

This video presents an overview of our participation in the DARPA Subterranean Challenge, with a focus on the Urban Circuit, which took place Feb. 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ Norlab ]

In today’s most advanced warehouses, Magazino’s autonomous robot TORU works side by side with human colleagues. The robot is specialized in picking, transporting, and stowing objects like shoe boxes in e-commerce warehouses.

[ Magazino ]

A look at the Control Systems Lab at the National Technical University of Athens.

[ CSL ]

Thanks Fan!

Doug Weber of MechE and the Neuroscience Institute discusses his group’s research on harnessing the nervous system's ability to control not only our bodies, but the machines and prostheses that can enhance our bodies, especially for those with disabilities.

[ CMU ]

Mark Yim, Director of the GRASP Lab at UPenn, gives a talk on “Is Cost Effective Robotics Interesting?” Yes, yes it is.

Robotic technologies have shown the capability to do amazing things. But many of those things are too expensive to be useful in any real sense. Cost reduction has often been shunned by research engineers and scientists in academia as “just engineering.” For robotics to make a larger impact on society the cost problem must be addressed.

[ CMU ]

There are all kinds of “killer robots” debates going on, but if you want an informed, grounded, nuanced take on AI and the future of war-fighting, you want to be watching debates like these instead. Professor Rebecca Crootof speaks with Brigadier General Patrick Huston, Assistant Judge Advocate General for Military Law and Operations, at Duke Law School's 26th Annual National Security Law conference.

[ Lawfire ]

This week’s Lockheed Martin Robotics Seminar is by Julie Adams from Oregon State, on “Human-Collective Teams: Algorithms, Transparency, and Resilience.”

Biological inspiration for artificial systems abounds. The science to support robotic collectives continues to emerge based on their biological inspirations, spatial swarms (e.g., fish and starlings) and colonies (e.g., honeybees and ants). Developing effective human-collective teams requires focusing on all aspects of the integrated system development. Many of these fundamental aspects have been developed independently, but our focus is an integrated development process for these complex research questions. This presentation will focus on three aspects: algorithms, transparency, and resilience for collectives.

[ UMD ]

Human-robot interaction goes both ways. You’ve got robots understanding (or attempting to understand) humans, as well as humans understanding (or attempting to understand) robots. Humans, in my experience, are virtually impossible to understand even under the best of circumstances. But going the other way, robots have all kinds of communication tools at their disposal. Lights, sounds, screens, haptics—there are lots of options. That doesn’t mean that robot to human (RtH) communication is easy, though, because the ideal communication modality is something that is low cost and low complexity while also being understandable to almost anyone.

One good option for something like a collaborative robot arm can be to use human-inspired gestures (since it doesn’t require any additional hardware), although it’s important to be careful when you start having robots doing human stuff, because it can set unreasonable expectations if people think of the robot in human terms. In order to get around this, roboticists from Aachen University are experimenting with animal-like gestures for cobots instead, modeled after the behavior of puppies. Puppies!

For robots that are low-cost and appearance-constrained, animal-inspired (zoomorphic) gestures can be highly effective at state communication. We know this because of tails on Roombas:

While this is an adorable experiment, adding tails to industrial cobots is probably not going to happen. That’s too bad, because humans have an intuitive understanding of dog gestures, and this extends even to people who aren’t dog owners. But tails aren’t necessary for something to display dog gestures; it turns out that you can do it with a standard robot arm:

In a recent preprint in IEEE Robotics and Automation Letters (RA-L), first author Vanessa Sauer used puppies to inspire a series of communicative gestures for a Franka Emika Panda arm. Specifically, the arm was to be used in a collaborative assembly task, and needed to communicate five states to the human user: greeting the user, prompting the user to take a part, waiting for a new command, flagging an error when a parts container was empty, and shutting down. From the paper:

For each use case, we mirrored the intention of the robot (e.g., prompting the user to take a part) to an intention a dog may have (e.g., encouraging the owner to play). In a second step, we collected gestures that dogs use to express the respective intention by leveraging real-life interaction with dogs, online videos, and literature. We then translated the dog gestures into three distinct zoomorphic gestures by jointly applying the following guidelines inspired by:

  • Mimicry. We mimic specific dog behavior and body language to communicate robot states.
  • Exploiting structural similarities. Although the cobot is functionally designed, we exploit certain components to make the gestures more “dog-like,” e.g., the camera corresponds to the dog’s eyes, or the end-effector corresponds to the dog’s snout.
  • Natural flow. We use kinesthetic teaching and record a full trajectory to allow natural and flowing movements with increased animacy.
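To make the mechanics concrete, here’s a loose sketch of how a state-to-gesture mapping like this might be wired up. This is hypothetical code, not the authors’ implementation: the joint waypoints are placeholders for trajectories recorded through kinesthetic teaching, and send_joint_positions stands in for whatever motion interface a particular cobot exposes.

# Hypothetical sketch of state-to-gesture dispatch for a cobot arm.
# The waypoints are placeholders for gestures recorded by physically
# guiding the arm (kinesthetic teaching); send_joint_positions is an
# assumed stand-in for a real arm's motion API.

import time

# Each gesture: a list of (duration_s, joint_positions) waypoints.
GESTURES = {
    "greeting":         [(1.0, [0.0, -0.4, 0.0, -1.6, 0.0, 1.2, 0.0]),
                         (0.5, [0.3, -0.4, 0.0, -1.6, 0.0, 1.2, 0.6])],
    "prompt_take_part": [(0.8, [0.0, 0.2, 0.0, -1.0, 0.0, 1.4, 0.0])],
    "waiting":          [(1.5, [0.0, -0.8, 0.0, -2.0, 0.0, 1.0, 0.0])],
    "container_empty":  [(0.6, [0.4, -0.6, 0.0, -1.8, 0.0, 1.1, 0.0]),
                         (0.6, [-0.4, -0.6, 0.0, -1.8, 0.0, 1.1, 0.0])],
    "shutting_down":    [(2.0, [0.0, -1.2, 0.0, -2.4, 0.0, 0.6, 0.0])],
}

def send_joint_positions(q):
    # Placeholder: a real deployment would call the arm's motion API here.
    print("moving to", q)

def play_gesture(state):
    """Replay the recorded zoomorphic gesture for a given robot state."""
    for duration, waypoint in GESTURES[state]:
        send_joint_positions(waypoint)
        time.sleep(duration)

play_gesture("container_empty")   # e.g., signal that the parts bin is empty

The interesting design work is entirely in the recorded trajectories themselves; the dispatch layer is trivial, which is part of what makes the approach appealing for low-cost cobots.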

A user study comparing the zoomorphic gestures to a more conventional light display for state communication during the assembly task showed that the zoomorphic gestures were easily recognized by participants as dog-like, even if the participants weren’t dog people. And the zoomorphic gestures were also more intuitively understood than the light displays, although the classification of each gesture wasn’t perfect. People also preferred the zoomorphic gestures over more abstract gestures designed to communicate the same concept. Or as the paper puts it, “Zoomorphic gestures are significantly more attractive and intuitive and provide more joy when using.” An online version of the study is here, so give it a try and provide yourself with some joy.

While zoomorphic gestures (at least in this very preliminary research) aren’t nearly as accurate at state communication as using something like a screen, they’re appealing because they’re compelling, easy to understand, inexpensive to implement, and less restrictive than sounds or screens. And there’s no reason why you can’t use both!

For a few more details, we spoke with the first author on this paper, Vanessa Sauer. 

IEEE Spectrum: Where did you get the idea for this research from, and why do you think it hasn't been more widely studied or applied in the context of practical cobots?

Vanessa Sauer: I'm a total dog person. During a conversation about dogs and how their ways of communicating with their owner have evolved over time (e.g., more expressive face, easy to understand even without owning a dog), I got the rough idea for my research. I was curious to see if this intuitive understanding many people have of dog behavior could also be applied to cobots that communicate in a similar way. Especially in social robotics, approaches utilizing zoomorphic gestures have been explored. I guess due to the playful nature, less research and applications have been done in the context of industry robots, as they often have a stronger focus on efficiency.

How complex of a concept can be communicated in this way?

In our “proof-of-concept” style approach, we used rather basic robot states to be communicated. The challenge with more complex robot states would be to find intuitive parallels in dog behavior. Nonetheless, I believe that more complex states can also be communicated with dog-inspired gestures.

How would you like to see your research be put into practice?

I would enjoy seeing zoomorphic gestures offered as modality-option on cobots, especially cobots used in industry. I think that could have the potential to reduce inhibitions towards collaborating with robots and make the interaction more fun.

Photos, Robots: Franka Emika; Dogs: iStockphoto

“Zoomorphic Gestures for Communicating Cobot States,” by Vanessa Sauer, Axel Sauer, and Alexander Mertens from Aachen University and TUM, will be published in RA-L.

Today at ProMat, a company called Pickle Robots is announcing Dill, a robot that can unload boxes from the back of a trailer at places like ecommerce fulfillment warehouses at very high speeds. With a peak box unloading rate of 1800 boxes per hour and a payload of up to 25 kg, Dill can substantially outperform even an expert human, and it can keep going pretty much forever as long as you have it plugged into the wall. 

Pickle Robots says that Dill’s approach to the box unloading task is unique in a couple of ways. First, it can handle messy trailers filled with a jumble of boxes of different shapes, colors, sizes, and weights. And second, from the get-go it’s intended to work under human supervision, relying on people to step in and handle edge cases.

Pickle’s “Dill” robot is based around a Kuka arm with up to 30 kg of payload. It uses two Intel L515s (Lidar-based RGB-D cameras) for box detection. The system is mounted on a wheeled base, and after getting positioned at the back of a trailer by a human operator, it’ll crawl forward by itself as it picks its way into the trailer. We’re told that the rate at which the robot can shift boxes averages 1600 per hour, with a peak speed closer to 1800 boxes per hour. A single human in top form can move about 800 boxes per hour, so Dill is very, very fast. In the video, you can see the robot slow down on some packages, and Pickle CEO Andrew Meyer says that’s because “we probably have a tenuous grasp on that package. As we continue to improve the gripper, we will be able to keep the speed up on more cycles.”

While the video shows Dill operating at speed autonomously, the company says it’s designed to function under human supervision. From the press release: “To maintain these speeds, Dill needs people to supervise the operation and lend an occasional helping hand, stepping in every so often to pick up any dropped packages and handle irregular items.” Typically, Meyer says, that means one person for every five robots, depending on the use case. Even if you have only one robot, someone still needs to keep an eye on it, though the supervisor isn’t occupied with that task full-time and can be doing something else while the robot works. The longer a human takes to respond to the robot’s issues, however, the slower its effective speed will be. Typically, the company says, a human will need to help out the robot once every five minutes when it’s doing something particularly complex. But even in situations with lots of hard-to-handle boxes resulting in relatively low efficiency, Meyer says that users can expect speeds exceeding 1000 boxes per hour.
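For a rough sense of how supervision eats into throughput, here’s a back-of-the-envelope sketch. The assumptions are mine, not Pickle’s published math: the robot averages 1,600 boxes per hour while running, pauses once every five minutes, and sits idle until a human responds.

# Back-of-the-envelope sketch (assumptions mine, not Pickle's): a robot that
# averages 1,600 boxes/hour while running, stops once every 5 minutes, and
# waits idle for a human, for a range of human response times.

PICK_RATE = 1600 / 3600.0     # boxes per second while running
WORK_INTERVAL = 5 * 60        # seconds of picking between interventions

for response_s in (0, 10, 30, 60, 120):
    cycle = WORK_INTERVAL + response_s              # work plus waiting
    boxes_per_cycle = PICK_RATE * WORK_INTERVAL
    effective_rate = boxes_per_cycle * 3600 / cycle
    print(f"response {response_s:>3} s -> ~{effective_rate:.0f} boxes/hour")

Even with a two-minute response time, this toy model stays above 1,100 boxes per hour, which is at least consistent with Meyer’s claim that low-efficiency situations should still exceed 1,000.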

Photo: Pickle Robots Pickle Robots’ gripper, which includes a high contact area suction system and a retractable plate to help the robot quickly flip boxes.

From Pickle Robots’ video, it’s fairly obvious that the comparison that Pickle wants you to make is to Boston Dynamics’ Stretch robot, which has a peak box moving rate of 800 boxes per hour. Yes, Pickle’s robot is twice as fast. But it’s also a unitasker, designed to unload boxes from trucks, and that’s it. Focusing on a very specific problem is a good approach for robots, because then you can design a robot that does an excellent job of solving that problem, which is what Pickle has done. Boston Dynamics has chosen a different route with Stretch, which is to build a robot that has the potential to do many other warehouse tasks, although not nearly as optimally.

The other big difference between Boston Dynamics and Pickle is, of course, that Boston Dynamics is focusing on autonomy. Meanwhile, Pickle, Meyer says in a press release, “resisted the fool’s errand of trying to create a system that could work entirely unsupervised.” Personally, I disagree that trying to create a system that could work entirely unsupervised is a fool’s errand. Approaching practical commercial robotics (in any context) from a perspective of requiring complete unsupervised autonomy is generally not practical right now outside of highly structured environments. But many companies do have goals that include unsupervised operation while still acknowledging that occasionally their robots will need a human to step in and help. In fact, these companies are (generally) doing exactly what Pickle is doing in practice: they’re deploying robots with the goal of fully unsupervised autonomy, while keeping humans available as they work their way towards that goal. The difference, perhaps, is philosophical—some companies see unsupervised operation as the future of robotics in these specific contexts, while Pickle does not. We asked Meyer about why this is. He replied:

Some problems are hardware-related and not likely to yield an automated solution anytime soon. For example, the gripper is physically incapable of grasping some objects, like car tires, no matter what intelligence the robot has. A part might start to wear out, like a spring on the gripper, and the gripper can behave unpredictably. Things can be too heavy. A sensor might get knocked out of place, dust might get on the camera lens. Or an already damaged package falls apart when you pick it up, and dumps its contents on the ground.

Other problems can go away over time as the algorithms learn and the engineers innovate in small ways. For example, learning not to pick packages that will cause a bunch more to fall down, learning to approach boxes in the corner from the side, or—and this was a real issue in production for a couple days—learning to avoid picking directly on labels where they might peel off from suction.

Machine learning algorithms, on both the perception and action sides of the story, are critical ingredients for making any of this work. However, even with them your engineering team still has to do a lot of problem solving wherever the AI is struggling. At some point you run out of engineering resources to solve all these problems in the long tail. When we talk about problems that require AI algorithms as capable as people are, we mean ones where the target on the reliability curve (99.99999% in the case of self driving, for example) is out of reach in this way. I think the big lesson from self-driving cars is that chasing that long tail of edge cases is really, really hard. We realized that in the loading dock, you can still deliver tremendous value to the customer even if you assume you can only handle 98% of the cases.  

These long-tail problems are everywhere in robotics, but again, some people believe that levels of reliability that are usable for unsupervised operation (at least in some specific contexts) are more near-term achievable than others do. In Pickle’s case, emphasizing human supervision means that they may be able to deploy faster and more reliably and at lower cost and with higher performance—we’ll just have to see how long it takes for other companies to come through with robots that are able to do the same tasks without human supervision.

Photo: Pickle Robots Pickle Robots is also working on other high-speed package-sorting systems.

We asked Meyer how much Dill costs, and to our surprise, he gave us a candid answer: Depending on the configuration, the system can cost anywhere from $50-100k to deploy and about that same amount per year to operate. Meyer points out that you can’t really compare the robot to a human (or humans) simply on speed, since with the robot, you don’t have to worry about injuries or improper sorting of packages or training or turnover. While Pickle is currently working on several other configurations of robots for package handling, this particular truck unloading configuration will be shipping to customers next year.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference] ICRA 2021 – May 30-June 5, 2021 – Xi'an, China RoboCup 2021 – June 22-28, 2021 – [Online Event] DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages the recent development in differentiable simulation coupled with a differentiable, analytical hydrodynamic model to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]

Rainbow Robotics, the company that made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, the team of the Laboratory of Underwater Robotics and Control Systems and the Research and Production Association “Android Technika” performed tests of an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on their fully automated and enclosed gene testing machine QVS-96S. The system includes two ABB robots and carries out 1800 tests per day, operating 24/7. Every hour, 96 virus sample tests are performed with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

It takes very, very high-performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator, automated and customized by the Robotic Systems Lab, and an IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure, which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming... and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

[ UMD ]

On April 11, the Mars helicopter Ingenuity will take to the skies of Mars for the first time. It will do so fully autonomously, out of necessity—the time delay between Ingenuity’s pilots at the Jet Propulsion Laboratory and Jezero Crater on Mars makes manual or even supervisory control impossible. So the best that the folks at JPL can do is practice as much as they can in simulation, and then hope that the helicopter can handle everything on its own.

Here on Earth, simulation is a critical tool for many robotics applications, because it doesn’t rely on access to expensive hardware, is non-destructive, and can be run in parallel and at faster-than-real-time speeds to focus on solving specific problems. Once you think you’ve gotten everything figured out in simulation, you can always give it a try on the real robot and see how close you came. If it works in real life, great! And if not, well, you can tweak some stuff in the simulation and try again.

For the Mars helicopter, simulation is much more important, and much higher stakes. Testing the Mars helicopter under conditions matching what it’ll find on Mars is not physically possible on Earth. JPL has flown engineering models in Martian atmospheric conditions, and they’ve used an actuated tether to mimic Mars gravity, but there’s just no way to know what it’ll be like flying on Mars until they’ve actually flown on Mars. With that in mind, the Ingenuity team has been relying heavily on simulation, since that’s one of the best tools they have to prepare for their Martian flights. We talk with Ingenuity’s Chief Pilot, Håvard Grip, to learn how it all works.

Ingenuity Facts:

Body Size: a box of tissues

Brains: Qualcomm Snapdragon 801

Weight: 1.8 kilograms

Propulsion: Two 1.2m carbon fiber rotors

Navigation sensors: VGA camera, laser altimeter, inclinometer

Ingenuity is scheduled to make its first flight no earlier than April 11. Before liftoff, the Ingenuity team will conduct a variety of pre-flight checks, including verifying the responsiveness of the control system and spinning the blades up to full speed (2,537 rpm) without lifting off. If everything looks good, the first flight will consist of a 1 meter per second climb to 3 meters, 30 seconds of hover at 3 meters while rotating in place a bit, and then a descent to landing. If Ingenuity pulls this off, its entire mission will have been a success. There will be more flights over the next few weeks, but all it takes is one to prove that autonomous helicopter flight on Mars is possible.

Last month, we spoke with Mars Helicopter Operations Lead Tim Canham about Ingenuity’s hardware, software, and autonomy, but we wanted to know more about how the Ingenuity team has been using simulation for everything from vehicle design to flight planning. To answer our questions, we talked with JPL’s Håvard Grip, who led the development of Ingenuity’s navigation and flight control systems. Grip also has the title of Ingenuity Chief Pilot, which is pretty awesome. He summarizes this role as “operating the flight control system to make the helicopter do what we want it to do.”

IEEE Spectrum: Can you tell me about the simulation environment that JPL uses for Ingenuity’s flight planning?

Håvard Grip: We developed a Mars helicopter simulation ourselves at JPL, based on a multi-body simulation framework that’s also developed at JPL, called DARTS/DSHELL. That's a system that has been in development at JPL for about 30 years now, and it's been used in a number of missions. And so we took that multibody simulation framework, and based on it we built our own Mars helicopter simulation, put together our own rotor model, our own aerodynamics models, and everything else that's needed in order to simulate a helicopter. We also had a lot of help from the rotorcraft experts at NASA Ames and NASA Langley.

Image: NASA/JPL Ingenuity in JPL’s flight simulator.

Without being able to test on Mars, how much validation are you able to do of what you’re seeing in simulation?

We can do a fair amount, but it requires a lot of planning. When we made our first real prototype (with a full-size rotor that looked like what we were thinking of putting on Mars) we first spent a lot of time designing it and using simulation tools to guide that design, and when we were sufficiently confident that we were close enough, and that we understood enough about it, then we actually built the thing and designed a whole suite of tests in a vacuum chamber where we could replicate Mars atmospheric conditions. And those tests were before we tried to fly the helicopter—they were specifically targeted at what we call system identification, which has to do with figuring out what the true properties, the true dynamics of a system are, compared to what we assumed in our models. So then we got to see how well our models did, and in the places where they needed adjustment, we could go back and do that.

The simulation work that we really started after that very first initial lift test, that’s what allowed us to unlock all of the secrets to building a helicopter that can fly on Mars. —Håvard Grip, Ingenuity Chief Pilot

We did a lot of this kind of testing. It was a big campaign, in several stages. But there are of course things that you can't fully replicate, and you do depend on simulation to tie things together. For example, we can't truly replicate Martian gravity on Earth. We can replicate the atmosphere, but not the gravity, and so we have to do various things when we fly—either make the helicopter very light, or we have to help it a little bit by pulling up on it with a string to offload some of the weight. These things don't fully replicate what it will be like on Mars. We also can't simultaneously replicate the Mars aerodynamic environment and the physical and visual surroundings that the helicopter will be flying in. These are places where simulation tools definitely come in handy, with the ability to do full flight tests from A to B, with the helicopter taking off from the ground, running the flight software that it will be running on board, simulating the images that the navigation camera takes of the ground below as it flies, feeding that back into the flight software, and then controlling it.
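In code terms, the system identification step Grip describes boils down to adjusting model parameters until the model’s predicted response matches what the hardware actually did in the chamber. Here’s a deliberately tiny, hypothetical illustration of the idea, fitting a single damping coefficient to synthetic “measured” decay data; nothing here comes from JPL’s test campaign.

# Minimal system-identification sketch (illustrative numbers only):
# fit a damping coefficient so a simple model matches "measured" decay data.

import math

def model_amplitude(t, damping):
    # Assumed model: exponentially decaying oscillation envelope, A(t) = exp(-damping * t)
    return math.exp(-damping * t)

# Pretend these came from a chamber test (synthetic data, true damping ~0.3).
measured = [(t, math.exp(-0.3 * t) * (1 + 0.02 * ((-1) ** t))) for t in range(10)]

def fit_damping(data, candidates):
    """Pick the candidate damping value with the smallest squared error."""
    def squared_error(d):
        return sum((a - model_amplitude(t, d)) ** 2 for t, a in data)
    return min(candidates, key=squared_error)

candidates = [c / 100.0 for c in range(1, 101)]   # 0.01 ... 1.00
print("identified damping:", fit_damping(measured, candidates))   # lands near 0.3

The real campaign does this across many parameters and many tests at once, but the loop is the same: predict, compare against the chamber data, and adjust the model where it disagrees.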

To what extent can simulation really compensate for the kinds of physical testing that you can’t do on Earth?

It gives you a few different possibilities. We can take certain tests on Earth where we replicate key elements of the environment, like the atmosphere or the visual surroundings for example, and you can validate your simulation on those parameters that you can test on Earth. Then, you can combine those things in simulation, which gives you the ability to set up arbitrary scenarios and do lots and lots of tests. We can Monte Carlo things, we can do a flight a thousand times in a row, with small perturbations of various parameters and tease out what our sensitivities are to those things. And those are the kinds of things that you can't do with physical tests, both because you can't fully replicate the environment and also because of the resources that would be required to do the same thing a thousand times in a row.

Because there are limits to the physical testing we can do on Earth, there are elements where we know there's more uncertainty. On those aspects where the uncertainty is high, we tried to build in enough margin that we can handle a range of things. And simulation gives you the ability to then maybe play with those parameters, and put them at their outer limits, and test them beyond where the real parameters are going to be to make sure that you have robustness even in those extreme cases.
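As a toy version of the Monte Carlo campaigns Grip describes, the loop below perturbs a few parameters and reruns the “same” flight a thousand times. The one-line flight model is a stand-in I made up; it bears no resemblance to JPL’s DARTS/DSHELL simulation, but the shape of the experiment is the point.

# Toy Monte Carlo sketch: run the "same" flight many times with small random
# perturbations and look at the spread of outcomes. The one-line flight model
# is a placeholder, nothing like JPL's DARTS/DSHELL-based simulation.

import random
import statistics

def simulate_landing_error(wind_gust, sensor_bias, mass_error):
    # Placeholder dynamics: landing error (meters) grows with disturbances.
    return abs(2.0 * wind_gust + 1.5 * sensor_bias + 0.8 * mass_error)

random.seed(0)
errors = []
for _ in range(1000):
    errors.append(simulate_landing_error(
        wind_gust=random.gauss(0.0, 0.3),     # m/s of gust perturbation
        sensor_bias=random.gauss(0.0, 0.1),   # meters of altimeter bias
        mass_error=random.gauss(0.0, 0.05),   # kg of mass uncertainty
    ))

print(f"mean landing error: {statistics.mean(errors):.2f} m")
print(f"95th percentile:    {statistics.quantiles(errors, n=20)[-1]:.2f} m")

The useful output isn’t any single run; it’s the spread, which tells you how sensitive the system is to the parameters you perturbed and where the margins need to go.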

How do you make sure you’re not relying on simulation too much, especially since in some ways it’s your only option?

It’s about anchoring it in real data, and we’ve done a lot of that with our physical testing. I think what you’re referring to is making your simulation too perfect, and we’re careful to model the things that matter. For example, the simulated sensors that we use have realistic levels of simulated noise and bias in them, the navigation camera images have realistic levels of degradation, we have realistic disturbances from wind gusts. If you don’t properly account for those things, then you’re missing important details. So, we try to be as accurate as we can, and to capture that by overbounding in areas where we have a high degree of uncertainty.

What kinds of simulated challenges have you put the Mars helicopter through, and how do you decide how far to push those challenges?

One example is that we can simulate going over rougher terrain. We can push that, and see how far we can go and still have the helicopter behave the way that we want it to. Or we can inject levels of noise that maybe the real sensors don't see, but you want to just see how far you can push things and make sure that it's still robust.

Where we put the limits on this and what we consider to be realistic is often a challenge. We consider this on a case by case basis—if you have a sensor that you're dealing with, you try to do testing with it to characterize it and understand its performance as much as possible, and you build a level of confidence in it that allows you to find the proper balance.

When it comes to things like terrain roughness, it's a little bit of a different thing, because we're actually picking where we're flying the helicopter. We have made that choice, and we know what the terrain looks like around us, so we don’t have to wonder about that anymore. 

Image: NASA/JPL-Caltech/University of Arizona Satellite image of the Ingenuity flight area.

The way that we’re trying to approach this operationally is that we should be done with the engineering at this point. We’re not depending on going back and resimulating things, other than a few checks here and there. 

Are there any examples of things you learned as part of the simulation process that resulted in changes to the hardware or mission?

You know, it’s been a journey. One of the early things that we discovered as part of modeling the helicopter was that the rotor dynamics were quite different for a helicopter on Mars, in particular with respect to how the rotor responds to the up and down bending of the blades because they’re not perfectly rigid. That motion is a very important influence on the overall flight dynamics of the helicopter, and what we discovered as we started modeling was that this motion is damped much less on Mars. Under-damped oscillatory things like that, you kind of figure might pose a control issue, and that is the case here: if you just naively design it as you might a helicopter on Earth, without taking this into account, you could have a system where the response to control inputs becomes very sluggish. So that required changes to the vehicle design from some of the very early concepts, and it led us to make a rotor that’s extremely light and rigid.

The design cycle for the Mars helicopter—it’s not like we could just build something and take it out to the back yard and try it and then come back and tweak it if it doesn’t work. It’s a much bigger effort to build something and develop a test program where you have to use a vacuum chamber to test it. So you really want to get as close as possible up front, on your first iteration, and not have to go back to the drawing board on the basic things.
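The control headache Grip describes with an under-damped rotor mode shows up in even a toy second-order system: cut the damping and the same step input takes far longer to settle. The numbers below are purely illustrative and have nothing to do with Ingenuity’s actual rotor parameters.

# Toy illustration of why low damping is a control problem: 2% settling time
# of a second-order system step response at two damping ratios.
# All parameters are illustrative only.

def settling_time(zeta, omega_n=10.0, dt=0.001, t_end=10.0, band=0.02):
    """Time for a unit step response to stay within +/- 2% of its final value."""
    x, v = 0.0, 0.0
    last_outside = 0.0
    for i in range(int(t_end / dt)):
        # x'' + 2*zeta*omega_n*x' + omega_n^2*x = omega_n^2 * u, with u = 1
        a = omega_n ** 2 * (1.0 - x) - 2.0 * zeta * omega_n * v
        v += a * dt
        x += v * dt
        if abs(x - 1.0) > band:
            last_outside = i * dt
    return last_outside

for zeta in (0.7, 0.05):   # well-damped vs. lightly damped
    print(f"damping ratio {zeta}: settles in ~{settling_time(zeta):.2f} s")

With a damping ratio of 0.7 the toy system settles in a fraction of a second; at 0.05 it rings for several seconds, which is exactly the kind of sluggish, oscillatory response you don’t want between a control input and the vehicle’s reaction.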

So how close were you able to get on your first iteration of the helicopter design?

[This video shows] a very early demo which was done more or less just assuming that things were going to behave as they would on Earth, and that we’d be able to fly in a Martian atmosphere just spinning the rotor faster and having a very light helicopter. We were basically just trying to demonstrate that we could produce enough lift. You can see the helicopter hopping around, with someone trying to joystick it, but it turned out to be very hard to control. This was prior to doing any of the modeling that I talked about earlier. But once we started seriously focusing on the modeling and simulation, we then went on to build a prototype vehicle which had a full-size rotor that’s very close to the rotor that will be flying on Mars. One difference is that prototype had cyclic control only on the lower rotor, and later we added cyclic control on the upper rotor as well, and that decision was informed in large part by the work we did in simulation—we’d put in the kinds of disturbances that we thought we might see on Mars, and decided that we needed to have the extra control authority. 

How much room do you think there is for improvement in simulation, and how could that help you in the future?

The tools that we have were definitely sufficient for doing the job that we needed to do in terms of building a helicopter that can fly on Mars. But simulation is a compute-intensive thing, and so I think there’s definitely room for higher fidelity simulation if you have the compute power to do so. For a future Mars helicopter, you could get some benefits by more closely coupling together high-fidelity aerodynamic models with larger multi-body models, and doing that in a fast way, where you can iterate quickly. There’s certainly more potential for optimizing things.

Photo: NASA/JPL-Caltech Ingenuity preparing for flight.

Watching Ingenuity’s first flight take place will likely be much like watching the Perseverance landing—we’ll be able to follow along with the Ingenuity team while they send commands to the helicopter and receive data back, although the time delay will mean that any kind of direct control won’t be possible. If everything goes the way it’s supposed to, there will hopefully be some preliminary telemetry from Ingenuity saying so, but it sounds like we’ll likely have to wait until April 12 before we get pictures or video of the flight itself.

Because Mars doesn’t care what time it is on Earth, the flight will actually be taking place very early on April 12, with the JPL Mission Control livestream starting at 3:30 a.m. EDT (12:30 a.m. PDT). Details are here.

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kind of things that really forced the teams to have to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are. 

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum Two SubT Teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. I think they were operating not only within the bounds of the rules, which permitted such a thing as throwable sensors where you could stand at the line and see how far you could chuck these things—not only was that acceptable by the rules, but anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible. 

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities—cavers clubs, speleologists that conduct research, and then of course the cave rescue community. The single biggest takeaway is that every cave, and there are tens of thousands of them in the US alone, every cave has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the systems track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well, did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.

Photo: Team CSIRO Data 61

Given how diverse caves are, how can Systems Track teams learn from their testing in whatever cave is local to them and effectively apply that to the cave environment that will be part of the final?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced. 

And I think that's kind of the nuance here, that we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance, and perhaps they do that due to their combined use of different technologies or advances in autonomy or perception approaches or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. From the environment previews for Tunnel, Urban, and even Cave, the teams were pivoting around and thinking a little bit differently. The takeaway, however, was that they didn't go to a clean sheet design—their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of those unique lessons learned through all those previous experiences then capitalize on that.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge. 

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we have done it during the circuits, of a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it's the sum of those scores much like we did during the circuits, to help mitigate some of the concerns that you mentioned of having one robot somehow ruin their chances at a prize. 

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge. 

We’ve always thought that when called upon these robots need to operate in really challenging environments, and in the context of real world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field, and those teams that might have something break here and there, that's all part of the challenge, of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly, they just have to work in a way such that the team is resilient enough to tackle anything that they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, and we are eager to welcome new teams to join in along with our existing competitors. I think any dark horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference] ICRA 2021 – May 30-5, 2021 – Xi'an, China DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Festo's Bionic Learning Network for 2021 presents a flock of BionicSwifts.

To execute the flight maneuvers as true to life as possible, the wings are modeled on the plumage of birds. The individual lamellae are made of an ultralight, flexible but very robust foam and lie on top of each other like shingles. Connected to a carbon quill, they are attached to the actual hand and arm wings as in the natural model.

During the wing upstroke, the individual lamellae fan out so that air can flow through the wing. This means that the birds need less force to pull the wing up. During the downstroke, the lamellae close up so that the birds can generate more power to fly. Due to this close-to-nature replica of the wings, the BionicSwifts have a better flight profile than previous wing-beating drives.

[ Festo ]

While we've seen a wide variety of COVID-motivated disinfecting robots, they're usually using either ultraviolet light or a chemical fog. This isn't the way that humans clean—we wipe stuff down, which gets rid of surface dirt and disinfects at the same time. Fraunhofer has been working on a mobile manipulator that can clean in the same ways that we do.

It's quite the technical challenge, but it has the potential to be both more efficient and more effective.

[ Fraunhofer ]

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says. The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view.

[ MIT ]

Ingenuity is now scheduled to fly on April 11.

[ JPL ]

The legendary Zenta is back after a two-year YouTube hiatus with "a kind of freaky furry hexapod bunny creature."

[ Zenta ]

It is with great pride and excitement that the South Australia Police announce a new expansion to their kennel by introducing three new Police Dog (PD) recruits. These dogs have been purposely targeted to bring a whole new range of dog operational capabilities known as the ‘small area urban search and guided evacuation’ dogs. Police have been working closely with specialist vets and dog trainers to ascertain if the lightweight dogs could be transported safely by drones and released into hard-to-access areas where at the moment the larger PDs just simply cannot get in due to their size.

[ SA Police ]

SoftBank may not have Spot cheerleading robots for their baseball team anymore, but they've more than made up for it with a full century of Peppers. And one dude doing the robot.

[ SoftBank ]

MAB Robotics is a Polish company developing walking robots for inspection, and here's a prototype they've been working on.

[ MAB Robotics ]

Thanks Jakub!

DoraNose: Smell your way to a better tomorrow.

[ Dorabot ]

Our robots need to learn how to cope with their new neighbors, and we have just the solution for this: the egg detector! Using cutting-edge AI, it provides incredible precision in detecting a vast variety of eggs. We have deployed this new feature on Boston Dynamics Spot, one of our fleet's robots. It can now detect eggs with its cameras and avoid them on its autonomous missions.

[ Energy Robotics ]

When dropping a squishy robot from an airplane 1,000 feet up, make sure that you land as close to people's cars as you can.

Now do it from orbit!

[ Squishy Robotics ]

An autonomous robot that is able to physically guide humans through narrow and cluttered spaces could be a big boon to the visually impaired. Most prior robotic guiding systems are based on wheeled platforms with large bases and actuated rigid guiding canes. The large bases and the actuated arms limit these prior approaches from operating in narrow and cluttered environments. We propose a method that introduces a quadrupedal robot with a leash to enable the robot-guiding-human system to change its intrinsic dimension (by letting the leash go slack) in order to fit into narrow spaces.

[ Hybrid Robotics ]

How to prove that your drone is waterproof.

[ UNL ]

Well this ought to be pretty good once it gets out of simulation.

[ Hybrid Robotics ]

MIDAS is Aurora’s AI-enabled, multi-rotor sUAV outfitted with optical sensors and a customized payload that can defeat multiple small UAVs per flight with low-collateral effects.

[ Aurora ]

The robots of the DFKI have the advantage of being able to reach extreme environments: they can be used for decontamination purposes in high-risk areas or to inspect and maintain underwater structures, for which they are tested in the North Sea near Heligoland.

[ DFKI ]

After years of trying, 60 Minutes cameras finally get a peek inside the workshop at Boston Dynamics, where robots move in ways once only thought possible in movies. Anderson Cooper reports.

[ 60 Minutes ]

In 2007, Noel Sharkey stated that “we are sleepwalking into a brave new world where robots decide who, where and when to kill.” Since then, thousands of AI and robotics researchers have joined his calls to regulate “killer robots.” But sometime this year, Turkey will deploy fully autonomous home-built kamikaze drones on its border with Syria. What are the ethical choices we need to consider? Will we end up in an episode of Black Mirror? Or is the UN listening to calls and starting the process of regulating this space? Prof. Toby Walsh will discuss this important issue, consider where we are at and where we need to go.

[ ICRA 2020 ]

In the second session of HAI's spring conference, artists and technologists discussed how technology can enhance creativity, reimagine meaning, and support racial and social justice. The conference, called “Intelligence Augmentation: AI Empowering People to Solve Global Challenges,” took place on 25 March 2021.

[ Stanford HAI ]

This spring 2021 GRASP SFI comes from Monroe Kennedy III at Stanford University, on “Considerations for Human-Robot Collaboration.”

The field of robotics has evolved over the past few decades. We’ve seen robots progress from the automation of repetitive tasks in manufacturing to the autonomy of mobilizing in unstructured environments to the cooperation of swarm robots that are centralized or decentralized. These abilities have required advances in robotic hardware, modeling, and artificial intelligence. The next frontier is robots collaborating in complex tasks with human teammates, in environments traditionally configured for humans. While solutions to this challenge must utilize all the advances of robotics, the human element adds a unique aspect that must be addressed. Collaborating with a human teammate means that the robot must have a contextual understanding of the task as well as all participants’ roles. We will discuss what constitutes an effective teammate and how we can capture this behavior in a robotic collaborator.

[ UPenn ]

Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.

Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.

It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times where using your arms to assist with your own bipedal mobility was a requirement. It’s not a requirement because your leg strength or coordination or sense of balance is bad, necessarily. It’s just that sometimes, you might find yourself walking across something that’s highly unstable or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do beforehand, and how careful you are with your footstep planning: there are limits to how much you can know about your environment beforehand, and that can result in having a really bad time of it. This is why using multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.

As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time. 

Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.

IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?

Philipp Seiwald: The original version of LOLA has been designed for fast biped walking. Although it had two arms, they were not meant to get into contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso have been increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, has been obtained from an optimization that takes typical multi-contact scenarios into account.

Why is this an important problem for bipedal humanoid robots?

Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds when compared to robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.

How can LOLA decide whether a surface is suitable for multi-contact locomotion?

LOLA’s visual perception system is currently developed by our project partners from the Chair for Computer Aided Medical Procedures & Augmented Reality at the TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (like floor, walls, and objects therein) by merging multiple observations from different viewpoints and by inferring therefrom the underlying scene graph. This provides a reliable estimate of which scene parts can be used to support the locomotion, based on the assumption that certain structural elements such as walls are fixed, while chairs, for example, are not.

Also, the team plans to further develop a specific dataset with annotations describing additional attributes of the objects (such as the roughness or softness of their surfaces), which will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not finished yet; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
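To make that idea concrete, here is a minimal sketch (and emphatically not the TUM/CAMP team's actual pipeline) of how a semantic scene graph might be filtered down to candidate hand-support surfaces. The class labels, confidence threshold, and reach limits are illustrative assumptions.

```python
# Minimal sketch: filtering a semantic scene graph into candidate hand-support
# surfaces. Labels, thresholds, and the SceneObject structure are illustrative
# assumptions, not the actual TUM/CAMP pipeline.
from dataclasses import dataclass

# Semantic classes assumed to be rigidly attached to the building structure.
FIXED_CLASSES = {"wall", "floor", "handrail", "pillar"}

@dataclass
class SceneObject:
    label: str          # semantic class from the SLAM pipeline
    confidence: float   # fused detection confidence over multiple viewpoints
    height_m: float     # height of the surface patch above the floor

def candidate_supports(scene, min_conf=0.8, reach=(0.6, 1.8)):
    """Return objects a hand could plausibly push against."""
    low, high = reach
    return [
        obj for obj in scene
        if obj.label in FIXED_CLASSES
        and obj.confidence >= min_conf
        and low <= obj.height_m <= high
    ]

scene = [
    SceneObject("wall", 0.95, 1.2),
    SceneObject("chair", 0.90, 0.8),  # movable class: rejected even if confident
    SceneObject("wall", 0.50, 1.0),   # too uncertain: rejected
]
print([o.label for o in candidate_supports(scene)])  # ['wall']
```

The point is simply that the scene graph's semantics do the heavy lifting: once an element is classified as structural and well observed, it becomes a legal support candidate for the planner.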

Is LOLA capable of both proactive and reactive multi-contact locomotion?

The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
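As a rough illustration of that hierarchy (a sketch only; the function names and data below are placeholders, not LOLA's actual software), one control cycle might look something like this:

```python
# Minimal sketch of the hierarchy described above: perception produces an
# environment model and 6D pose, a walking-pattern generator plans a feasible
# motion toward a user-set target, and a lower-level stabilizer modifies that
# plan every cycle to reject disturbances. Names and data are illustrative.

def vision_update():
    # In LOLA this comes from the semantic SLAM pipeline; here it is a stub.
    env_model = {"supports": ["wall_left"]}
    pose = [0.0, 0.0, 0.9, 0.0, 0.0, 0.0]  # x, y, z, roll, pitch, yaw
    return env_model, pose

def plan_walking_pattern(env_model, pose, target):
    # Proactive layer: plan footsteps (and optional hand contacts) to target.
    # The pose would seed the plan; it is ignored in this stub.
    return {"footsteps": [target], "hand_contacts": env_model["supports"]}

def stabilize(plan, disturbance):
    # Reactive layer: adjust the planned motion to keep overall balance.
    correction = -0.5 * disturbance
    return {"plan": plan, "torso_correction": correction}

env, pose = vision_update()
plan = plan_walking_pattern(env, pose, target=[1.0, 0.0])
commands = stabilize(plan, disturbance=0.1)
print(commands["torso_correction"])  # -0.05
```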

What are some examples of unique capabilities that you are working towards with LOLA?

One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparably fast for a biped robot but still slow for a human. On flat ground, LOLA's high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.

Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often results in offline motion planning methods. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot which lead to very efficient algorithms running in real-time and fully onboard. 

A long-term scientific goal with LOLA is to understand essential components and control policies of human walking. LOLA's leg kinematics is relatively similar to the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds if biomechanical models for the human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.

While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.

We’re used to legged robots being one particular morphology, I guess because associating them with either humans or dogs or whatever is just a comfortable way to do it, but there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms, or some hybrid between the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.

Today, Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. Look a little closer, though: the combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.

Useful mobile manipulation in any environment that’s not almost entirely structured is still a significant challenge in robotics, and it requires a very difficult combination of sensing, intelligence, and dynamic motion, all of which are classic Boston Dynamics. But also classic Boston Dynamics is building really cool platforms, and only later trying to figure out a way of making them commercially viable. So why Stretch, why boxes, why now, and (the real question) why not Handle? We talk with Boston Dynamics’ Vice President of Product Engineering Kevin Blankespoor to find out.

Stretch is very explicitly a box-handling mobile robot for relatively well-structured warehouses. It’s in no way designed to be the kind of generalist that many of Boston Dynamics’ other robots are. And to be fair, this is absolutely how to make a robot that’s practical and cost effective right out of the crate: identify a task that is dull or dirty or dangerous for humans, design a robot to do that task safely and efficiently, and deploy it with the expectation that it’ll be really good at that task but not necessarily much else. This is a very different approach than a robot like Spot, where the platform came first and the practical applications came later—with Stretch, it’s all about that specific task in a specific environment.

There are already robotic solutions for truck unloading, palletizing, and depalletizing, but Stretch seems to be uniquely capable. For truck unloading, the highest performance systems that I’m aware of are monstrous things (here’s one example from Honeywell) that use a ton of custom hardware to just sort of ingest the cargo within a trailer all at once. In a highly structured and predictable warehouse, this sort of thing may pay off over the long term, but it’s going to be extremely expensive and not very versatile at all.

Palletizing and depalletizing robots are much more common in warehouses today. They’re almost always large industrial arms surrounded by a network of custom conveyor belts and whatnot, suffering from the same sorts of constraints as a truck unloader—very capable in some situations, but generally high cost and low flexibility.

Photo: Boston Dynamics

Stretch is probably not going to be able to compete with either of these types of dedicated systems when it comes to sheer speed, but it offers lots of other critical advantages: It’s fast and easy to deploy, easy to use, and adaptable to a variety of different tasks without costly infrastructure changes. It’s also very much not Handle, which was Boston Dynamics’ earlier (although not that much earlier) attempt at a box-handling robot for warehouses, and (let’s be honest here) a much more Boston Dynamics-y thing than Stretch seems to be. To learn more about why the answer is Stretch rather than Handle, and how Stretch will fit into the warehouse of the very near future, we spoke with Kevin Blankespoor, Boston Dynamics’ VP of Product Engineering and chief engineer for both Handle and Stretch.

IEEE Spectrum: Tell me about Stretch!

Kevin Blankespoor: Stretch is the first mobile robot that we’ve designed specifically for the warehouse. It’s all about moving boxes. Stretch is a flexible robot that can move throughout the warehouse and do different tasks. During a typical day in the life of Stretch in the future, it might spend the morning on the inbound side of the warehouse unloading boxes from trucks. It might spend the afternoon in the aisles of the warehouse building up pallets to go to retailers and e-commerce facilities, and it might spend the evening on the outbound side of the warehouse loading boxes into the trucks. So, it really goes to where the work is.

There are already other robots that include truck unloading robots, palletizing and depalletizing robots, and mobile bases with arms on them. What makes Boston Dynamics the right company to introduce a new robot in this space?

We definitely thought through this, because there are already autonomous mobile robots [AMRs] out there. Most of them, though, are more like pallet movers or tote movers—they don't have an arm, and most of them are really just about moving something from point A to point B without manipulation capability. We've seen some experiments where people put arms on AMRs, but nothing that's made it very far in the market. And so when we started looking at Stretch, we realized we really needed to make a custom robot, and that it was something we could do quickly. 

“We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks.”

Stretch is built with pieces from Spot and Atlas and that gave us a big head start. For example, if you look at Stretch’s vision system, it's 2D cameras, depth sensors, and software that allows it to do obstacle detection, box detection, and localization. Those are all the same sensors and software that we've been using for years on our legged robots. And if you look closely at Stretch’s wrist joints, they're actually the same as Spot’s hips. They use the same electric motors, the same gearboxes, the same sensors, and they even have the same closed-loop controller controlling the joints. 

If you were to buy an existing industrial robot arm with this kind of performance, it would be about four times heavier than the arm we built, and it's really hard to make that into a mobile robot. A lot of this came from our leg technology because it’s so important for our leg designs to be lightweight for the robots to balance. We took that same strength-to-weight advantage that we have, and built it into this arm. We're able to rapidly piece together things from our other robots to get us out of the gate quickly, so even though this looks like a totally different robot, we think we have a good head start going into this market.

At what point did you decide to go with an arm on a statically stable base on Stretch, rather than something more, you know, dynamic-y?

Stretch looks really different than the robots that Boston Dynamics has done in the past. But you'd be surprised how much similarity there is between our legged robots and Stretch under the hood. Looking back, we actually got our start on moving boxes with Atlas, and at that point it was just research and development. We were really trying to do force control for box grasping. We were picking up heavy boxes and maintaining balance and working on those fundamentals. We released a video of that as our first next-gen Atlas video, and it was interesting. We got a lot of interest from people who wanted to put Atlas to work in the warehouse, but we knew that we could build a simpler robot to do some of those same tasks. 

So at this point we actually came up with Handle. The intent of Handle was to do a couple things—one was, we thought we could build a simpler robot that had Atlas’ attributes. Handle has a small footprint so it can fit in tight spaces, but it can pick up heavy boxes. And in addition to that, we had always really wanted to combine wheels and legs. We’d been talking about doing that for a decade and so Handle was a chance for us to try it. 

We built a couple versions of Handle, and the first one was really just a prototype to kind of explore the morphology. But the second one was more purpose-built for warehouse tasks, and we started building pallets with that one and it looked pretty good. And then we started doing truck unloading with Handle, which was the pivotal moment. Handle could do it, but it took too long. Every time Handle grasped a box, it would have to roll back and then get to a place where it could spin itself to face forward and place the box, and trucks are very tight for a robot this size, so there's not a lot of room to maneuver. We knew the whole time that there was a robot like Stretch that was another alternative, but that's really when it became clear that Stretch would have a lot of advantages, and we started working on it about a year ago. 

Stretch is certainly impressive in a practical way, but I’ll admit to really hoping that something like Handle could have turned out to be a viable warehouse robot.

I love the Handle project as well, and I’m very passionate about that robot. And there was a stage before we built Stretch where we thought, “this would be pretty standard looking compared to Handle, is it going to capture enough of the Boston Dynamics secret sauce?” But when you actually dissect all the problems within Stretch that you have to tackle, there are a lot of cool robotics problems left in there—the vision system, the planning, the manipulation, the grasping of the boxes—it's a lot harder to solve than it looks, and we're excited that we're actually getting fairly far down that road now.

What happens to Handle now?

Stretch has really taken over our team as far as warehouse products go. Handle we still use occasionally as a research robot, but it’s not actively under development. Stretch is really Handle’s descendant. Handle’s not retired, exactly, but we’re just using it for things like the dance video.

There’s still potential to do cool stuff with Handle. I do think that combining wheels with legs is very cool, and largely unexplored compared to its potential. So I still think that you're gonna see versions of robots combining wheels and legs like Handle, and maybe a version of Handle in the future that does more of that. But because we're switching this thread from research into product, Stretch is really the main focus now.

How autonomous is Stretch?

Stretch is semi-autonomous, and that means it really needs to work with people to tap into its full potential. With truck unloading, for example, a person will drive Stretch into the back of the truck and then basically point Stretch in the right direction and say go. And from that point on, everything’s autonomous. Stretch has its vision system and its mobility, and it can detect all the boxes, grasp all the boxes, and move them onto a conveyor, all autonomously. This is something that takes people hours to do manually, and Stretch can go all the way until it gets to the last box and the truck is empty. There are some parts of the truck unloading task that do require people, like verifying that the truck is in the right place and opening the doors. But this takes a person just a few minutes, and then the robot can spend hours, or as long as it takes, to do its job autonomously.

There are also other tasks in the warehouse where the autonomy will increase in the future. After truck unloading, the second thing we’ll take on is order building, which will be more in the aisles of a warehouse. For that, Stretch will be navigating around the warehouse, finding the right pallet it needs to take a box from, and loading it onto a new pallet. This will be a different model with more autonomy; you’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently. 
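To make the truck-unloading flow Blankespoor describes a bit more concrete, here is a minimal sketch of the hand-off between operator and robot. The method names are hypothetical, not Boston Dynamics' actual API.

```python
# Minimal sketch of the semi-autonomous truck-unloading workflow described
# above: a person positions the robot and starts the job, then the robot
# loops autonomously until no boxes remain. All method names on `robot`
# are illustrative placeholders.
def unload_truck(robot):
    # Manual steps: operator verifies truck placement, opens the doors,
    # drives the robot into the trailer, and presses "go".
    robot.wait_for_operator_start()

    # Autonomous loop: can run for hours without intervention.
    while True:
        boxes = robot.detect_boxes()       # vision: 2D cameras + depth sensing
        if not boxes:
            break                          # trailer is empty, job done
        box = robot.choose_next(boxes)
        robot.grasp(box)
        robot.place_on_conveyor(box)
```

The structural point is that human involvement is front-loaded into a few minutes of setup, after which the detect-grasp-place loop runs unattended until the trailer is empty.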

What kinds of constraints is Stretch operating under? Do the boxes all have to be stacked neatly in the back of the truck, do they have to be the same size, the same color, etc?

“This will be a different model with more autonomy. You’ll still have people involved to some degree, but the robot will have a higher percentage of the time where it can work independently.”

If you think about manufacturing, where there's been automation for decades, you can go into a modern manufacturing facility and there are robot arms and conveyors and other machines. But if you look at the actual warehouse space, 90+ percent is manually operated, and that's because of what you just asked about—things that are less structured, where there’s more variety, and it's more challenging for a robot. But this is starting to change. This is really, really early days, and you’re going to be seeing a lot more robots in the warehouse space.

The warehouse robotics industry is going to grow a lot over the next decade, and a lot of that boils down to vision—the ability for robots to navigate and to understand what they’re seeing. Actually seeing boxes in real world scenarios is challenging, especially when there's a lot of variety. We've been testing our machine learning-based box detection system on Pick for a few years now, and it's gotten far enough that we know it’s one of the technical hurdles you need to overcome to succeed in the warehouse.

Can you compare the performance of Stretch to the performance of a human in a box-unloading task?

Stretch can move cases up to 50 pounds, which is the OSHA limit for how much a single person's allowed to move. The peak case rate for Stretch is 800 cases per hour. You really need to keep up with the flow of goods throughout the warehouse, and 800 cases per hour should be enough for most applications. This is similar to a really good human; most humans are probably slower, and it’s hard for a human to sustain that rate, and one of the big issues with people doing these jobs is injury rates. Imagine moving really heavy boxes all day, and having to reach up high or bend down to get them—injuries are really common in this area. Truck unloading is one of the hardest jobs in a warehouse, and that’s one of the reasons we’re starting there with Stretch.
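For a sense of scale, the quoted figures work out as below. This is a back-of-the-envelope calculation that assumes the peak rate is sustained and every case is at the 50-pound limit, which real freight won't be.

```python
# Back-of-the-envelope numbers derived from the figures quoted above.
cases_per_hour = 800
seconds_per_case = 3600 / cases_per_hour
print(seconds_per_case)            # 4.5 seconds per case at peak rate

max_case_weight_lb = 50
lb_per_hour = cases_per_hour * max_case_weight_lb
print(lb_per_hour)                 # 40000 lb (20 US tons) per hour, worst case
```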

Is Stretch safe for humans to be around?

We looked at using collaborative robot arms for Stretch, but they don’t have the combination of strength and speed and reach to do this task. That’s partially just due to the laws of physics—if you want to move a 50lb box really fast, that’s a lot of energy there. So, Stretch does need to maintain separation from humans, but it’s pretty safe when it’s operating in the back of a truck.

In the middle of a warehouse, Stretch will have a couple different modes. When it's traveling around it'll be kind of like an AMR, using a safety-rated lidar to make sure that it slows down or stops as people get closer. If it's parked and the arm is moving, it'll do the same thing, monitoring anyone getting close and either slowing down or stopping.
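Here's a minimal sketch of the kind of lidar-based speed scaling described above. The zone distances and the linear ramp are illustrative assumptions, not Boston Dynamics' actual safety parameters.

```python
# Minimal sketch of the slow/stop behavior described above: whether the base
# is driving or the arm is moving, the robot slows and then stops as a person
# gets closer. Distances and the linear ramp are illustrative assumptions.
SLOW_ZONE_M = 3.0   # start slowing down inside this lidar range
STOP_ZONE_M = 1.0   # stop entirely inside this range

def speed_scale(closest_person_m):
    """Return a 0..1 multiplier applied to base or arm speed."""
    if closest_person_m <= STOP_ZONE_M:
        return 0.0
    if closest_person_m >= SLOW_ZONE_M:
        return 1.0
    # Linear ramp between the stop zone and the slow zone.
    return (closest_person_m - STOP_ZONE_M) / (SLOW_ZONE_M - STOP_ZONE_M)

for d in (4.0, 2.0, 0.5):
    print(d, round(speed_scale(d), 2))   # 1.0, 0.5, 0.0
```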

How do you see Stretch interacting with other warehouse robots?

For building pallet orders, we can do that in a couple of different ways, and we’re experimenting with partners in the AMR space. So you might have an AMR that moves the pallet around and then rendezvous with Stretch, and Stretch does the manipulation part and moves boxes onto the pallet, and then the AMR scuttles off to the next rendezvous point where maybe a different Stretch meets it. We’re developing prototypes of that behavior now with a few partners. Another way to do it is Stretch can actually pull the pallet around itself and do both tasks. There are two fundamental things that happen in the warehouse: there's movement of goods, and there's manipulation of goods, and Stretch can do both.

You’re aware that Hello Robot has a mobile manipulator called Stretch, right?

Great minds think alike! We know Aaron [Edsinger] from the Google days; we all used to be in the same company, and he’s a great guy. We’re in very different applications and spaces, though—Aaron’s robot is going into research and maybe a little bit into the consumer space, while this robot is on a much bigger scale aimed at industrial applications, so I think there’s actually a lot of space between our robots, in terms of how they’ll be used.

Editor’s Note: We did check in with Aaron Edsinger at Hello Robot, and he sees things a little bit differently. “We're disappointed they chose our name for their robot,” Edsinger told us. “We're seriously concerned about it and considering our options.” We sincerely hope that Boston Dynamics and Hello Robot can come to an amicable solution on this.

What’s the timeline for commercial deployment of Stretch?

This is a prototype of the Stretch robot, and anytime we design a new robot, we always like to build a prototype as quickly as possible so we can figure out what works and what doesn't work. We did that with our bipeds and quadrupeds as well. So, we get an early look at what we need to iterate, because any time you build the first thing, it's not the right thing, and you always need to make changes to get to the final version. We've got about six of those Stretch prototypes operating now. In parallel, our hardware team is finishing up the design of the productized version of Stretch. That version of Stretch looks a lot like the prototype, but every component has been redesigned from the ground up to be manufacturable, to be reliable, and to be higher performance. 

For the productized version of Stretch, we’ll build up the first units this summer, and then it’ll go on sale next year. So this is kind of a sneak peek into what the final product will be.

How much does it cost, and will you be selling Stretch, or offering it as a service?

We’re not quite ready to talk about cost yet, but it’ll be cost effective, and similar in cost to existing systems if you were to combine an industrial robot arm, custom gripper, and mobile base. We’re considering both selling and leasing as a service, but we’re not quite ready to narrow it down yet. 

Photo: Boston Dynamics

As with all mobile manipulators, what Stretch can do long-term is constrained far more by software than by hardware. With a fast and powerful arm, a mobile base, a solid perception system, and 16 hours of battery life, you can imagine how different grippers could enable all kinds of different capabilities. But we’re getting ahead of ourselves, because it’s a long, long way from getting a prototype to work pretty well to getting robots into warehouses in a way that’s commercially viable long-term, even when the use case is as clear as it seems to be for Stretch.

Stretch also could signal a significant shift in focus for Boston Dynamics. While Blankespoor’s comments about Stretch leveraging Boston Dynamics’ expertise with robots like Spot and Atlas are well taken, Stretch is arguably the most traditional robot that the company has designed, and they’ve done so specifically to be able to sell robots into industry. This is what you do if you’re a robotics company who wants to make money by selling robots commercially, which (historically) has not been what Boston Dynamics is all about. Despite its bonkers valuation, Boston Dynamics ultimately needs to make money, and robots like Stretch are a good way to do it. With that in mind, I wouldn’t be surprised to see more robots like this from Boston Dynamics—robots that leverage the company’s unique technology, but that are designed to do commercially useful tasks in a somewhat less flashy way. And if this strategy keeps Boston Dynamics around (while funding some occasional creative craziness), then I’m all for it.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference] ICRA 2021 – May 30-5, 2021 – Xi'an, China DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

The Shadow Robot team couldn't resist! Our Operator, Joanna, is using the Shadow Teleoperation System which, fun and games aside, can help those in difficult, dangerous and distant jobs.

Shadow could challenge this MIT Jenga-playing robot, but I bet they wouldn't win:

[ Shadow Robot ]

Digit is gradually stomping the Agility Robotics logo into a big grassy field fully autonomously.

[ Agility Robotics ]

This is a pretty great and very short robotic magic show.

[ Mario the Magician ]

A research team at the Georgia Institute of Technology has developed a modular solution for drone delivery of larger packages without the need for a complex fleet of drones of varying sizes. By allowing teams of small drones to collaboratively lift objects using an adaptive control algorithm, the strategy could allow a wide range of packages to be delivered using a combination of several standard-sized vehicles.

[ GA Tech ]

I've seen this done using vision before, but Flexiv's Rizon 4s can keep a ball moving along a specific trajectory using only force sensing and control.

[ Flexiv ]

Thanks Yunfan!

This combination of a 3D aerial projection system and a sensing interface can be used as an interactive and intuitive control system for things like robot arms, but in this case, it's being used to make simulated pottery. Much less messy than the traditional way of doing it.

More details on Takafumi Matsumaru's work at the Bio-Robotics & Human-Mechatronics Laboratory at Waseda University at the link below.

[ BLHM ]

U.S. Vice President Kamala Harris called astronauts Shannon Walker and Kate Rubins on the ISS, and they brought up Astrobee, at which point Shannon reaches over and rips Honey right off of her charging dock to get her on camera.

[ NASA ]

Here's a quick three minute update on Perseverance and Ingenuity from JPL.

[ Mars 2020 ]

Rigid grippers used in existing aerial manipulators require precise positioning to achieve successful grasps and transmit large contact forces that may destabilize the drone. This limits the speed during grasping and prevents “dynamic grasping,” where the drone attempts to grasp an object while moving. On the other hand, biological systems (e.g. birds) rely on compliant and soft parts to dampen contact forces and compensate for grasping inaccuracy, enabling impressive feats. This paper presents the first prototype of a soft drone—a quadrotor where traditional (i.e. rigid) landing gears are replaced with a soft tendon-actuated gripper to enable aggressive grasping.

[ MIT ]

In this video we present results from a field deployment inside the Løkken Mine underground pyrite mine in Norway. The Løkken mine was operative from 1654 to 1987 and contains narrow but long corridors, alongside vast rooms and challenging vertical stopes. In this field study we evaluated selected autonomous exploration and visual search capabilities of a subset of the aerial robots of Team CERBERUS towards the goal of complete subterranean autonomy.

[ Team CERBERUS ]

What you can do with a 1,000 FPS projector with a high speed tracking system.

[ Ishikawa Group ]

ANYbotics’ collaboration with BASF, one of the largest global chemical manufacturers, displays the efficiency, quality, and scalability of robotic inspection and data-collection capabilities in complex industrial environments.

[ ANYbotics ]

Does your robot arm need a stylish jacket?

[ Fraunhofer ]

Trossen Robotics unboxes a Unitree A1, and it's actually an unboxing where they have to figure out everything from scratch.

[ Trossen ]

Robots have learned to drive cars, assist in surgeries―and vacuum our floors. But can they navigate the unwritten rules of a busy sidewalk? Until they can, robotics experts Leila Takayama and Chris Nicholson believe, robots won’t be able to fulfill their immense potential. In this conversation, Chris and Leila explore the future of robotics and the role open source will play in it.

[ Red Hat ]

Christoph Bartneck's keynote at the 6th Joint UAE Symposium on Social Robotics, focusing on what roles robots can play during the Covid crisis and why so many social robots fail in the market.

[ HIT Lab ]

Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and not in others, such as criminal sentencing. As algorithms replace human deciders, HAI-EIS fellow Kathleen Creel argues arbitrariness at scale is morally and legally problematic. In this HAI seminar, she explains how the heart of this moral issue relates to domination and a lack of sufficient opportunity for autonomy. It relates in interesting ways to the moral wrong of discrimination. She proposes technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm identified.

[ Stanford HAI ]

Sawyer B. Fuller speaks on Autonomous Insect-Sized Robots at the UC Berkeley EECS Colloquium series.

Sub-gram (insect-sized) robots have enormous potential that is largely untapped. From a research perspective, their extreme size, weight, and power (SWaP) constraints also force us to reimagine everything from how they compute their control laws to how they are fabricated. These questions are the focus of the Autonomous Insect Robotics Laboratory at the University of Washington. I will discuss potential applications for insect robots and recent advances from our group. These include the first wireless flights of a sub-gram flapping-wing robot that weighs barely more than a toothpick. I will describe efforts to expand its capabilities, including the first multimodal ground-flight locomotion, the first demonstration of steering control, and how to find chemical plume sources by integrating the smelling apparatus of a live moth. I will also describe a backpack for live beetles with a steerable camera and a conceptual design of robots that could scale all the way down to the “gnat robots” first envisioned by Flynn & Brooks in the ‘80s.

[ UC Berkeley ]

Thanks Fan!

Joshua Vander Hook, Computer Scientist, NIAC Fellow, and Technical Group Supervisor at NASA JPL, presents an overview of the AI Group(s) at JPL, and recent work on single and multi-agent autonomous systems supporting space exploration, Earth science, NASA technology development, and national defense programs.

[ UMD ]

Over the past few years, we’ve seen 3D printers used in increasingly creative ways. There’s been a realization that fundamentally, a 3D printer is a full-fledged, multi-axis robotic manipulation system—which is an extraordinarily versatile thing to have in your home. Rather than just printing static objects, folks are now using 3D printers as pick-and-place systems to manufacture drones, and as custom filament printers to make objects out of programmable materials, to highlight just two examples.

In an update to some research first presented at the end of 2019, researchers from Meiji University in Japan have developed one of the cleverest 3D printer enhancements that we’ve yet seen. Called Functgraph, it turns a conventional 3D printer into a “personal factory automation” system by printing and manipulating the tools required to do complex tasks entirely on the print bed. A paper on Functgraph, by Yuto Kuroki and Keita Watanabe, was presented at the Conference on 4D and Functional Fabrication 2020 in October.

Far as I can tell, this is a bone-stock 3D printer with the exception of two modifications, both of which it presumably printed itself. The first is a tool holder on the print head, and the second is a tool release mechanism that sits off to the side. These two things, taken together, give Functgraph access to custom tools limited only by what it can print; and when used in combination with 3D printed objects designed to interact with these tools (support structures with tool interfaces to snap them off, for example), it really is possible to print, assemble, manipulate, and actuate entire small-scale factories.
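In other words, a Functgraph job looks roughly like the sketch below. The step and method names are my own illustrative shorthand, not the authors' software.

```python
# Minimal sketch of the Functgraph workflow as described in the paper and
# video: the printer prints a tool, docks it into the printhead's tool holder,
# uses it to manipulate printed parts, and drops it at the release mechanism.
# All names on `printer` are illustrative placeholders.
def functgraph_job(printer, tool_name, actions):
    printer.print_model(tool_name)       # print the tool on the bed
    printer.pick_up_tool(tool_name)      # snap it into the printhead tool holder
    for action in actions:               # e.g. snap off supports, push, stack
        printer.run(action)
    printer.release_tool(tool_name)      # drop it at the release mechanism

# Example (hypothetical): assemble and pack a printed figure with a printed
# pusher tool.
# functgraph_job(printer, "pusher", ["detach_figure", "push_into_box"])
```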

Yuto Kuroki, first author on the paper describing Functgraph, describes his inspiration for some of the particular tasks shown in the demo video:

The future that Functgraph aims for is as a new platform that downloads apps the way a smartphone does and provides physical support in the real world—the realization of personal factory automation.

When it comes to sandwich apps, there are many ways to look at recipes, but in the end, humans have to make them. I made a prototype based on the idea of how easy it would be if I could wake up in the morning and say, "OK Google, make a breakfast sandwich."

Regarding the rabbit factory, it’s an application that mass-produces and packs rabbit figures. The box on the right is an interior box that prevents the product from slipping, and the box on the left is an exterior box that is placed in the store and catches the eyes of customers. This demonstrates that the manufactured figure is packed as it is, ready for shipment. In this video, two are packed in a row, so in principle it is possible to make hundreds or thousands of them in a row.

The reason for making a prototype of a car-making app may sound strange, but the idea is that if you send a 3D printer to a remote place like space, it will be able to generate what you need on the spot. Even if you’re exploring the Moon and your car breaks, I think you could procure a replacement on the spot if you have a 3D printer, even without specialized knowledge, dedicated machines, and human hands. This research shows that 3D printers can realize individual desires and purposes unattended and automatically. I think that 3D printers can truly evolve into ‘machines that can do anything’ with Functgraph.
