Feed aggregator

We developed a system for evaluating the skill of operating a hydraulic excavator. The system employs a remotely controlled (RC) excavator and virtual reality (VR) technology. We modified the RC excavator so that it can be operated in the same manner as a real excavator and instrumented it to measure its state. To evaluate operating skill, we calculated several indices from the data recorded during excavation work and compared the indices obtained for expert and non-expert operators. The results show that it is possible to distinguish whether an expert or a non-expert is operating the RC excavator. We then calculated the same indices from data recorded during excavation with a real excavator and found a high correlation between the indices of the RC excavator and those of the real excavator; the two sets of indices exhibit similar trends. This suggests that an RC excavator, despite having different dynamics from a real excavator, can be used to partly evaluate the operating characteristics of a real excavator.
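
The abstract doesn't say how the agreement between the RC-excavator and real-excavator indices was quantified; a standard Pearson correlation is one plausible reading of the reported "high correlation." The short Python sketch below only illustrates that computation, using invented index values rather than the paper's data.

```python
# Illustrative only: correlating a skill index measured on the RC excavator with
# the same index measured on a real excavator, one value per operator.
# The numbers are invented; the paper's actual indices and data are not shown here.
import numpy as np

rc_index   = np.array([12.1, 9.8, 15.3, 8.7, 11.0])    # hypothetical RC-excavator scores
real_index = np.array([48.0, 41.5, 60.2, 37.9, 45.3])  # hypothetical real-excavator scores

# Pearson r close to 1 would mean operators rank similarly on both machines.
r = np.corrcoef(rc_index, real_index)[0, 1]
print(f"Pearson r = {r:.2f}")
```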

This robot is Hiro-chan. It’s made by Vstone, a Japanese robotics company known for producing a variety of totally normal educational and hobby robotics kits and parts. Hiro-chan is not what we would call totally normal, since it very obviously does not have a face. Vstone calls Hiro-chan a “healing communication device,” and while the whole faceless aspect is definitely weird, there is a reason for it, which unsurprisingly involves Hiroshi Ishiguro and his ATR Lab.

Hiro-chan’s entire existence seems to be based around transitioning from sad to happy in response to hugs. If left alone, Hiro-chan’s mood will gradually worsen and it’ll start crying. If you pick it up and hug it, an accelerometer will sense the motion, and Hiro-chan’s mood will improve until it starts to laugh. This is the extent of the interaction, but you’ll be glad to know that the robot has access to over 100 utterance variations collected from an actual baby (or babies) to make sure that mood changes are fluid and seamless. 
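
Vstone hasn't published anything about how the mood logic is implemented, but the behavior as described is easy to sketch: a mood value that drifts downward when the doll is left alone and climbs when the accelerometer registers hugging motion. Everything in the toy Python snippet below (thresholds, rates, function names) is invented for illustration.

```python
# Toy sketch of the described behavior, not Vstone's firmware.
# Thresholds, rates, and the accelerometer interface are all invented.
MOOD_MIN, MOOD_MAX = 0.0, 1.0

def update_mood(mood, accel_magnitude_g, dt_s):
    """Mood drifts down when the doll sits still and rises while it is being hugged."""
    being_hugged = accel_magnitude_g > 1.2          # motion above a resting ~1 g reading
    rate = 0.05 if being_hugged else -0.02          # mood units per second
    return min(MOOD_MAX, max(MOOD_MIN, mood + rate * dt_s))

def pick_utterance(mood):
    """Map the mood value to a coarse vocal category (the real doll reportedly
    draws on 100+ recorded baby utterances to smooth these transitions)."""
    if mood < 0.3:
        return "cry"
    if mood > 0.7:
        return "laugh"
    return "babble"
```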

According to Japanese blog RobotStart, the target demographic for Hiro-chan is seniors, although it’s simple enough in operation that pretty much anyone could likely pick one up and figure out what they’re supposed to do with it. The end goal is the “healing effect” (a sense of accomplishment, I guess?) that you’d get from making the robot feel better.

Photo: Vstone At 5,500 JPY (about US $50), Vstone expects that Hiro-chan could be helpful with seniors in nursing homes.

So why doesn’t the robot have a face? Since the functionality of the robot depends on you getting it to go from sad to happy, Vstone says that giving the robot a face (and a fixed expression) would make that much less convincing and emotionally fulfilling—the robot would have the “wrong” expression half the time. Instead, the user can listen to Hiro-chan’s audio cues and imagine a face. Or not. Either way, the Uncanny Valley effect is avoided (as long as you can get over the complete lack of face, which I personally couldn’t), and the cost of the robot is kept low since there’s no need for actuators or a display.

Photo: Hiroshi Ishiguro/Osaka University/ATR

The Telenoid robot developed by Hiroshi Ishiguro’s group at ATR in Japan.

This concept that a user could imagine or project features and emotions onto a robot as long as it provides a blank enough slate came from Hiroshi Ishiguro with Telenoid, followed by Elfoid and Hugvie. While Telenoid and Elfoid did have faces, those faces were designed to look neither young nor old, and neither male nor female. When you communicate with another human through Telenoid or Elfoid, the neutral look of the robot makes it easier for you to imagine that it looks something like whoever’s on the other end. Or that’s the idea, anyway. Hiro-chan itself was developed in cooperation with Hidenobu Sumioka, who leads the Presence Media Research Group at Hiroshi Ishiguro Laboratory at ATR.

Vstone says the lack of a face is expected to enhance user attachment to the robot, and that testing during product development “showed that designs without faces were as popular as designs with faces.” Vstone also suggests that users can enhance attachment by making clothing for the robot, and the company will provide patterns on its website when Hiro-chan is released. Otherwise, there’s really not much to the robot: It runs on AA batteries, has an on-off switch, and, mercifully, a volume control, although the robot’s FAQ suggests that it may sometimes laugh even if it’s all by itself in a different room, which is not creepy at all.

Photo: Vstone Vstone says the lack of a face is expected to enhance user attachment to the robot.

At 5,500 JPY (about US $50), Vstone expects that Hiro-chan could be helpful with seniors in nursing homes, relating this anecdote: 

In tests at nursing homes that cooperated in the development of Hiro-chan, even residents who did not respond to facility staff spontaneously started crying when Hiro-chan started crying, and were seen smiling when Hiro-chan started laughing. By introducing Hiro-chan, you can expect not only a healing effect for the user, but also a reduction in the workload of facility staff.

Sounds like a great idea, but I still don’t want one.

[ Vstone ]

When Anki abruptly shut down in April of last year, things looked bleak for Vector, Cozmo, and the little Overdrive racing cars. Usually, abrupt shutdowns don’t end well, with assets and intellectual property getting liquidated and effectively disappearing forever. Despite some vague promises (more like hopes, really) from Anki at the time that their cloud-dependent robots would continue to operate, it was pretty clear that Anki’s robots wouldn’t have much of a future—at best, they’d continue to work only as long as there was money to support the cloud servers that gave them their spark of life.

A few weeks ago, The Robot Report reported that Anki’s intellectual property (patents, trademarks, and data) was acquired by Digital Dream Labs, an education tech startup based in Pittsburgh. Over the weekend, a new post on the Vector Kickstarter page (the campaign happened in 2018) from Digital Dream Labs CEO Jacob Hanchar announced that not only will Vector’s cloud servers keep running indefinitely, but that the next few months will see a new Kickstarter to add new features and future-proofing to Vectors everywhere.

Here’s the announcement from Hanchar:

I wanted to let you know that we have purchased Anki's assets and intend to restore the entire platform and continue to develop the robot we all know and love, Vector!

The most important part of this update is to let you know we have taken over the cloud servers and are going to maintain them going forward.  Therefore, if you were concerned about Vector 'dying' one day, you no longer have to worry!  

The next portion of this update is to let you know what we have planned next and we will be announcing a KickStarter under Digital Dream Labs in the next month or two.  While we are still brainstorming we are thinking the Kickstarter will focus on two features we have seen as major needs in the Vector community:

1)  We will develop an "Escape Pod".  This will, safely, expose settings and allow the user to move and set endpoints, and by doing so, remove the need for the cloud server.  In other words, if you're concerned Anki's demise could also happen to us, this is your guarantee that no matter what happens, you'll always get to play with Vector!

2)  We will develop a "Dev Vector".  Many users have asked us for open source and the ability to do more with their Vector even to the point of hosting him on their own servers.  With this feature, developers will be able to customize their robot through a bootloader we will develop.  With the robot unlocked, technologists and hobbyists across the globe will finally be able to hack, with safe guards in place, away at Vector for the ultimate AI and machine learning experience!

As a bonus, we will see about putting together an SDK so users can play with Vector's audio stream and system, which we have discovered is a major feature you guys love about this little guy!

This is just the beginning and subject to change, but because you have shown such loyalty and got this project off the ground in the first place, I felt it was necessary to communicate these developments as soon as possible! 

There are a few more details in the comments on this post—Hanchar notes that they didn’t get any of Anki’s physical inventory, meaning that at least for now, you won’t be able to buy any robots from them. However, Hanchar told The Robot Report that they’ve been talking with ex-Anki employees and manufacturers about getting new robots, with a goal of having the whole family (Vector, Cozmo, and Overdrive) available for the 2020 holidays. 

Photo: Anki Anki’s Cozmo robot.

Despite the announcement on the Vector Kickstarter page, it sounds like Cozmo will be the initial focus, because Cozmo works best with Digital Dream Labs’ existing educational products. The future of Vector, presumably, will depend on how well the forthcoming Kickstarter does. In its FAQ about the Anki acquisition, Digital Dream Labs says that they “will need to examine the business model surrounding Vector before we can relaunch that product,” and speaking with The Robot Report, Hanchar suggested that “monthly subscription packages” in a few different tiers might be the way to make sure that Vector stays profitable. 

It’s probably too early to get super excited about this, but it’s definitely far better news than we were expecting, and Anki’s robots now seem like they could potentially have a future. Hanchar even mentioned something about a “Vector 2.0,” whatever that means. In the short term, I think most folks would be pretty happy with a Vector 1.0 with support, some new features, and no expiration date, and that could be exactly what we’re getting. 

[ Anki Vector ]

Photo: Caltech This lower-body exoskeleton, developed by Wandercraft, will allow disabled users to walk more dynamically.

Bipedal robots have long struggled to walk as humans do—balancing on two legs and moving with that almost-but-not-quite falling forward motion that most of us have mastered by the time we’re a year or two old. It’s taken decades of work, but robots are starting to get comfortable with walking, putting them in a position to help people in need.

Roboticists at the California Institute of Technology have launched an initiative called RoAMS (Robotic Assisted Mobility Science), which uses the latest research in robotic walking to create a new kind of medical exoskeleton. With the ability to move dynamically, using neurocontrol interfaces, these exoskeletons will allow users to balance and walk without the crutches that are necessary with existing medical exoskeletons. This might not seem like much, but consider how often you find yourself standing up and using your hands at the same time.

“The only way we’re going to get exoskeletons into the real world helping people do everyday tasks is through dynamic locomotion,” explains Aaron Ames, a professor of civil and mechanical engineering at Caltech and colead of the RoAMS initiative. “We’re imagining deploying these exoskeletons in the home, where a user might want to do things like make a sandwich and bring it to the couch. And on the clinical side, there are a lot of medical benefits to standing upright and walking.”

The Caltech researchers say their exoskeleton is ready for a major test: They plan to demonstrate dynamic walking through neurocontrol this year.

Getting a bipedal exoskeleton to work so closely with a human is a real challenge. Ames explains that researchers have a deep and detailed understanding of how their robotic creations operate, but biological systems still present many unknowns. “So how do we get a human to successfully interface with these devices?” he asks.

There are other challenges as well. Ashraf S. Gorgey, an associate professor of physical medicine and rehabilitation at Virginia Commonwealth University, in Richmond, who has researched exoskeletons, says factors such as cost, durability, versatility, and even patients’ desire to use the device are just as important as the technology itself. But he adds that as a research system, Caltech’s approach appears promising: “Coming up with an exoskeleton that can provide balance to patients, I think that’s huge.”

Photo: Caltech Caltech researchers prepare for a walking demonstration with the exoskeleton.

One of Ames’s colleagues at Caltech, Joel Burdick, is developing a spinal stimulator that can potentially help bypass spinal injuries, providing an artificial connection between leg muscles and the brain. The RoAMS initiative will attempt to use this technology to exploit the user’s own nerves and muscles to assist with movement and control of the exoskeleton—even for patients with complete paraplegia. Coordinating nerves and muscles with motion can also be beneficial for people undergoing physical rehabilitation for spinal cord injuries or stroke, where walking with the support and assistance of an exoskeleton can significantly improve recovery, even if the exoskeleton does most of the work.

“You want to train up that neurocircuitry again, that firing of patterns that results in locomotion in the corresponding muscles,” explains Ames. “And the only way to do that is have the user moving dynamically like they would if they weren’t injured.”

Caltech is partnering with a French company called Wandercraft to transfer this research to a clinical setting. Wandercraft has developed an exoskeleton that has received clinical approval in Europe, where it has already enabled more than 20 paraplegic patients to walk. In 2020, the RoAMS initiative will focus on directly coupling brain or spine interfaces with Wandercraft’s exoskeleton to achieve stable dynamic walking with integrated neurocontrol, which has never been done before.

Ames notes that these exoskeletons are designed to meet very specific challenges. For now, their complexity and cost will likely make them impractical for most people with disabilities to use, especially when motorized wheelchairs can more affordably fulfill many of the same functions. But he is hoping that the RoAMS initiative is the first step toward bringing the technology to everyone who needs it, providing an option for situations that a wheelchair or walker can’t easily handle.

“That’s really what RoAMS is about,” Ames says. “I think this is something where we can make a potentially life-changing difference for people in the not-too-distant future.”

This article appears in the January 2020 print issue as “This Exoskeleton Will Obey Your Brain.”

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

IIT’s new HyQReal quadruped robot was released in May 2019. This highlight video shows previously unpublished footage of how we prepared the robot to pull a 3.3-ton airplane. It also shows the robot walking over unstructured terrain and during public events in October 2019, including a face-to-face with a dog.

[ IIT ]

Thanks Claudio!

Agility Robotics has had a very busy 2019, and all 10 minutes of this video is worth watching.

Also: double Digits.

[ Agility Robotics ]

Happy (belated) holidays from Franka Emika!

[ Franka Emika ]

Thanks Anna!

Happy (belated) holidays from the GRASP lab!

[ GRASP Lab ]

Happy (belated) holidays from the Autonomous Robots Lab at the University of Nevada!

[ ARL ]

Happy (belated) holidays from the Georgia Tech Systems Research Lab!

[ GA Tech ]

Thanks Qiuyang!

NASA’s Jet Propulsion Laboratory has attached the Mars 2020 Helicopter to the belly of the Mars 2020 rover.

[ JPL ]

This isn’t a Roomba, mind you—are we at the point where “Roomba” is like “Xerox” or “Velcro,” representing a category rather than a brand?—but it does have a flying robot vacuum in it.

[ YouTube ] via [ Gizmodo ]

We’ve said it before, and it’s still true: Every quadrotor should have failsafe software like this.

[ Verity ]

KUKA robots are on duty at one of the largest tea factories in the world located in Rize, Turkey.

[ Kuka ]

This year, make sure and take your robot for more walks.

[ Sphero ]

Dorabot’s recycling robot can identify, pick, and sort recyclable items such as plastic bottles, glass bottles, paper, cartons, and aluminum cans. The robot uses deep learning-based computer vision and dynamic planning to select items on a moving conveyor belt, and its customized, erosion-resistant grippers pick irregularly shaped items, resulting in a cost-effective integrated solution.

[ Dorabot ]

This cute little boat takes hyperlapse pictures autonomously, while more or less not sinking.

[ rctestflight ] via [ PetaPixel ]

Roboy’s Research Reviews takes a look at the OmniSkins paper from 2018.

[ RRR ]

When thinking about robot ethics (and robots in general), it’s typical to use humans and human ethics as a baseline. But what if we considered animals as a point of comparison instead? Ryan Calo, Kate Darling, and Paresh Kathrani were on a panel at the Animal Law Conference last month entitled Persons yet Unknown: Animals, Chimeras, Artificial Intelligence and Beyond where this idea was explored.

[ YouTube ]

Sasha Iatsenia, who was until very recently head of product at Kiwibot, gives a candid talk about “How (not) to build autonomous robots.”

We should mention that Kiwibot does seem to still be alive.

[ CCC ]

On this episode of the Artificial Intelligence Podcast, Lex Fridman interviews Sebastian Thrun.

[ AI Podcast ]

Photo: FarmWise FarmWise’s AI-powered robots drive autonomously through crops, looking for weeds to kill.

At first glance, the crops don’t look any different from other crops blanketing the Salinas Valley, in California, which is often called “America’s salad bowl.” All you see are rows and rows of lettuce, broccoli, and cauliflower stretching to the horizon. But then the big orange robots roll through.

The machines are on a search-and-destroy mission. Their target? Weeds. Equipped with tractorlike wheels and an array of cameras and environmental sensors, they drive autonomously up and down the rows of produce, hunting for any leafy green invaders. Rather than spraying herbicides, they deploy a retractable hoe that kills the weeds swiftly and precisely.

The robots belong to FarmWise, a San Francisco startup that wants to use robotics and artificial intelligence to make agriculture more sustainable—and tastier. The company has raised US $14.5 million in a recent funding round, and in 2020 it plans to deploy its first commercial fleet of robots, with more than 10 machines serving farmers in the Salinas Valley.

FarmWise says that although its robots are currently optimized for weeding, future designs will do much more. “Our goal is to become a universal farming platform,” says cofounder and CEO ­Sébastien Boyer. “We want to automate pretty much all tasks from seeding all the way to harvesting.”

Boyer envisions the robots collecting vast amounts of data, including detailed images of the crops and parameters that affect their health such as temperature, humidity, and soil conditions. But it’s what the robots will do with the data that makes them truly remarkable. Using machine learning, they’ll identify each plant individually, determine whether it’s thriving, and tend to it accordingly. Thanks to these AI-powered robots, every broccoli stalk will get the attention it needs to be the best broccoli it can be.

Automation is not new to agriculture. Wheeled harvesters are increasingly autonomous, and farmers have long been flying drones to monitor their crops from above. Also under development are robots designed to pick fruits and vegetables—apples, peppers, strawberries, tomatoes, grapes, cucumbers, asparagus. More recently, a number of robotics companies have turned their attention to ways they can improve the quality or yield of crops.

Farming robots are still a “very nascent market,” says Rian Whitton, a senior analyst at ABI Research, in London, but it’s one that will “expand significantly over the next 10 years.” ABI forecasts that annual shipments of mobile robots for agriculture will exceed 100,000 units globally by 2030, 100 times the volume deployed today.

It’s still a small number compared with the millions of tractors and other farming vehicles sold each year, but Whitton notes that demand for automation will likely accelerate due to labor shortages in many parts of the world.

Photo: FarmWise FarmWise plans to deploy its first commercial fleet of robots in the Salinas Valley, in California.

FarmWise says it has worked closely with farmers to understand their needs and develop its robots based on their feedback. So how do they work? Boyer is not prepared to reveal specifics about the company’s technology, but he says the machines operate in three steps.

First, the sensor array captures images and other relevant data about the crops and stores that information on both onboard computers and cloud servers. The second step is the decision-making process, in which specialized deep-learning algorithms analyze the data. There’s an algorithm trained to detect plants in an image, and the robots combine that output with GPS and other location data to precisely identify each plant. Another algorithm is trained to decide whether a plant is, say, a lettuce head or a weed. The final step is the physical action that the machines perform on the crops—for example, deploying the weeding hoe.
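
FarmWise hasn’t disclosed its software stack, but the three steps Boyer describes map onto a familiar sense-decide-act loop. The sketch below is a hypothetical illustration of that loop; every name in it (the camera, GPS, hoe, detector, and classifier interfaces) is invented rather than taken from FarmWise.

```python
# Hypothetical sense-decide-act loop for a weeding robot, not FarmWise's code.
# The camera, gps, hoe, detector, and classifier objects are invented stand-ins.
def weeding_loop(camera, gps, hoe, plant_detector, crop_classifier):
    while True:
        frame = camera.capture_frame()                  # step 1: sense
        fix = gps.current_position()

        detections = plant_detector(frame)              # step 2: decide
        for det in detections:
            plant_position = fix.offset_by(det.ground_offset())
            label = crop_classifier(det.image_patch())  # e.g., "lettuce" vs. "weed"
            if label == "weed":
                hoe.strike(plant_position)              # step 3: act
```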

Boyer says the robots perform the three steps in less than a second. Indeed, the robots can drive through the fields clearing the soil at a pace that would be virtually impossible for humans to match. FarmWise says its robots have removed weeds from more than 10 million plants to date.

Whitton, the ABI analyst, says focusing on weeding as an initial application makes sense. “There are potentially billions of dollars to be saved from less pesticide use, so that’s the fashionable use case,” he says. But he adds that commercial success for agriculture automation startups will depend on whether they can expand their services to perform additional farming tasks as well as operate in a variety of regions and climates.

Already FarmWise has a growing number of competitors. Deepfield Robotics, a spin-out of the German conglomerate Robert Bosch, is testing an autonomous vehicle that kills weeds by punching them into the ground. The Australian startup Agerris is developing mobile robots for monitoring and spraying crops. And Sunnyvale, Calif.–based Blue River Technology, acquired by John Deere in 2017, is building robotic machines for weeding large field crops like cotton and soybeans.

FarmWise says it has recently completed a redesign of its robots. The new version is better suited to withstand the harsh conditions often found in the field, including mud, dust, and water. The company is now expanding its staff as it prepares to deploy its robotic fleet in California, and eventually in other parts of the United States and abroad.

Boyer is confident that farms everywhere will one day be filled with robots—and that they’ll grow some of the best broccoli you’ve ever tasted.

Photo: Boeing No cockpit mars the clean lines of this unpiloted blue streak.

If you drive along the main northern road through South Australia with a good set of binoculars, you may soon be able to catch a glimpse of a strange, windowless jet, one that is about to embark on its maiden flight. It’s a prototype of the next big thing in aerial combat: a self-piloted warplane designed to work together with human-piloted aircraft.

The Royal Australian Air Force (RAAF) and Boeing Australia are building this fighterlike plane for possible operational use in the mid-2020s. Trials are set to start this year, and although the RAAF won’t confirm the exact location, the quiet electromagnetic environment, size, and remoteness of the Woomera Prohibited Area make it a likely candidate. Named for ancient Aboriginal spear throwers, Woomera spans an area bigger than North Korea, making it the largest weapons-testing range on the planet.

The autonomous plane, formally called the Airpower Teaming System but often known as “Loyal Wingman,” is 11.7 meters (38 feet) long and clean cut, with sharp angles offset by soft curves. The look is quietly aggressive.

Three prototypes will be built under a project first revealed by Boeing and the RAAF in February 2019. Those prototypes are not meant to meet predetermined specifications but rather to help aviators and engineers work out the future of air combat. This may be the first experiment to truly portend the end of the era of crewed warplanes.

“We want to explore the viability of an autonomous system and understand the challenges we’ll face,” says RAAF Air Commodore Darren Goldie.

Australia has chipped in US $27 million (AU $40 million), but the bulk of the cost is borne by Boeing, and the company will retain ownership of the three prototypes. Boeing says the project is the largest investment in uncrewed aircraft it’s ever made outside the United States, although a spokesperson would not give an exact figure.

The RAAF already operates a variety of advanced aircraft, such as Lockheed Martin F-35 jets, but these $100 million fighters are increasingly seen as too expensive to send into contested airspace. You don’t swat a fly with a gold mallet. The strategic purpose of the Wingman project is to explore whether comparatively cheap and expendable autonomous fighters could bulk up Australia’s air power. Sheer strength in numbers may prove handy in deterring other regional players, notably China, which are expanding their own fleets.

“Quantity has a quality of its own,” Goldie says.

The goal of the project is to put cost before capability, creating enough “combat mass” to overload enemy calculations. During operations, Loyal Wingman aircraft will act as extensions of the piloted aircraft they accompany. They could collect intelligence, jam enemy electronic systems, and possibly drop bombs or shoot down other planes.

“They could have a number of uses,” Goldie says. “An example might be a manned aircraft giving it a command to go out in advance to trigger enemy air defense systems—similar to that achieved by [U.S.-military] Miniature Air-Launched Decoys.”

The aircraft are also designed to operate as a swarm. Many of these autonomous fighters with cheap individual sensors, for example, could fly in a “distributed antenna” geometry, collectively creating a greater electromagnetic aperture than you could get with a single expensive sensor. Such a distributed antenna could also help the system resist jamming.

“This is a really big concept, because you’re giving the pilots in manned aircraft a bigger picture,” Boeing Australia director Shane Arnott says. These guidelines have created two opposing goals: On one hand, the Wingman must be stealthy, fast, and maneuverable, and have some level of autonomy. On the other, it must be cheap enough to be expendable.

The development of Wingman began with numerical simulations, as Boeing Australia and the RAAF applied computational fluid dynamics to calculate the aerodynamic properties of the plane. Physical prototypes were then built for testing in wind tunnels, designing electrical wiring, and the other stages of systems engineering. Measurements from sensors attached to a prototype were used to create and refine a “digital twin,” which Arnott describes as one of the most comprehensive Boeing has ever made. “That will become important as we upgrade the system, integrate new sensors, and come up with different approaches to help us with the certification phase,” Arnott says.

The physical result is a clean-sheet design with a custom exterior and a lot of off-the-shelf components inside. The composite exterior is designed to reflect radar as weakly as possible. Sharply angled surfaces, called chines, run from the nose to the air intakes on either side of the lower fuselage; chines then run further back from those intakes to the wings and to twin tail fins, which are slightly canted from the vertical.

This design avoids angles that might reflect radar signals straight back to the source, like a ball bouncing off the inside corner of a box. Instead, the design deflects them erratically. Payloads are hidden in the belly. Of course, if the goal is to trigger enemy air defense systems, such a plane could easily turn nonstealthy.

The design benefits from the absence of a pilot. There is no cockpit to break the line, nor a human who must be protected from the brain-draining forces of acceleration.

“The ability to remove the human means you’re fundamentally allowing a change in the design of the aircraft, particularly the pronounced forward part of the fuselage,” Goldie says. “Lowering the profile can lower the radar cross section and allow a widened flight envelope.”

The trade-off is cost. To keep it down, the Wingman uses what Boeing calls a “very light commercial jet engine” to achieve a range of about 3,700 km (2,300 miles), roughly the distance between Seville and Moscow. The internal sensors are derived from those miniaturized for commercial applications.

Additional savings have come from Boeing’s prior investments in automating its supply chains. The composite exterior is made using robotic manufacturing techniques first developed for commercial planes at Boeing’s aerostructures fabrication site in Melbourne, the company’s largest factory outside the United States.

The approach has yielded an aircraft that is cheaper, faster, and more agile than today’s drones. The most significant difference, however, is that the Wingman can make its own decisions. “Unmanned aircraft that are flown from the ground are just manned from a different part of the system. This is a different concept,” Goldie says. “There’s nobody physically telling the system to iteratively go up, left, right, or down. The aircraft could be told to fly to a position and do a particular role. Inherent in its design is an ability to achieve that reliably.”

Setting the exact parameters of the Loyal Wingman’s autonomy—which decisions will be made by the machine and which by a human—is the main challenge. If too much money is invested in perfecting the software, the Wingman could become too expensive; too little, however, may leave it incapable of carrying out the required operations.

The software itself has been developed using the digital twin, a simulation that has been digitally “flown” thousands of times. Boeing is also using 15 test-bed aircraft to “refine autonomous control algorithms, data fusion, object-detection systems, and collision-avoidance behaviors,” the company says on its website. These include five higher-performance test jets.

“We understand radar cross sections and g-force stress on an aircraft. We need to know more about the characteristics of the autonomy that underpins that, what it can achieve and how reliable it can be,” Goldie says.

“Say you have an autonomous aircraft flying in a fighter formation, and it suddenly starts jamming frequencies the other aircraft are using or [are] reliant upon,” he continues. “We can design the aircraft to not do those things, but how do we do that and keep costs down? That’s a challenge.”

Arnott also emphasizes the exploratory nature of the Loyal Wingman program. “Just as we’ve figured out what is ‘good enough’ for the airframe, we’re figuring out what level of autonomy is also ‘good enough,’ ” Arnott says. “That’s a big part of what this program is doing.”

The need to balance capability and cost also affects how the designers can protect the aircraft against enemy countermeasures. The Wingman’s stealth and maneuverability will make it harder to hit with antiaircraft missiles that rely on impact to destroy their targets, so the most plausible countermeasures are cybertechniques that hack the aircraft’s communications, perhaps to tell it to fly home, or electromagnetic methods that fry the airplane’s internal electronics.

Stealth protection can go only so far. And investing heavily in each aircraft’s defenses would raise costs. “How much do you build in resilience, or just accept this aircraft is not meant to be survivable?” Goldie says.

This year’s test flights should help engineers weigh trade-offs between resilience and cost. Those flights will also answer specific questions: Can the Wingman run low on fuel and decide to come home? Or can it decide to sacrifice itself to save a human pilot? And at the heart of it all is the fundamental question facing militaries the world over: Should air power be cheap and expendable or costly and capable?

Other countries have taken different approaches. The United Kingdom’s Royal Air Force has selected Boeing and several other contractors to produce design ideas for the Lightweight Affordable Novel Combat Aircraft program, with test flights planned in 2022. Boeing has also expressed interest in the U.S. Air Force’s similar Skyborg program, which uses the XQ-58 Valkyrie, a fighterlike drone made by Kratos, of San Diego.

China is also in the game. It has displayed the GJ-11 unmanned stealth combat aircraft and the GJ-2 reconnaissance and strike aircraft; the level of autonomy in these aircraft is not clear. China has also developed the LJ-1, a drone akin to the Loyal Wingman, which may also function as a cruise missile.

Military aerospace projects often have specific requirements that contractors must fulfill. The Loyal Wingman is instead trying to decide what the requirements themselves should be. “We are creating a market,” Arnott says.

The Australian project, in other words, is agnostic as to what role autonomous aircraft should play. It could result in an aircraft that is cheaper than the weapons that will shoot it down, meaning each lost Wingman is actually a net win. It could also result in an aircraft that can almost match a crewed fighter jet’s capabilities at half the cost.

This article appears in the January 2020 print issue as “A Robot Is My Wingman.”

Photo: United Parcel Service This large quadcopter delivers medical samples at a Raleigh hospital complex.

When Amazon made public its plans to deliver packages by drone six years ago, many skeptics scoffed—including some at this magazine. It just didn’t seem safe or practical to have tiny buzzing robotic aircraft crisscrossing the sky with Amazon orders. Today, views on the prospect of getting stuff swiftly whisked to you this way have shifted, in part because some packages are already being delivered by drone, including examples in Europe, Australia, and Africa, sometimes with life-saving consequences. In 2020, we should see such operations multiply, even in the strictly regulated skies over the United States.

There are several reasons to believe that package delivery by drone may soon be coming to a city near you. The most obvious one is that technical barriers standing in the way are crumbling.

The chief challenge, of course, is the worry that an autonomous package-delivery drone might collide with an aircraft carrying people. In 2020, however, it’s going to be easier to ensure that won’t happen, because as of 1 January, airplanes and helicopters are required to broadcast their positions by radio using what is known as automatic dependent surveillance–broadcast out (ADS-B Out) equipment carried on board. (There are exceptions to that requirement, such as for gliders and balloons, or for aircraft operating only in uncontrolled airspace.) This makes it relatively straightforward for the operator of a properly equipped drone to determine whether a conventional airplane or helicopter is close enough to be of concern.
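
The article doesn’t describe the software an operator would run on those received positions, but the core check is just a distance test. The Python sketch below assumes the ADS-B messages have already been decoded into latitude/longitude pairs and uses the haversine formula; the 5-kilometer alert radius is an arbitrary example, not a regulatory figure.

```python
# Hedged sketch: flag already-decoded ADS-B traffic within an alert radius of a
# drone's position. Decoding the 1090 MHz messages themselves is out of scope,
# and the default radius is an invented example value.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two points given in degrees."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def traffic_of_concern(drone_lat, drone_lon, aircraft_positions, alert_radius_m=5_000):
    """Return the (lat, lon) tuples of aircraft inside the alert radius."""
    return [(lat, lon) for lat, lon in aircraft_positions
            if haversine_m(drone_lat, drone_lon, lat, lon) < alert_radius_m]
```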

Indeed, DJI, the world’s leading drone maker, has promised that from here on out it will equip any drone it sells weighing over 250 grams (9 ounces) with the ability to receive ADS-B signals and to inform the operator that a conventional airplane or helicopter is flying nearby. DJI calls this feature AirSense. “It works very well,” says Brendan Schulman, vice president for policy and legal affairs at DJI—noting, though, that it works only “in one direction.” That is, pilots don’t get the benefit of ADS-B signals from drones.

Drones will not carry ADS-B Out equipment, Schulman explains, because the vast number of small drones would overwhelm air-traffic controllers with mostly useless information about their whereabouts. But it will eventually be possible for pilots and others to determine whether there are any drones close enough to worry about; the key is a system for the remote identification of drones that the U.S. Federal Aviation Administration is now working to establish. The FAA took the first formal step in that direction yesterday, when the agency published a Notice of Proposed Rulemaking on remote ID for drones.

Before the new regulations go into effect, the FAA will have to receive and react to public comments on its proposed rules for drone ID. That will take many months. But some form of electronic license plates for drones is definitely coming, and we’ll likely see that happening even before the FAA mandates it. This identification system will pave the way for package delivery and other beyond-line-of-sight operations that fly over people. (Indeed, the FAA has stated that it does not intend to establish rules for drone flights over people until remote ID is in place.)

Photo: United Parcel Service Technicians carry out certain preflight procedures, as with any airline.

One of the few U.S. sites where drones are making commercial deliveries already is Wake County, N.C. Since March of last year, drones have been ferrying medical samples at WakeMed’s sprawling hospital campus on the east side of Raleigh. Last September, UPS Flight Forward, the subsidiary of United Parcel Service that is carrying out these drone flights, obtained formal certification from the FAA as an air carrier. The following month, Wing, a division of Alphabet, Google’s parent company, launched the first residential drone-based delivery service to begin commercial operations in the United States, ferrying small packages from downtown Christiansburg, Va., to nearby neighborhoods. These projects in North Carolina and Virginia, two of a handful being carried out under the FAA’s UAS Integration Pilot Program, show that the idea of using drones to deliver packages is slowly but surely maturing.

“We’ve been operating this service five days a week, on the hour,” says Stuart Ginn, a former airline pilot who is now a head-and-neck surgeon at WakeMed. He was instrumental in bringing drone delivery to this hospital system in partnership with UPS and California-based Matternet.

Right now the drone flying at WakeMed doesn’t travel beyond the operators’ line of sight. But Ginn says that he and others behind the project should soon get FAA clearance to fly packages to the hospital by drone from a clinic located some 16 kilometers away. “I’d be surprised and disappointed if that doesn’t happen in 2020,” says Ginn. The ability to connect nearby medical facilities by drone, notes Ginn, will get used “in ways we don’t anticipate.”

This article appears in the January 2020 print issue as “The Delivery Drones Are Coming.”

How can headphone-wearing pedestrians tune out the chaotic world around them without compromising their own safety? One solution may come from the pedestrian equivalent of a vehicle collision warning system that aims to detect nearby vehicles based purely on sound.

The intelligent headphone system uses machine learning algorithms to interpret sounds and alert pedestrians to the location of vehicles up to 60 meters away. A prototype of the Pedestrian Audio Warning System (PAWS) can only detect the location but not the trajectory of a nearby vehicle—never mind the locations or trajectories of multiple vehicles. Still, it’s a first step for a possible pedestrian-centered safety aid at a time when the number of pedestrians killed on U.S. roads reached a three-decade high in 2018.

“Sometimes the newer vehicles have sensors that can tell if there are pedestrians, but pedestrians usually don’t have a way to tell if vehicles are on a collision trajectory,” says Xiaofan Jiang, an assistant professor of electrical engineering and member of the Data Science Institute at Columbia University.

The idea first came to Jiang when he noticed that a new pair of noise-cancelling headphones was distracting him more than usual from his surroundings during a walk to work. That insight spurred Jiang and his colleagues at Columbia, the University of North Carolina at Chapel Hill, and Barnard College to develop PAWS and publish their work in the October 2019 issue of the IEEE Internet of Things Journal.

Photo: Electrical Engineering and Data Science Institute/Columbia University The Pedestrian Audio Warning System detects nearby cars by using microphones and machine learning algorithms to analyze vehicle sounds. 

Many cars with collision warning systems rely upon visual cameras, radar, or lidar to detect nearby objects. But Jiang and his colleagues soon realized that a pedestrian-focused system would need a low-power sensor that could operate for more than six hours on standard batteries. “So we decided to go with an array of microphones, which are very inexpensive and low-power sensors,” Jiang says.

The array’s four microphones are distributed across different parts of the headphones, but the warning system’s main hardware is designed to fit inside the left ear housing of commercial headphones and draws power from a rechargeable lithium-ion battery. A custom integrated circuit saves power by extracting only the most relevant sound features from the captured audio and transmitting that information to a paired smartphone app.

The smartphone hosts the machine learning algorithms that were trained on audio from 60 different types of vehicles in a variety of environments: a street adjacent to a university campus and residential area, the side of a windy highway during hurricane season, and the busy streets of Manhattan.
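
PAWS does its localization with learned models, but the reason a microphone array helps at all can be shown with a classical baseline: the arrival-time difference between two microphones constrains the sound’s bearing. The sketch below is that textbook two-microphone estimate, not the PAWS algorithm, and it assumes synchronized, equally sampled signals. Note that a correlation-based estimate like this locks onto whichever source dominates the recording, which previews the limitation discussed next.

```python
# Classical two-microphone bearing estimate from the time difference of arrival,
# shown only as a baseline; PAWS itself uses learned models on extracted features.
import numpy as np

SPEED_OF_SOUND_M_S = 343.0

def estimate_bearing(sig_a, sig_b, mic_spacing_m, sample_rate_hz):
    """Return the bearing (radians from broadside) of the dominant sound source."""
    # Find the lag (in samples) at which the two channels align best.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    delay_s = lag / sample_rate_hz
    # Far-field geometry: delay = spacing * sin(bearing) / speed of sound.
    sin_bearing = np.clip(delay_s * SPEED_OF_SOUND_M_S / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(sin_bearing))
```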

However, relying purely on sound to detect vehicles has proven tricky. For one thing, the system tends to focus on localizing the loudest vehicle, which may not be the vehicle closest to the pedestrian. The system also still has trouble locating multiple vehicles or even estimating how many vehicles are present.

Photo: Electrical Engineering and Data Science Institute/Columbia University The hardware for the Pedestrian Audio Warning System can fit inside the ear housing of commercial headphones.

As it stands, the PAWS capability to localize a vehicle up to 60 meters away might provide at least several seconds of warning depending on the speed of an oncoming vehicle. But a truly useful warning system would also be able to track the trajectory of a nearby vehicle and only provide a warning if it’s on course to potentially hit the pedestrian. That may require the researchers to figure out better ways to track both the pedestrian’s location and trajectory along with the same information for vehicles.

“If you imagine one person walking along the street, many cars may pass by but none will hit the person,” Jiang explains. “We have to take into account other information to make this collision detection more useful.”
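
A standard way to add that trajectory reasoning is a closest-point-of-approach test on the vehicle’s position and velocity relative to the pedestrian. The sketch below is a generic illustration of that test, not something taken from the PAWS paper, and the 3-meter and 5-second warning thresholds are invented.

```python
# Generic closest-point-of-approach (CPA) test, not part of the published PAWS
# system: given the vehicle's position and velocity relative to the pedestrian,
# estimate how close it will get and when.
import numpy as np

def closest_point_of_approach(rel_pos_m, rel_vel_mps):
    """Return (time_to_cpa_s, miss_distance_m) for 2D relative state vectors."""
    speed_sq = float(np.dot(rel_vel_mps, rel_vel_mps))
    if speed_sq < 1e-9:                       # essentially no relative motion
        return 0.0, float(np.linalg.norm(rel_pos_m))
    t_cpa = max(0.0, -float(np.dot(rel_pos_m, rel_vel_mps)) / speed_sq)
    miss = float(np.linalg.norm(rel_pos_m + rel_vel_mps * t_cpa))
    return t_cpa, miss

# Invented example: a vehicle 40 m ahead and 5 m to the side, closing at 10 m/s,
# passes about 5 m away after 4 s, so no warning under these example thresholds.
t, d = closest_point_of_approach(np.array([40.0, 5.0]), np.array([-10.0, 0.0]))
should_warn = d < 3.0 and t < 5.0
```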

Work also continues on how the system should alert headphone wearers. Joshua New, a behavioral psychologist at Barnard College, plans to conduct experiments to see which warning cue works best at giving people a heads-up. For now, the team is leaning toward either a warning beep on one side of the stereo headphones or simulated 3D warning sounds that provide more spatially relevant information.

Beyond ordinary pedestrians, police officers performing a traffic stop on a busy road or construction workers wearing ear protection might also benefit from such technology, Jiang says. The PAWS project has already received US $1.2 million from the National Science Foundation, and the team has an eye on eventually handing a more refined version of the technology over to a company to commercialize it.

Of course, one technology will not solve the challenges of pedestrian safety. In its 2019 report, the Governors Highway Safety Association blamed higher numbers of pedestrian deaths on many factors such as a lack of safe road crossings, and generally unsafe driving by speeding, distracted, or drunk drivers. A headphone equipped with PAWS is unlikely to prevent even a majority of pedestrian deaths—but a few seconds’ warning might help spare some lives.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores

Let us know if you have suggestions for next week, and enjoy today’s videos.

Thank you to our readers and Happy Holidays from IEEE Spectrum’s robotics team!
—Erico, Evan, and Fan

Happy Holidays from FZI Living Lab!

This is what a robot holiday video should be. Amazing work from FZI!

[ FZI ]

Thanks Arne!

This is the robot I’m most excited about for 2020:

[ IIT ]

Happy Holidays from ETH Zurich’s Autonomous Systems Lab!

[ ASL ]

Digit v2 demonstrates autonomous pick and place with multiple boxes.

[ Agility Robotics ]

Happy Holidays from EPFL LMTS, whose soft robots we wrote about this week!

NOW SMACK THEM!

[ LMTS ]

Happy Holidays from ETH Zurich’s Robotic Systems Lab!

[ RSL ]

Happy Holidays from OTTO Motors!

OTTO Motors is based in Ontario, which, being in Canada, is basically the North Pole.

[ OTTO Motors ]

Happy Holidays from FANUC!

[ FANUC ]

Brain Corp makes the brains required to turn manual cleaning machines into autonomous robotic cleaning machines.

Braaains.

[ Brain Corp ]

Happy Holidays from RE2 Robotics!

[ RE2 ]

Happy Holidays from Denso Robotics!

[ Denso ]

Happy Holidays from Robodev!

That sandwich thing looks pretty good, but I'm not sold on the potato.

[ Robodev ]

Thanks Andreas!

Happy Holidays from Kawasaki Robotics!

[ Kawasaki ]

On Dec. 17, 2019, engineers took NASA’s next Mars rover for its first spin. The test took place in the Spacecraft Assembly Facility clean room at NASA’s Jet Propulsion Laboratory in Pasadena, California. This was the first drive test for the new rover, which will move to Cape Canaveral, Florida, in the beginning of next year to prepare for its launch to Mars in the summer. Engineers are checking that all the systems are working together properly, the rover can operate under its own weight, and the rover can demonstrate many of its autonomous navigation functions. The launch window for Mars 2020 opens on July 17, 2020. The rover will land at Mars' Jezero Crater on Feb. 18, 2021.

[ JPL ]

Happy Holidays from Laval University’s Northern Robotics Laboratory!

[ Norlab ]

The Chaparral is a hybrid-electric vertical takeoff and landing (VTOL) cargo aircraft being developed by the team at Elroy Air in San Francisco, CA. The system will carry 300 pounds of cargo over a 300-mile range. This video reveals a bit more about the system than we've shown in the past. Enjoy!

[ Elroy Air ]

FANUC's new CRX-10iA and CRX-10iA/L collaborative robots feature quick setup, easy programming and reliable performance.

[ FANUC ]

Omron’s ping pong robot is pretty good at the game, as long as you’re only pretty good at the game. If you’re much better than pretty good, it’s pretty bad.

[ Omron ]

The Voliro drone may not look like it’s doing anything all that difficult but wait until it flips 90 degrees and stands on its head!

[ Voliro ]

Based on a unique, patented technology, ROVéo can swiftly tackle rough terrain, as well as steps and stairs, by simply adapting to their shape. It is ideal for monitoring security both outside and inside big industrial sites.

[ Rovenso ]

A picture says more than a thousand words, a video more than a thousand pictures. For this reason, we have produced a series of short films that present the researchers at the Max Planck Institute for Intelligent Systems, their projects and goals. We want to give an insight into our institute, making the work done here understandable for everyone. We continue the series with a portrait of the "Dynamic Locomotion" Max Planck research group led by Dr. Alexander Badri-Spröwitz.

[ Max Planck ]

Thanks Fan!

This is a 13-minute-long IREX demo of Kawasaki’s Kaleido humanoid.

[ Kawasaki ]

Learn how TRI is working to build an uncrashable car, use robotics to amplify people’s capabilities as they age and leverage artificial intelligence to enable discovery of new materials for batteries and fuel cells.

[ Girl Geek X ]

Non-destructive handling of soft biological samples at the cellular level is becoming increasingly relevant in life sciences. In particular, spatially dense arrangements of soft manipulators with the capability of in situ monitoring via optical and electron microscopes promise new and exciting experimental techniques. The currently available manipulation technologies offer high positioning accuracy, yet these devices grow significantly in complexity when compliance is required. We explore a soft and compliant actuator material with a mechanical response similar to that of gel-like samples for prospective miniaturized manipulators. First, we demonstrate three techniques for rendering the bulk sheet-like electroactive material, the ionic and capacitive laminate (ICL), into a practical manipulator. We then show that these manipulators are also highly compatible with electron optics. Finally, we explore the performance of an ICL manipulator in handling a single large cell. Intrinsic compliance, miniature size, simple current-driven actuation, and negligible interference with the imaging technologies suggest considerable potential for the ICL in spatially dense arrays of compliant manipulators for microscopy.

Researchers at EPFL have developed a soft robotic insect that uses artificial soft muscles called dielectric elastomer actuators to drive tiny feet that propel the little bot along at a respectable speed. And since the whole thing is squishy and already mostly flat, you can repeatedly smash it into the ground with a fly swatter, and then peel it off and watch it start running again. Get ready for one of the most brutal robot abuse videos you’ve ever seen.

We’re obligated to point out that the version of the robot that survives being raged on with the swatter is a tethered one, not the autonomous version with the battery and microcontroller and sensors, which might not react so well to repeated batterings. But still, it’s pretty cool to see it get peeled right off and keep on going, and the researchers say they’ve been able to do this smash n’ peel eight times in a row without destroying the robot.

Powered by dielectric elastomer actuators

One of the tricky things about building robots like these (that rely on very high-speed actuation) is power—the power levels themselves are usually low, in the milliwatt range, but the actuators generally require several kilovolts to function, meaning that you need a bunch of electronics that can boost the battery voltage up to something you can use. Even miniaturized power systems are in the tens of grams, which is obviously impractical for a robot that weighs one gram or less. Dielectric elastomer actuators, or DEAs, are no exception to this, so the researchers instead used a stack of DEAs that could run at a significantly lower voltage. These low-voltage stacked DEAs (LVSDEAs, because more initialisms are better) run at just 450 volts, but cycle at up to 600 hertz, using power electronics weighing just 780 milligrams.
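
The kilovolt requirement comes from the physics of dielectric elastomers: actuation pressure depends on the electric field across the membrane (Maxwell stress, p = eps0 * eps_r * E^2), so the voltage needed for a given field scales with layer thickness. Here is a tiny Python illustration; the target field and layer thicknesses are assumed values for illustration, not figures from the EPFL paper.

# Voltage needed to reach the same electric field across membranes of different thickness
E_target = 80e6                                # target field in V/m (illustrative)
for thickness_um in (100, 25, 6):
    volts = E_target * thickness_um * 1e-6
    print(f"{thickness_um:>3} um layer -> {volts / 1e3:.2f} kV")
# 100 um -> 8.00 kV, 25 um -> 2.00 kV, 6 um -> 0.48 kV

At roughly the same operating field, a stack of few-micrometer layers needs only hundreds of volts where a single 100-micrometer membrane would need several kilovolts, which is the trade the LVSDEA makes: more layers and fabrication effort in exchange for drastically simpler, lighter drive electronics.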

Image: EPFL Each soft robot uses three LVSDEAs to operate three independent legs.

The LVSDEA actuation is converted into motion by using flexible angled legs, similar to a bristlebot. One leg on each side allows the robot to turn, pivoting around a third supporting leg in the front. Top speed of the 190-mg tethered robot is 18 mm/s (0.5 body-lengths/s), while the autonomous version with an 800-mg payload of batteries, electronics, and sensors could move at 12 mm/s for 14 minutes before running out of juice. Interestingly, stiffening the structure of the robot by holding it in a curved shape with a piece of tape significantly increased its performance, nearly doubling its speed to 30 mm/s (0.85 body-lengths/s) and boosting its payload capacity as well.

What we’re all waiting for, of course, is a soft robot that can be smashable and untethered at the same time. This is always the issue with soft robots—they’re almost always just mostly soft, requiring either off-board power or rigid components in the form of electronics or batteries. The EPFL researchers say that they’re “currently working on an untethered and entirely soft version” in partnership with Stanford, which we’re very excited to see.

[ EPFL ]

This paper presents a method to grasp objects that cannot be picked directly from a table, using a soft, underactuated hand. These grasps are achieved by dragging the object to the edge of a table, and grasping it from the protruding part, performing so-called slide-to-edge grasps. This type of approach, which uses the environment to facilitate the grasp, is named Environmental Constraint Exploitation (ECE), and has been shown to improve the robustness of grasps while reducing the planning effort. The paper proposes two strategies, namely Continuous Slide and Grasp and Pivot and Re-Grasp, that are designed to deal with different objects. In the first strategy, the hand is positioned over the object and assumed to stick to it during the sliding until the edge, where the fingers wrap around the object and pick it up. In the second strategy, instead, the sliding motion is performed using pivoting, and thus the object is allowed to rotate with respect to the hand that drags it toward the edge. Then, as soon as the object reaches the desired position, the hand detaches from the object and moves to grasp the object from the side. In both strategies, the hand positioning for grasping the object is implemented using a recently proposed functional model for soft hands, the closure signature, whereas the sliding motion on the table is executed by using a hybrid force-velocity controller. We conducted 320 grasping trials with 16 different objects using a soft hand attached to a collaborative robot arm. Experiments showed that the Continuous Slide and Grasp is more suitable for small objects (e.g., a credit card), whereas the Pivot and Re-Grasp performs better with larger objects (e.g., a big book). The gathered data were used to train a classifier that selects the most suitable strategy to use, according to the object size and weight. Implementing ECE strategies with soft hands is a first step toward their use in real-world scenarios, where the environment should be seen more as a help than as a hindrance.
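
As a toy illustration of the strategy selection described above, here is a hypothetical Python rule that picks between the two slide-to-edge strategies based on object size and weight. The thresholds are placeholders invented for illustration; the paper instead trains a classifier on its 320 recorded grasping trials.

# Hypothetical strategy selector; thresholds are illustrative, not from the paper
def choose_strategy(max_dim_cm: float, weight_g: float) -> str:
    if max_dim_cm < 15 and weight_g < 200:       # small, light objects (e.g., a credit card)
        return "continuous_slide_and_grasp"
    return "pivot_and_regrasp"                   # larger, heavier objects (e.g., a big book)

print(choose_strategy(8.6, 5))     # -> continuous_slide_and_grasp
print(choose_strategy(28, 900))    # -> pivot_and_regrasp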

For the most part, robots are a mystery to end users. And that’s part of the point: Robots are autonomous, so they’re supposed to do their own thing (presumably the thing that you want them to do) and not bother you about it. But as humans start to work more closely with robots, in collaborative tasks or social or assistive contexts, it’s going to be hard for us to trust them if their autonomy is such that we find it difficult to understand what they’re doing.

In a paper published in Science Robotics, researchers from UCLA developed a robotic system that can generate different kinds of real-time, human-readable explanations about its actions, and then ran tests to figure out which of those explanations were the most effective at improving a human’s trust in the system. Does this mean we can totally understand and trust robots now? Not yet—but it’s a start.

This work was funded by DARPA’s Explainable AI (XAI) program, which has a goal of being able to “understand the context and environment in which they operate, and over time build underlying explanatory models that allow them to characterize real world phenomena.” According to DARPA, “explainable AI—especially explainable machine learning—will be essential if [humans] are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”

There are a few different issues that XAI has to tackle. One of those is the inherent opaqueness of machine learning models, where you throw a big pile of training data at some kind of network, which then does what you want it to do most of the time but also sometimes fails in weird ways that are very difficult to understand or predict. A second issue is figuring out how AI systems (and the robots that they inhabit) can effectively communicate what they’re doing with humans, via what DARPA refers to as an explanation interface. This is what UCLA has been working on.

The present project aims to disentangle explainability from task performance, measuring each separately to gauge the advantages and limitations of two major families of representations—symbolic representations and data-driven representations—in both task performance and fostering human trust. The goals are to explore (i) what constitutes a good performer for a complex robot manipulation task? (ii) How can we construct an effective explainer to explain robot behavior and foster human trust?

UCLA’s Baxter robot learned how to open a safety-cap medication bottle (tricky for robots and humans alike) by learning a manipulation model from haptic demonstrations provided by humans opening medication bottles while wearing a sensorized glove. This was combined with a symbolic action planner to allow the robot to adjust its actions to bottles with different kinds of caps, and it does a good job without the inherent mystery of a neural network.

Intuitively, such an integration of the symbolic planner and haptic model enables the robot to ask itself: “On the basis of the human demonstration, the poses and forces I perceive right now, and the action sequence I have executed thus far, which action has the highest likelihood of opening the bottle?”
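
Here is a toy Python sketch (not the authors’ code) of that integration: a symbolic planner supplies a prior over grammar-legal next actions, and a haptic model re-weights them by how plausible each is given the forces sensed so far. Every action name, probability, and threshold below is made up for illustration.

ACTIONS = ["approach", "grasp", "push", "twist", "pull"]

def planner_prior(history):
    # Toy action "grammar": after a grasp you usually push or twist; after a push, twist
    table = {
        "grasp": {"push": 0.5, "twist": 0.5},
        "push":  {"twist": 0.9, "pull": 0.1},
        "twist": {"pull": 0.5, "twist": 0.5},
    }
    last = history[-1] if history else None
    return table.get(last, {"approach": 0.6, "grasp": 0.4})

def haptic_likelihood(action, wrist_force):
    # Toy haptic model: twisting only looks promising once the gripper feels real downward force
    if action == "twist":
        return 0.9 if wrist_force > 5.0 else 0.2
    return 0.5

def select_next_action(history, wrist_force):
    prior = planner_prior(history)
    scores = {a: prior.get(a, 0.0) * haptic_likelihood(a, wrist_force) for a in ACTIONS}
    return max(scores, key=scores.get)

print(select_next_action(["approach", "grasp", "push"], wrist_force=8.0))  # -> "twist"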

Both the haptic model and the symbolic planner can be leveraged to provide human-compatible explanations of what the robot is doing. The haptic model can visually explain an individual action that the robot is taking, while the symbolic planner can show a sequence of actions that are (ideally) leading towards a goal. What’s key here is that these explanations are coming from the planning system itself, rather than something that’s been added later to try and translate between a planner and a human.

Image: Science Robotics

As the robot performs a set of actions (top row of images), its symbolic planner (middle row) and haptic model (bottom row) generate explanations for each action. The red on the robot gripper’s palm indicates a large magnitude of force applied by the gripper, and green indicates no force. These explanations are provided in real time as the robot executes the actions.

To figure out whether these explanations made a difference in the level of a human’s trust or confidence or belief that the robot would be successful at its task, the researchers conducted a psychological study with 150 participants. While watching a video of the robot opening a medicine bottle, groups of participants were shown the haptic planner, the symbolic planner, or both planners at the same time, while two other groups were either shown no explanation at all, or a human-generated one-sentence summary of what the robot did. Survey results showed that the highest trust rating came from the group that had access to both the symbolic and haptic explanations, although the symbolic explanation was more impactful.

In general, humans appear to need real-time, symbolic explanations of the robot’s internal decisions for performed action sequences to establish trust in machines performing multistep complex tasks… Information at the haptic level may be excessively tedious and may not yield a sense of rational agency that allows the robot to gain human trust. To establish human trust in machines and enable humans to predict robot behaviors, it appears that an effective explanation should provide a symbolic interpretation and maintain a tight temporal coupling between the explanation and the robot’s immediate behavior.

This paper focuses on a very specific interpretation of the word “explain.” The robot is able to explain what it’s doing (i.e. the steps that it’s taking) in a way that is easy for humans to interpret, and it’s effective in doing so. However, it’s really just explaining the “what” rather than the “why,” because at least in this case, the “why” (as far as the robot knows) is really just “because a human did it this way” due to the way the robot learned to do the task.

While the “what” explanations did foster more trust in humans in this study, long term, XAI will need to include “why” as well, and the example of the robot unscrewing a medicine bottle illustrates a situation in which it would be useful.

Image: Science Robotics In one study, the researchers showed participants a video of the robot opening the bottle (A). Different groups saw different explanation panels along with the video: (B) Symbolic explanation panel; (C) Haptic explanation panel; (D) Text explanation panel.

You can see that there are several repetitive steps in this successful bottle opening, and as an observer, I have no way of knowing if the robot is repeating an action because the first action failed, or if that was just part of its plan. Maybe opening the bottle really just takes a single grasp-push-twist sequence, but the robot’s gripper slipped the first time.

Personally, when I think of a robot explaining what it’s doing, this is what I’m thinking of. Knowing what a robot was “thinking,” or at least the reasoning behind its actions or non-actions, would significantly increase my comfort with and confidence around robotic systems, because they wouldn’t seem so… Dumb? For example, is that robot just sitting there and not doing anything because it’s broken, or because it’s doing some really complicated motion planning? Is my Roomba wandering around randomly because it’s lost, or is it wandering around pseudorandomly because that’s the most efficient way to clean? Does that medicine bottle need to be twisted again because a gripper slipped the first time, or because it takes two twists to open?

Even if the robot makes a decision that I would disagree with, this level of “why” explanation or “because” explanation means that I can have confidence that the robot isn’t dumb or broken, but is either doing what it was programmed to do, or dealing with some situation that it wasn’t prepared for. In either case, I feel like my trust in it would significantly improve, because I know it’s doing what it’s supposed to be doing and/or the best it can, rather than just having some kind of internal blue screen of death experience or something like that. And if it is dead inside, well, I’d want to know that, too.

Longer-term, the UCLA researchers are working on the “why” as well, but it’s going to take a major shift in the robotics community for even the “what” to become a priority. The fundamental problem is that right now, roboticists in general are relentlessly focused on optimization for performance—who cares what’s going on inside your black box system as long as it can successfully grasp random objects 99.9 percent of the time?

But people should care, says lead author of the UCLA paper Mark Edmonds. “I think that explanation should be considered along with performance,” he says. “Even if you have better performance, if you’re not able to provide an explanation, is that actually better?” He added: “The purpose of XAI in general is not to encourage people to stop going down that performance-driven path, but to instead take a step back, and ask, ‘What is this system really learning, and how can we get it to tell us?’ ”

It’s a little scary, I think, to have systems (and in some cases safety critical systems) that work just because they work—because they were fed a ton of training data and consequently seem to do what they’re supposed to do to the extent that you’re able to test them. But you only ever have the vaguest of ideas why these systems are working, and as robots and AI become a more prominent part of our society, explainability will be a critical factor in allowing us to comfortably trust them.

“A Tale of Two Explanations: Enhancing Human Trust by Explaining Robot Behavior,” by M. Edmonds, F. Gao, H. Liu, X. Xie, S. Qi, Y. Zhu, Y.N. Wu, H. Lu, and S.-C. Zhu from the University of California, Los Angeles, and B. Rothrock from the California Institute of Technology, in Pasadena, Calif., appears in the current issue of Science Robotics.
