Feed aggregator

Kate Darling is an expert on human-robot interaction, robot ethics, intellectual property, and all sorts of other things at the MIT Media Lab. She’s written several excellent articles for us in the past, and we’re delighted to be able to share this excerpt from her new book, which comes out today. Entitled The New Breed: What Our History with Animals Reveals about Our Future with Robots, Kate’s book is an exploration of how animals can help us understand our robot relationships, and how far that comparison can really be extended. It’s solidly based on well-cited research, including many HRI studies that we’ve written about in the past, but Kate brings everything together and tells us what it all could mean as robots continue to integrate themselves into our lives.

The following excerpt is The Power of Movement, a section from the chapter Robots Versus Toasters, which features one of the saddest robot videos I’ve ever seen, even after nearly a decade. Enjoy!

When the first black-and-white motion pictures came to the screen, an 1896 film showing in a Paris cinema is said to have caused a stampede: the first-time moviegoers, watching a giant train barrel toward them, jumped out of their seats and ran away from the screen in panic. According to film scholar Martin Loiperdinger, this story is no more than an urban legend. But this new media format, “moving pictures,” proved to be both immersive and compelling, and was here to stay. Thanks to a baked-in ability to interpret motion, we’re fascinated even by very simple animation because it tells stories we intuitively understand.

In a seminal study from the 1940s, psychologists Fritz Heider and Marianne Simmel showed participants a black-and-white movie of simple, geometrical shapes moving around on a screen. When instructed to describe what they were seeing, nearly every single one of their participants interpreted the shapes to be moving around with agency and purpose. They described the behavior of the triangles and circle the way we describe people’s behavior, by assuming intent and motives. Many of them went so far as to create a complex narrative around the moving shapes. According to one participant: “A man has planned to meet a girl and the girl comes along with another man. [ . . . ] The girl gets worried and races from one corner to the other in the far part of the room. [ . . . ] The girl gets out of the room in a sudden dash just as man number two gets the door open. The two chase around the outside of the room together, followed by man number one. But they finally elude him and get away. The first man goes back and tries to open his door, but he is so blinded by rage and frustration that he can not open it.”

What brought the shapes to life for Heider and Simmel’s participants was solely their movement. We can interpret certain movement in other entities as “worried,” “frustrated,” or “blinded by rage,” even when the “other” is a simple black triangle moving across a white background. A number of studies document how much information we can extract from very basic cues, getting us to assign emotions and gender identity to things as simple as moving points of light. And while we might not run away from a train on a screen, we’re still able to interpret the movement and may even get a little thrill from watching the train in a more modern 3D screening. (There are certainly some embarrassing videos of people—maybe even of me—when we first played games wearing virtual reality headsets.)

Many scientists believe that autonomous movement activates our “life detector.” Because we’ve evolved needing to quickly identify natural predators, our brains are on constant lookout for moving agents. In fact, our perception is so attuned to movement that we separate things into objects and agents, even if we’re looking at a still image. Researchers Joshua New, Leda Cosmides, and John Tooby showed people photos of a variety of scenes, like a nature landscape, a city scene, or an office desk. Then, they switched in an identical image with one addition; for example, a bird, a coffee mug, an elephant, a silo, or a vehicle. They measured how quickly the participants could identify the new appearance. People were substantially quicker and more accurate at detecting the animals compared to all of the other categories, including larger objects and vehicles.

The researchers also found evidence that animal detection activated an entirely different region of people’s brains. Research like this suggests that a specific part of our brain is constantly monitoring for lifelike animal movement. This study in particular also suggests that our ability to separate animals and objects is more likely to be driven by deep ancestral priorities than our own life experiences. Even though we have been living with cars for our whole lives, and they are now more dangerous to us than bears or tigers, we’re still much quicker to detect the presence of an animal.

The biological hardwiring that detects and interprets life in autonomous agent movement is even stronger when it has a body and is in the room with us. John Harris and Ehud Sharlin at the University of Calgary tested this projection with a moving stick. They took a long piece of wood, about the size of a twirler’s baton, and attached one end to a base with motors and eight degrees of freedom. This allowed the researchers to control the stick remotely and wave it around: fast, slow, doing figure eights, etc. They asked the experiment participants to spend some time alone in a room with the moving stick. Then, they had the participants describe their experience.

Only two of the thirty participants described the stick’s movement in technical terms. The others told the researchers that the stick was bowing or otherwise greeting them, claimed it was aggressive and trying to attack them, described it as pensive, “hiding something,” or even “purring happily.” At least ten people said the stick was “dancing.” One woman told the stick to stop pointing at her.

If people can imbue a moving stick with agency, what happens when they meet R2-D2? Given our social tendencies and ingrained responses to lifelike movement in our physical space, it’s fairly unsurprising that people perceive robots as being alive. Robots are physical objects in our space that often move in a way that seems (to our lizard brains) to have agency. A lot of the time, we don’t perceive robots as objects—to us, they are agents. And, while we may enjoy the concept of pet rocks, we love to anthropomorphize agent behavior even more.

We already have a slew of interesting research in this area. For example, people think a robot that’s present in a room with them is more enjoyable than the same robot on a screen and will follow its gaze, mimic its behavior, and be more willing to take the physical robot’s advice. We speak more to embodied robots, smile more, and are more likely to want to interact with them again. People are more willing to obey orders from a physical robot than a computer. When left alone in a room and given the opportunity to cheat on a game, people cheat less when a robot is with them. And children learn more from working with a robot compared to the same character on a screen. We are better at recognizing a robot’s emotional cues and empathize more with physical robots. When researchers told children to put a robot in a closet (while the robot protested and said it was afraid of the dark), many of the kids were hesitant. 

Even adults will hesitate to switch off or hit a robot, especially when they perceive it as intelligent. People are polite to robots and try to help them. People greet robots even if no greeting is required and are friendlier if a robot greets them first. People reciprocate when robots help them. And, like the socially inept [software office assistant] Clippy, when people don’t like a robot, they will call it names. What’s noteworthy in the context of our human comparison is that the robots don’t need to look anything like humans for this to happen. In fact, even very simple robots, when they move around with “purpose,” elicit an inordinate amount of projection from the humans they encounter. Take robot vacuum cleaners. By 2004, a million of them had been deployed and were sweeping through people’s homes, vacuuming dirt, entertaining cats, and occasionally getting stuck in shag rugs. The first versions of the disc-shaped devices had sensors to detect things like steep drop-offs, but for the most part they just bumbled around randomly, changing direction whenever they hit a wall or a chair.

iRobot, the company that makes the most popular version (the Roomba), soon noticed that their customers would send their vacuum cleaners in for repair with names (Dustin Bieber being one of my favorites). Some Roomba owners would talk about their robot as though it were a pet. People who sent in malfunctioning devices would complain about the company’s generous policy to offer them a brand-new replacement, demanding that they instead fix “Meryl Sweep” and send her back. The fact that the Roombas roamed around on their own lent them a social presence that people’s traditional, handheld vacuum cleaners lacked. People decorated them, talked to them, and felt bad for them when they got tangled in the curtains.

Tech journalists reported on the Roomba’s effect, calling robovacs “the new pet craze.” A 2007 study found that many people had a social relationship with their Roombas and would describe them in terms that evoked people or animals. Today, over 80 percent of Roombas have names. I don’t have access to naming statistics for the handheld Dyson vacuum cleaner, but I’m pretty sure the number is lower.

Robots are entering our lives in many shapes and forms, and even some of the most simple or mechanical robots can prompt a visceral response. And the design of robots isn’t likely to shift away from evoking our biological reactions—especially because some robots are designed to mimic lifelike movement on purpose.

Excerpted from THE NEW BREED: What Our History with Animals Reveals about Our Future with Robots by Kate Darling. Published by Henry Holt and Company. Copyright © 2021 by Kate Darling. All rights reserved.

Kate’s book is available today from Annie Bloom’s Books in SW Portland, Oregon. It’s also available from Powell’s Books, and if you don’t have the good fortune of living in Portland, you can find it in both print and digital formats pretty much everywhere else books are sold.

As for Robovie, the claustrophobic robot that kept getting shoved in a closet, we recently checked in with Peter Kahn, the researcher who created the experiment nearly a decade ago, to make sure that the poor robot ended up okay. “Robovie is doing well,” Kahn told us. “He visited my lab on 2-3 other occasions and participated in other experiments. Now he’s back in Japan with the person who helped make him, and who cares a lot about him.” That person is Takayuki Kanda at ATR, who we’re happy to report is still working with Robovie in the context of human-robot interaction. Thanks, Robovie!

This paper studies a defense approach against one or more swarms of adversarial agents. In our earlier work, we employed a closed formation (“StringNet”) of defending agents (defenders) around a swarm of adversarial agents (attackers) to confine their motion within given bounds and guide them to a safe area. The adversarial agents were assumed to remain close enough to each other, i.e., within a prescribed connectivity region. To handle situations when the attackers no longer stay within such a connectivity region, but rather split into smaller swarms (clusters) to maximize the chance or impact of attack, this paper proposes an approach to identify the attacking sub-swarms and reassign defenders toward the attackers. We use the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm to identify the spatially distributed swarms of the attackers. Then, the defenders are assigned to each identified swarm of attackers by solving a constrained generalized assignment problem. We also provide conditions under which the defenders can successfully herd all the attackers. The efficacy of the approach is demonstrated via computer simulations, as well as hardware experiments with a fleet of quadrotors.
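To make the pipeline concrete, here is a minimal Python sketch, under our own simplifying assumptions, of the two steps the abstract describes: DBSCAN to find attacker clusters, then an assignment of defender groups to those clusters. The paper solves a constrained generalized assignment problem; for illustration we substitute a plain Hungarian assignment on cluster centroids, and all positions and parameters below are made up.

```python
# Minimal sketch (not the paper's implementation): cluster attacker positions
# with DBSCAN, then assign defender groups to attacker clusters by proximity.
# The paper uses a constrained generalized assignment problem; a plain
# Hungarian assignment on cluster centroids stands in here for illustration.
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
attackers = np.vstack([rng.normal(c, 0.5, size=(10, 2))
                       for c in [(0, 0), (8, 3), (3, 9)]])   # three sub-swarms
defenders = rng.uniform(0, 10, size=(9, 2))                  # hypothetical defenders

labels = DBSCAN(eps=1.5, min_samples=3).fit_predict(attackers)
clusters = [attackers[labels == k] for k in sorted(set(labels)) if k != -1]
centroids = np.array([c.mean(axis=0) for c in clusters])

# Split defenders into as many groups as clusters, then match each group to
# the nearest centroid (cost = squared distance from group mean to centroid).
groups = np.array_split(np.arange(len(defenders)), len(clusters))
cost = np.array([[np.sum((defenders[g].mean(axis=0) - m) ** 2) for m in centroids]
                 for g in groups])
for g, k in zip(*linear_sum_assignment(cost)):
    print(f"defender group {g} -> attacker cluster {k} at {centroids[k].round(2)}")
```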

Earlier today, at about 11am Mars time, the Ingenuity Mars Helicopter successfully completed its very first flight on Mars. The little helicopter, which is about the size of a box of tissues, did exactly what it was supposed to do, ascending vertically to 3 meters, hovering for 30 seconds, pivoting towards the Perseverance rover, and then landing again, for a total flight time of about 40 seconds.

With this flight, Ingenuity’s mission is officially a success, opening up the skies of Mars to autonomous robots that can explore farther, faster than ever before.

What data has the helicopter sent back to Earth so far?

The first data products to make it back confirmed that Ingenuity is safe and healthy, which was the most important thing. As far as the actual flight went, the helicopter initially sent back confirmations of each of its flight phases, including an altimeter plot, showing that it started its mission on the ground, ascended, hovered, descended, and ended its flight in good enough shape to transmit back to Earth via Perseverance as a relay. 

Screenshot: NASA TV. Data showing the flight trajectory from Ingenuity.

We’ve also seen the first picture from Ingenuity’s downward-facing navigation camera, along with a few frames of animation from Perseverance showing the flight itself.

Screen capture: NASA TV. Ingenuity’s first flight as seen from the Perseverance rover, about 100 meters away. These are still frames that are stitched together to make a video, which is why the flight looks short.

When will there be more pictures and video?

More data should be arriving back at Earth over the course of the day today.

Wait, wasn’t this supposed to have happened a week ago?

The first flight attempt was originally scheduled for April 12, but on April 9, a high-speed spin test revealed a command sequencing issue that JPL needed some extra time to diagnose and work around. The resulting fix worked 85% of the time and failed safely when it didn’t, which was good enough for the attempt today.

What does Ingenuity do next?

The clock is ticking on Ingenuity’s 30-day mission window, so there will be a lot more happening over the next few weeks. Here’s JPL’s tentative plan for the next several flights:

Flight Test No. 2 could be expanded to include climbing to 16 feet (5 meters) and then flying horizontally for a few feet (meters), flying horizontally back to descend, and landing within the airfield. Total flight time could be up to 90 seconds. Images from the helicopter’s navigation camera will later be used by project team members on Earth to evaluate the helicopter’s navigation performance.

If the second experimental test flight is a success, the goals of Flight Test No. 3 could be expanded to test the helicopter’s ability to fly farther and faster–up to 160 feet (50 meters) from the airfield and then return. Total flight time could be up to 90 seconds.

If the project timeline allows for Flight Tests No. 4 and 5, the goals and flight plans will be based on data returned from the first three tests. The flights could further explore Ingenuity’s aerial capabilities, including flying at a time of day where higher winds are expected and traveling farther downrange with more changes in altitude, heading, and airspeed.

Photo: NASA/JPL-Caltech/ASU. A photo of Ingenuity taken by Perseverance after the helicopter's pre-flight rotor spin test.

[ Mars 2020 ]

Cranes are widely used in construction, logistics, and manufacturing. Cranes that use wire ropes as the main lifting mechanism suffer from the swaying of heavy payloads, which seriously restricts working efficiency and can even cause accidents. Compared with the single-pendulum crane, the double-pendulum crane model has stronger nonlinearity, and its controller design is challenging. In this paper, cranes with a double-pendulum effect are considered, and their nonlinear dynamical models are established. Then, a controller based on an adaptive radial basis function (RBF) neural network compensation method is designed, and a stability analysis is also presented. Finally, hardware-in-the-loop experimental results show that the neural network compensation can effectively improve the control performance of the controller in practice.
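To give a flavor of what “RBF neural network compensation” means in a control loop, here is a minimal Python sketch under our own assumptions (a 1-DOF plant, illustrative gains, RBF centers, widths, and learning rate). It is not the controller or stability analysis from the paper, just the generic pattern of adding an online-adapted RBF term to a baseline PD law.

```python
# Minimal sketch, not the paper's controller: an RBF network approximates an
# unknown disturbance online, and its output is added to a baseline PD law.
# Gains, RBF centers/widths, and the learning rate are illustrative assumptions.
import numpy as np

class RBFCompensator:
    def __init__(self, centers, width, lr=0.05):
        self.centers = np.asarray(centers)   # RBF centers over the state space
        self.width = width                   # shared Gaussian width
        self.weights = np.zeros(len(centers))
        self.lr = lr                         # adaptation (learning) rate

    def basis(self, x):
        d = np.linalg.norm(self.centers - x, axis=1)
        return np.exp(-(d / self.width) ** 2)

    def output(self, x):
        return self.weights @ self.basis(x)

    def adapt(self, x, tracking_error):
        # Gradient-style weight update driven by the tracking error.
        self.weights += self.lr * tracking_error * self.basis(x)

# Baseline PD control plus learned compensation for a 1-DOF example.
kp, kd = 8.0, 3.0
rbf = RBFCompensator(centers=np.linspace(-1, 1, 11).reshape(-1, 1), width=0.3)

def control(pos, vel, pos_ref, vel_ref):
    e, de = pos_ref - pos, vel_ref - vel
    x = np.array([pos])
    u = kp * e + kd * de + rbf.output(x)   # PD term + RBF compensation
    rbf.adapt(x, e)                        # update weights online
    return u
```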

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRA 2021 – May 30-June 5, 2021 – [Online Event]
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

Researchers from the Biorobotics Lab in the School of Computer Science’s Robotics Institute at Carnegie Mellon University tested the hardened underwater modular robot snake (HUMRS) last month in the pool, diving the robot through underwater hoops, showing off its precise and smooth swimming, and demonstrating its ease of control.

The robot's modular design allows it to adapt to different tasks, whether squeezing through tight spaces under rubble, climbing up a tree or slithering around a corner underwater. For the underwater robot snake, the team used existing watertight modules that allow the robot to operate in bad conditions. They then added new modules containing the turbines and thrusters needed to maneuver the robot underwater.

[ CMU ]

Robots are learning how not to fall over after stepping on your foot and kicking you in the shin.

[ B-Human ]

Like boot prints on the Moon, NASA's OSIRIS-REx spacecraft left its mark on asteroid Bennu. Now, new images—taken during the spacecraft's final fly-over on April 7, 2021—reveal the aftermath of the historic Touch-and-Go (TAG) sample acquisition event from Oct. 20, 2020.

[ NASA ]

In recognition of National Robotics Week, Conan O'Brien thanks one of the robots that works for him.

[ YouTube ]

The latest from Wandercraft's self-balancing Atalante exo.

[ Wandercraft ]

Stocking supermarket shelves is one of those things that's much more difficult than it looks for robots, involving in-hand manipulation, motion planning, vision, and tactile sensing. Easy for humans, but robots are getting better.

[ Article ]

Thanks Marco!

Draganfly drone spraying Varigard disinfectant at the Smoothie King stadium. Our drone sanitization spraying technology is up to 100% more efficient and effective than conventional manual spray sterilization processes.

[ Draganfly ]

Baubot is a mobile construction robot that can do pretty much everything, apparently.

I’m pretty skeptical of robots like these; especially ones that bill themselves as platforms that can be monetized by third-party developers. From what we've seen, the most successful robots instead focus on doing one thing very well.

[ Baubot ]

In this demo, a remote operator sends an unmanned ground vehicle on an autonomous inspection mission via Clearpath’s web-based Outdoor Navigation Software.

[ Clearpath ]

Aurora’s Odysseus aircraft is a high-altitude pseudo-satellite that can change how we use the sky. At a fraction of the cost of a satellite and powered by the sun, Odysseus offers vast new possibilities for those who need to stay connected and informed.

[ Aurora ]

This video from 1999 discusses the soccer robot research activities at Carnegie Mellon University. CMUnited, the team of robots developed by Manuela Veloso and her students, won the small-size competition in both 1997 and 1998.

[ CMU ]

Thanks Fan!

This video presents an overview of our participation in the DARPA Subterranean Challenge, with a focus on the Urban Circuit, which took place Feb. 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ Norlab ]

In today’s most advanced warehouses, Magazino’s autonomous robot TORU works side by side with human colleagues. The robot is specialized in picking, transporting, and stowing objects like shoe boxes in e-commerce warehouses.

[ Magazino ]

A look at the Control Systems Lab at the National Technical University of Athens.

[ CSL ]

Thanks Fan!

Doug Weber of MechE and the Neuroscience Institute discusses his group’s research on harnessing the nervous system's ability to control not only our bodies, but the machines and prostheses that can enhance our bodies, especially for those with disabilities.

[ CMU ]

Mark Yim, Director of the GRASP Lab at UPenn, gives a talk on “Is Cost Effective Robotics Interesting?” Yes, yes it is.

Robotic technologies have shown the capability to do amazing things. But many of those things are too expensive to be useful in any real sense. Cost reduction has often been shunned by research engineers and scientists in academia as “just engineering.” For robotics to make a larger impact on society the cost problem must be addressed.

[ CMU ]

There are all kinds of “killer robots” debates going on, but if you want an informed, grounded, nuanced take on AI and the future of war-fighting, you want to be watching debates like these instead. Professor Rebecca Crootof speaks with Brigadier General Patrick Huston, Assistant Judge Advocate General for Military Law and Operations, at Duke Law School's 26th Annual National Security Law conference.

[ Lawfire ]

This week’s Lockheed Martin Robotics Seminar is by Julie Adams from Oregon State, on “Human-Collective Teams: Algorithms, Transparency, and Resilience.”

Biological inspiration for artificial systems abounds. The science to support robotic collectives continues to emerge based on their biological inspirations, spatial swarms (e.g., fish and starlings) and colonies (e.g., honeybees and ants). Developing effective human-collective teams requires focusing on all aspects of the integrated system development. Many of these fundamental aspects have been developed independently, but our focus is an integrated development process to these complex research questions. This presentation will focus on three aspects: algorithms, transparency, and resilience for collectives.

[ UMD ]

Human-robot interaction goes both ways. You’ve got robots understanding (or attempting to understand) humans, as well as humans understanding (or attempting to understand) robots. Humans, in my experience, are virtually impossible to understand even under the best of circumstances. But going the other way, robots have all kinds of communication tools at their disposal. Lights, sounds, screens, haptics—there are lots of options. That doesn’t mean that robot to human (RtH) communication is easy, though, because the ideal communication modality is something that is low cost and low complexity while also being understandable to almost anyone.

One good option for something like a collaborative robot arm can be to use human-inspired gestures (since it doesn’t require any additional hardware), although it’s important to be careful when you start having robots doing human stuff, because it can set unreasonable expectations if people think of the robot in human terms. In order to get around this, roboticists from Aachen University are experimenting with animal-like gestures for cobots instead, modeled after the behavior of puppies. Puppies!

For robots that are low-cost and appearance-constrained, animal-inspired (zoomorphic) gestures can be highly effective at state communication. We know this because of tails on Roombas:

While this is an adorable experiment, adding tails to industrial cobots is probably not going to happen. That’s too bad, because humans have an intuitive understanding of dog gestures, and this extends even to people who aren’t dog owners. But tails aren’t necessary for something to display dog gestures; it turns out that you can do it with a standard robot arm:

In a recent preprint in IEEE Robotics and Automation Letters (RA-L), first author Vanessa Sauer used puppies to inspire a series of communicative gestures for a Franka Emika Panda arm. Specifically, the arm was to be used in a collaborative assembly task, and needed to communicate five states to the human user, including greeting the user, prompting the user to take a part, waiting for a new command, an error condition when a container was empty of parts, and then shutting down. From the paper:

For each use case, we mirrored the intention of the robot (e.g., prompting the user to take a part) to an intention, a dog may have (e.g., encouraging the owner to play). In a second step, we collected gestures that dogs use to express the respective intention by leveraging real-life interaction with dogs, online videos, and literature. We then translated the dog gestures into three distinct zoomorphic gestures by jointly applying the following guidelines inspired by:

  • Mimicry. We mimic specific dog behavior and body language to communicate robot states.
  • Exploiting structural similarities. Although the cobot is functionally designed, we exploit certain components to make the gestures more “dog-like,” e.g., the camera corresponds to the dog’s eyes, or the end-effector corresponds to the dog’s snout.
  • Natural flow. We use kinesthetic teaching and record a full trajectory to allow natural and flowing movements with increased animacy.

A user study comparing the zoomorphic gestures to a more conventional light display for state communication during the assembly task showed that the zoomorphic gestures were easily recognized by participants as dog-like, even if the participants weren’t dog people. And the zoomorphic gestures were also more intuitively understood than the light displays, although the classification of each gesture wasn’t perfect. People also preferred the zoomorphic gestures over more abstract gestures designed to communicate the same concept. Or as the paper puts it, “Zoomorphic gestures are significantly more attractive and intuitive and provide more joy when using.” An online version of the study is here, so give it a try and provide yourself with some joy.

While zoomorphic gestures (at least in this very preliminary research) aren’t nearly as accurate at state communication as using something like a screen, they’re appealing because they’re compelling, easy to understand, inexpensive to implement, and less restrictive than sounds or screens. And there’s no reason why you can’t use both!

For a few more details, we spoke with the first author on this paper, Vanessa Sauer. 

IEEE Spectrum: Where did you get the idea for this research from, and why do you think it hasn't been more widely studied or applied in the context of practical cobots?

Vanessa Sauer: I'm a total dog person. During a conversation about dogs and how their ways of communicating with their owners have evolved over time (e.g., more expressive face, easy to understand even without owning a dog), I got the rough idea for my research. I was curious to see if this intuitive understanding many people have of dog behavior could also be applied to cobots that communicate in a similar way. Especially in social robotics, approaches utilizing zoomorphic gestures have been explored. I guess due to the playful nature, less research and applications have been done in the context of industry robots, as they often have a stronger focus on efficiency.

How complex of a concept can be communicated in this way?

In our “proof-of-concept” style approach, we used rather basic robot states to be communicated. The challenge with more complex robot states would be to find intuitive parallels in dog behavior. Nonetheless, I believe that more complex states can also be communicated with dog-inspired gestures.

How would you like to see your research be put into practice?

I would enjoy seeing zoomorphic gestures offered as modality-option on cobots, especially cobots used in industry. I think that could have the potential to reduce inhibitions towards collaborating with robots and make the interaction more fun.

Photos: Robots: Franka Emika; Dogs: iStockphoto

“Zoomorphic Gestures for Communicating Cobot States,” by Vanessa Sauer, Axel Sauer, and Alexander Mertens from Aachen University and TUM, will be published in RA-L.

In comparison to field crops such as cereals, cotton, hay and grain, specialty crops often require more resources, are usually more sensitive to sudden changes in growth conditions and are known to produce higher value products. Providing quality and quantity assessment of specialty crops during harvesting is crucial for securing higher returns and improving management practices. Technical advancements in computer and machine vision have improved the detection, quality assessment and yield estimation processes for various fruit crops, but similar methods capable of exporting a detailed yield map for vegetable crops have yet to be fully developed. A machine vision-based yield monitor was designed to perform size categorization and continuous counting of shallots in situ during the harvesting process. Coupled with software developed in Python, the system is composed of a video logger and a global navigation satellite system. Computer vision analysis is performed within the tractor while an RGB camera collects real-time video data of the crops under natural sunlight conditions. Vegetables are first segmented using Watershed segmentation, detected on the conveyor, and then classified by size. The system detected shallots in a subsample of the dataset with a precision of 76%. The software was also evaluated on its ability to classify the shallots into three size categories. The best performance was achieved in the large class (73%), followed by the small class (59%) and medium class (44%). Based on these results, the occasional occlusion of vegetables and inconsistent lighting conditions were the main factors that hindered performance. Although further enhancements are envisioned for the prototype system, its modular and novel design permits the mapping of a selection of other horticultural crops. Moreover, it has the potential to benefit many producers of small vegetable crops by providing them with useful harvest information in real-time.
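For readers who want to see what a watershed-based detection and sizing step can look like, here is a generic OpenCV sketch in Python. It is not the authors’ software; the thresholds, kernel sizes, and pixel-area size classes are illustrative assumptions.

```python
# Generic OpenCV watershed pipeline of the kind described above (not the
# authors' software); thresholds and size classes are illustrative assumptions.
import cv2
import numpy as np

def segment_and_size(frame_bgr, large_px=4000, medium_px=2000):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Build sure-background / sure-foreground regions for watershed markers.
    kernel = np.ones((3, 3), np.uint8)
    sure_bg = cv2.dilate(mask, kernel, iterations=3)
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)

    # Label markers and run watershed to split touching vegetables.
    n_markers, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1
    markers[unknown == 255] = 0
    markers = cv2.watershed(frame_bgr, markers)

    sizes = []
    for label in range(2, n_markers + 1):        # skip background/boundary labels
        area = int(np.sum(markers == label))
        if area == 0:
            continue
        cls = "large" if area > large_px else "medium" if area > medium_px else "small"
        sizes.append((label, area, cls))
    return sizes
```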

Medical training simulators have the potential to provide remote and automated assessment of skill vital for medical training. Consequently, there is a need to develop “smart” training devices with robust metrics that can quantify clinical skills for effective training and self-assessment. Recently, metrics that quantify motion smoothness such as log dimensionless jerk (LDLJ) and spectral arc length (SPARC) are increasingly being applied in medical simulators. However, two key questions remain about the efficacy of such metrics: how do these metrics relate to clinical skill, and how to best compute these metrics from sensor data and relate them with similar metrics? This study addresses these questions in the context of hemodialysis cannulation by enrolling 52 clinicians who performed cannulation in a simulated arteriovenous (AV) fistula. For clinical skill, results demonstrate that the objective outcome metric flash ratio (FR), developed to measure the quality of task completion, outperformed traditional skill indicator metrics (years of experience and global rating sheet scores). For computing motion smoothness metrics for skill assessment, we observed that the lowest amount of smoothing could result in unreliable metrics. Furthermore, the relative efficacy of motion smoothness metrics when compared with other process metrics in correlating with skill was similar for FR, the most accurate measure of skill. These results provide guidance for the computation and use of motion-based metrics for clinical skill assessment, including utilizing objective outcome metrics as ideal measures for quantifying skill.
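As a point of reference for the smoothness metrics mentioned above, the sketch below computes a velocity-based log dimensionless jerk (LDLJ) in the commonly used form from the motor-control literature. It is a generic illustration, not the study’s exact pipeline, and the sampling rate and test signals are assumptions.

```python
# Generic velocity-based log dimensionless jerk (LDLJ); an illustrative sketch,
# not the study's pipeline. fs is an assumed sampling rate in Hz.
import numpy as np

def ldlj_from_velocity(velocity, fs):
    v = np.asarray(velocity, dtype=float)
    dt = 1.0 / fs
    duration = (len(v) - 1) * dt
    v_peak = np.max(np.abs(v))
    jerk = np.gradient(np.gradient(v, dt), dt)   # second derivative of velocity
    integral = np.sum(jerk ** 2) * dt
    # Less negative LDLJ corresponds to a smoother movement.
    return -np.log((duration ** 3 / v_peak ** 2) * integral)

# Example: a smooth, bell-shaped speed profile vs. the same profile with jitter.
t = np.arange(200) / 200.0
smooth = 30 * t ** 2 * (1 - t) ** 2
jittery = smooth + 0.05 * np.sin(2 * np.pi * 25 * t)
print(ldlj_from_velocity(smooth, fs=200), ldlj_from_velocity(jittery, fs=200))
```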

Strong adhesion between hydrogels and various engineering surfaces has been achieved; yet, achieving fatigue-resistant hydrogel adhesion remains challenging. Here, we examine the fatigue of a specific type of hydrogel adhesion enabled by hydrogen bonds and wrinkling and show that the physical interactions–based hydrogel adhesion can resist fatigue damage. We synthesize polyacrylamide hydrogel as the adherend and poly(acrylic acid-co-acrylamide) hydrogel as the adhesive. The adherend and the adhesive interact via hydrogen bonds. We further introduce wrinkles at the interface by biaxially prestretching and then releasing the adherends, and perform butt-joint tests to probe the adhesion performance. Experimental results reveal that the samples with a wrinkled interface resist fatigue damage, while the samples with a flat interface fail within ~9,000 cycles at stress levels of 70 and 63% of the peak stress at static failure. The endurance limit of the wrinkled-interface samples is comparable to the peak stress of the flat-interface samples. Moreover, we find that the nearly perfectly elastic polyacrylamide hydrogel also suffers fatigue damage, which limits the fatigue life of the wrinkled-interface samples. When cohesive failure ensues, the elastic modulus of the wrinkled-interface samples and that of the hydrogel bulk evolve in a similar way, both in satisfactory agreement with the predictions of damage accumulation theory. We observe similar behaviors in different material systems with polyacrylamide hydrogels of different water contents. This work proves that physical interactions can be engaged in engineering fatigue-resistant adhesion between soft materials such as hydrogels.
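For orientation, “damage accumulation theory” is often summarized by the linear (Miner-type) rule, in which failure is predicted once the summed cycle fractions reach one. The sketch below shows that textbook form with made-up numbers; it is not the specific model, S-N data, or fitting used in the paper.

```python
# Generic Miner-type linear damage accumulation, for orientation only; the
# S-N curve parameters and load blocks below are made-up illustrations, not
# values from the paper.
def cycles_to_failure(stress_frac, endurance_frac=0.6, exponent=8.0, scale=1e3):
    """Cycles to failure at a stress given as a fraction of the static peak stress.
    Below the assumed endurance fraction, life is treated as effectively infinite."""
    if stress_frac <= endurance_frac:
        return float("inf")
    return scale * stress_frac ** (-exponent)   # toy power-law S-N curve

# Two load blocks, e.g. some cycles at 70% and some at 63% of peak stress.
blocks = [(0.70, 4000), (0.63, 3000)]           # (stress fraction, applied cycles)
damage = sum(n / cycles_to_failure(s) for s, n in blocks)
print(f"accumulated damage D = {damage:.2f} (failure predicted when D >= 1)")
```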

Conceptual knowledge about objects is essential for humans, as well as for animals, to interact with their environment. On this basis, the objects can be understood as tools, a selection process can be implemented and their usage can be planned in order to achieve a specific goal. The conceptual knowledge, in this case, is primarily concerned with the physical properties and functional properties observed in the objects. Similarly, tool-use applications in robotics require such conceptual knowledge about objects for substitute selection among other purposes. State-of-the-art methods employ a top-down approach where hand-crafted symbolic knowledge, which is defined from a human perspective, is grounded into sensory data afterwards. However, due to different sensing and acting capabilities of robots, a robot's conceptual understanding of objects (e.g., light/heavy) will vary and therefore should be generated from the robot's perspective entirely, which entails robot-centric conceptual knowledge about objects. A similar bottom-up argument has been put forth in cognitive science that humans and animals alike develop conceptual understanding of objects based on their own perceptual experiences with objects. With this goal in mind, we propose an extensible property estimation framework which consists of estimation methods to obtain the quantitative measurements of physical properties (rigidity, weight, etc.) and functional properties (containment, support, etc.) from household objects. This property estimation forms the basis for our second contribution: generation of robot-centric conceptual knowledge. Our approach employs unsupervised clustering methods to transform numerical property data into symbols, and Bivariate Joint Frequency Distributions and Sample Proportion to generate conceptual knowledge about objects using the robot-centric symbols. A preliminary implementation of the proposed framework is employed to acquire a dataset comprising six physical and four functional properties of 110 household objects. This Robot-Centric dataSet (RoCS) is used to evaluate the framework regarding the property estimation methods and the semantics of the considered properties within the dataset. Furthermore, the dataset includes the derived robot-centric conceptual knowledge using the proposed framework. The application of the conceptual knowledge about objects is then evaluated by examining its usefulness in a tool substitution scenario.
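To illustrate the “numerical property data into symbols” step mentioned in the abstract, here is a small Python sketch, not the RoCS framework itself: each measured property is clustered and the ordered clusters serve as robot-centric symbols. The property (weight), the measurements, the number of clusters, and the symbol names are all assumptions made for the example.

```python
# Illustrative sketch of the "numeric property -> symbol" step, not the RoCS
# framework itself: cluster one measured property with k-means and map the
# ordered clusters to robot-centric symbols. All numbers here are made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
weights_kg = np.concatenate([rng.normal(0.2, 0.05, 40),   # lighter household objects
                             rng.normal(1.5, 0.30, 40)])  # heavier household objects

def to_symbols(values, labels=("light", "heavy")):
    km = KMeans(n_clusters=len(labels), n_init=10, random_state=0)
    cluster = km.fit_predict(values.reshape(-1, 1))
    # Order clusters by their center so symbols follow the physical scale.
    order = np.argsort(km.cluster_centers_.ravel())
    symbol_of_cluster = {c: labels[rank] for rank, c in enumerate(order)}
    return [symbol_of_cluster[c] for c in cluster]

symbols = to_symbols(weights_kg)
print(symbols[:3], symbols[-3:])   # e.g. ['light', 'light', 'light'] ['heavy', ...]
```

From symbols like these, per-object joint frequencies over property pairs can then be tallied to form the kind of conceptual knowledge the abstract describes.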

Today at ProMat, a company called Pickle Robots is announcing Dill, a robot that can unload boxes from the back of a trailer at places like ecommerce fulfillment warehouses at very high speeds. With a peak box unloading rate of 1800 boxes per hour and a payload of up to 25 kg, Dill can substantially outperform even an expert human, and it can keep going pretty much forever as long as you have it plugged into the wall. 

Pickle Robots says that Dill’s approach to the box unloading task is unique in a couple of ways. First, it can handle messy trailers filled with a jumble of boxes of different shapes, colors, sizes, and weights. And second, from the get-go it’s intended to work under human supervision, relying on people to step in and handle edge cases.

Pickle’s “Dill” robot is based around a Kuka arm with up to 30 kg of payload. It uses two Intel L515s (Lidar-based RGB-D cameras) for box detection. The system is mounted on a wheeled base, and after getting positioned at the back of a trailer by a human operator, it’ll crawl forward by itself as it picks its way into the trailer. We’re told that the rate at which the robot can shift boxes averages 1600 per hour, with a peak speed closer to 1800 boxes per hour. A single human in top form can move about 800 boxes per hour, so Dill is very, very fast. In the video, you can see the robot slow down on some packages, and Pickle CEO Andrew Meyer says that’s because “we probably have a tenuous grasp on that package. As we continue to improve the gripper, we will be able to keep the speed up on more cycles.”

While the video shows Dill operating at speed autonomously, the company says it’s designed to function under human supervision. From the press release: “To maintain these speeds, Dill needs people to supervise the operation and lend an occasional helping hand, stepping in every so often to pick up any dropped packages and handle irregular items.” Typically, Meyer says, that means one person for every five robots, depending on the use case, although if you have only one robot, it’ll still require someone to keep an eye on it. A supervisor is not occupied with the task full-time, to be clear. They can also be doing something else while the robot works—although the longer a human takes to respond to issues the robot may have, the slower its effective speed will be. Typically, the company says, a human will need to help out the robot once every five minutes when it’s doing something particularly complex. But even in situations with lots of hard-to-handle boxes resulting in relatively low efficiency, Meyer says that users can expect speeds exceeding 1000 boxes per hour.
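As a rough sanity check on those numbers, here is a back-of-the-envelope Python sketch. The 1600 boxes-per-hour average and the once-every-five-minutes intervention figure come from the article; the one-minute human response time is our own assumption, not Pickle’s.

```python
# Back-of-the-envelope effective throughput under supervision: the robot picks
# at its nominal rate except while it waits for a human to respond.
nominal_rate = 1600          # boxes per hour while running (from the article)
interventions_per_hour = 12  # one stoppage every five minutes (from the article)
response_time_s = 60         # assumed: human takes about a minute to step in

downtime_fraction = interventions_per_hour * response_time_s / 3600.0
effective_rate = nominal_rate * (1 - downtime_fraction)
print(f"effective rate ~ {effective_rate:.0f} boxes/hour")   # ~ 1280 boxes/hour
```

Even with that fairly pessimistic response time, the effective rate lands around 1,280 boxes per hour, consistent with the “exceeding 1000 boxes per hour” figure quoted above.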

Photo: Pickle Robots. Pickle Robots’ gripper, which includes a high-contact-area suction system and a retractable plate to help the robot quickly flip boxes.

From Pickle Robots’ video, it’s fairly obvious that the comparison that Pickle wants you to make is to Boston Dynamics’ Stretch robot, which has a peak box moving rate of 800 boxes per hour. Yes, Pickle’s robot is twice as fast. But it’s also a unitasker, designed to unload boxes from trucks, and that’s it. Focusing on a very specific problem is a good approach for robots, because then you can design a robot that does an excellent job of solving that problem, which is what Pickle has done. Boston Dynamics has chosen a different route with Stretch, which is to build a robot that has the potential to do many other warehouse tasks, although not nearly as optimally.

The other big difference between Boston Dynamics and Pickle is, of course, that Boston Dynamics is focusing on autonomy. Meanwhile, Pickle, Meyer says in a press release, “resisted the fool’s errand of trying to create a system that could work entirely unsupervised.” Personally, I disagree that trying to create a system that could work entirely unsupervised is a fool’s errand. Approaching practical commercial robotics (in any context) from a perspective of requiring complete unsupervised autonomy is generally not practical right now outside of highly structured environments. But many companies do have goals that include unsupervised operation while still acknowledging that occasionally their robots will need a human to step in and help. In fact, these companies are (generally) doing exactly what Pickle is doing in practice: they’re deploying robots with the goal of fully unsupervised autonomy, while keeping humans available as they work their way towards that goal. The difference, perhaps, is philosophical—some companies see unsupervised operation as the future of robotics in these specific contexts, while Pickle does not. We asked Meyer about why this is. He replied:

Some problems are hardware-related and not likely to yield an automated solution anytime soon. For example, the gripper is physically incapable of grasping some objects, like car tires, no matter what intelligence the robot has. A part might start to wear out, like a spring on the gripper, and the gripper can behave unpredictably. Things can be too heavy. A sensor might get knocked out of place, dust might get on the camera lens. Or an already damaged package falls apart when you pick it up, and dumps its contents on the ground.

Other problems can go away over time as the algorithms learn and the engineers innovate in small ways. For example, learning not to pick packages that will cause a bunch more to fall down, learning to approach boxes in the corner from the side, or—and this was a real issue in production for a couple days—learning to avoid picking directly on labels where they might peel off from suction.

Machine learning algorithms, on both the perception and action sides of the story, are critical ingredients for making any of this work. However, even with them your engineering team still has to do a lot of problem solving wherever the AI is struggling. At some point you run out of engineering resources to solve all these problems in the long tail. When we talk about problems that require AI algorithms as capable as people are, we mean ones where the target on the reliability curve (99.99999% in the case of self driving, for example) is out of reach in this way. I think the big lesson from self-driving cars is that chasing that long tail of edge cases is really, really hard. We realized that in the loading dock, you can still deliver tremendous value to the customer even if you assume you can only handle 98% of the cases.  

These long-tail problems are everywhere in robotics, but again, some people believe that levels of reliability that are usable for unsupervised operation (at least in some specific contexts) are more near-term achievable than others do. In Pickle’s case, emphasizing human supervision means that they may be able to deploy faster and more reliably and at lower cost and with higher performance—we’ll just have to see how long it takes for other companies to come through with robots that are able to do the same tasks without human supervision.

Photo: Pickle Robots. Pickle Robots is also working on other high-speed package sorting systems.

We asked Meyer how much Dill costs, and to our surprise, he gave us a candid answer: Depending on the configuration, the system can cost anywhere from $50-100k to deploy and about that same amount per year to operate. Meyer points out that you can’t really compare the robot to a human (or humans) simply on speed, since with the robot, you don’t have to worry about injuries or improper sorting of packages or training or turnover. While Pickle is currently working on several other configurations of robots for package handling, this particular truck unloading configuration will be shipping to customers next year.

Today at ProMat, a company called Pickle Robots is announcing Dill, a robot that can unload boxes from the back of a trailer at places like ecommerce fulfillment warehouses at very high speeds. With a peak box unloading rate of 1800 boxes per hour and a payload of up to 25 kg, Dill can substantially outperform even an expert human, and it can keep going pretty much forever as long as you have it plugged into the wall. 

Pickle Robots says that Dill’s approach to the box unloading task is unique in a couple of ways. First, it can handle messy trailers filled with a jumble of boxes of different shapes, colors, sizes, and weights. And second, from the get-go it’s intended to work under human supervision, relying on people to step in and handle edge cases.

Pickle’s “Dill” robot is built around a Kuka arm with up to 30 kg of payload, and it uses two Intel RealSense L515s (lidar-based RGB-D cameras) for box detection. The system is mounted on a wheeled base, and after a human operator positions it at the back of a trailer, it crawls forward by itself as it picks its way deeper inside. We’re told that the robot averages 1,600 boxes per hour, with a peak speed closer to 1,800 boxes per hour. A single human in top form can move about 800 boxes per hour, so Dill is very, very fast. In the video, you can see the robot slow down on some packages, and Pickle CEO Andrew Meyer says that’s because “we probably have a tenuous grasp on that package. As we continue to improve the gripper, we will be able to keep the speed up on more cycles.”
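
Pickle hasn’t shared details of its perception stack (beyond Meyer’s comment below that machine learning is central on the perception side), but as a generic illustration of turning an RGB-D scan into candidate box faces, here’s a plane-segmentation sketch in Open3D. This is not Pickle’s actual pipeline; the file name and every threshold are assumptions.

```python
# Illustrative only: a generic plane-segmentation pass for pulling candidate box
# faces out of a depth point cloud. This is NOT Pickle's perception pipeline,
# and "trailer_scan.pcd" is a hypothetical file standing in for a fused scan
# from the two L515 cameras.
import open3d as o3d

cloud = o3d.io.read_point_cloud("trailer_scan.pcd")
cloud = cloud.voxel_down_sample(voxel_size=0.01)        # 1 cm voxels

faces = []
for _ in range(10):                                     # peel off up to 10 planes
    if len(cloud.points) < 200:
        break
    _, inliers = cloud.segment_plane(distance_threshold=0.01,
                                     ransac_n=3, num_iterations=500)
    if len(inliers) < 200:                              # too small to be a box face
        break
    faces.append(cloud.select_by_index(inliers))
    cloud = cloud.select_by_index(inliers, invert=True)

print(f"found {len(faces)} candidate box faces")
```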

While the video shows Dill operating at speed autonomously, the company says it’s designed to function under human supervision. From the press release: “To maintain these speeds, Dill needs people to supervise the operation and lend an occasional helping hand, stepping in every so often to pick up any dropped packages and handle irregular items.” Typically, Meyer says, that means one person for every five robots, depending on the use case, although even a single robot still needs someone keeping an eye on it. To be clear, the supervisor isn’t occupied with the task full-time and can be doing something else while the robot works, but the longer a human takes to respond to issues the robot runs into, the slower its effective speed will be. Typically, the company says, a human will need to help out the robot once every five minutes when it’s doing something particularly complex. But even in situations where lots of hard-to-handle boxes drag efficiency down, Meyer says that users can expect speeds exceeding 1,000 boxes per hour.
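
To make that relationship concrete, here’s a back-of-the-envelope model of how supervisor response time eats into throughput. It’s a sketch with made-up numbers, not Pickle’s math: the five-minute intervention interval comes from the article, while the response times are purely assumptions.

```python
def effective_rate(peak_rate_per_hour, seconds_between_interventions, response_time_s):
    # The robot picks at its peak rate until it needs help, then idles while it
    # waits for the supervisor; effective throughput is the peak rate scaled by
    # the fraction of each cycle spent actually picking.
    cycle_s = seconds_between_interventions + response_time_s
    return peak_rate_per_hour * seconds_between_interventions / cycle_s

# 1,600 boxes/hour average, an intervention every 5 minutes, 30 s to respond (assumed):
print(round(effective_rate(1600, 300, 30)))   # ~1455 boxes/hour
# If the supervisor is busy elsewhere and takes 3 minutes to respond (assumed):
print(round(effective_rate(1600, 300, 180)))  # ~1000 boxes/hour
```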

Photo: Pickle Robots Pickle Robots’ gripper, which includes a high-contact-area suction system and a retractable plate to help the robot quickly flip boxes.

From Pickle Robots’ video, it’s fairly obvious that the comparison Pickle wants you to make is to Boston Dynamics’ Stretch robot, which has a peak box-moving rate of 800 boxes per hour. Yes, Pickle’s robot is twice as fast. But it’s also a unitasker: it unloads boxes from trucks, and that’s it. Focusing on a very specific problem is a good approach for robots, because you can then design a system that does an excellent job of solving that one problem, which is what Pickle has done. Boston Dynamics has chosen a different route with Stretch, building a robot that has the potential to take on many other warehouse tasks, although not nearly as optimally at any one of them.

The other big difference between Boston Dynamics and Pickle is, of course, that Boston Dynamics is focusing on autonomy. Pickle, meanwhile, “resisted the fool’s errand of trying to create a system that could work entirely unsupervised,” as Meyer puts it in a press release. Personally, I disagree that aiming for a system that can work entirely unsupervised is a fool’s errand. It’s true that requiring complete unsupervised autonomy from a practical commercial robot is generally not realistic right now outside of highly structured environments. But many companies have goals that include unsupervised operation while still acknowledging that their robots will occasionally need a human to step in and help. In practice, those companies are (generally) doing exactly what Pickle is doing: deploying robots with the goal of fully unsupervised autonomy while keeping humans available as they work their way toward that goal. The difference, perhaps, is philosophical: some companies see unsupervised operation as the future of robotics in these specific contexts, while Pickle does not. We asked Meyer why. He replied:

Some problems are hardware-related and not likely to yield an automated solution anytime soon. For example, the gripper is physically incapable of grasping some objects, like car tires, no matter what intelligence the robot has. A part might start to wear out, like a spring on the gripper, and the gripper can behave unpredictably. Things can be too heavy. A sensor might get knocked out of place, dust might get on the camera lens. Or an already damaged package falls apart when you pick it up, and dumps its contents on the ground.

Other problems can go away over time as the algorithms learn and the engineers innovate in small ways. For example, learning not to pick packages that will cause a bunch more to fall down, learning to approach boxes in the corner from the side, or—and this was a real issue in production for a couple days—learning to avoid picking directly on labels where they might peel off from suction.

Machine learning algorithms, on both the perception and action sides of the story, are critical ingredients for making any of this work. However, even with them, your engineering team still has to do a lot of problem solving wherever the AI is struggling. At some point you run out of engineering resources to solve all these problems in the long tail. When we talk about problems that require AI algorithms as capable as people are, we mean ones where the target on the reliability curve (99.99999% in the case of self-driving, for example) is out of reach in this way. I think the big lesson from self-driving cars is that chasing that long tail of edge cases is really, really hard. We realized that in the loading dock, you can still deliver tremendous value to the customer even if you assume you can only handle 98% of the cases.

These long-tail problems are everywhere in robotics, but again, some people believe that reliability levels good enough for unsupervised operation (at least in some specific contexts) are more achievable in the near term than others do. In Pickle’s case, emphasizing human supervision means the company may be able to deploy faster, more reliably, at lower cost, and with higher performance; we’ll just have to see how long it takes for other companies to come through with robots that can do the same tasks without human supervision.
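
For a rough sense of why 98 percent can be good enough at the loading dock while 99.99999 percent is demanded on the road, here’s a simple intervention-rate calculation. The per-pick numbers and the assumption that every failed pick needs a human touch are mine, not Pickle’s, and that assumption overstates the real burden.

```python
def interventions_per_hour(attempts_per_hour, success_rate):
    # Expected failures per hour, assuming (simplistically) that every failure
    # needs a human to step in and that failures are independent.
    return attempts_per_hour * (1.0 - success_rate)

# Unloading at ~1,600 picks/hour with 98% of picks handled autonomously:
print(interventions_per_hour(1600, 0.98))       # 32.0 -> roughly one every two minutes
# The self-driving-style target Meyer mentions, applied to the same workload:
print(interventions_per_hour(1600, 0.9999999))  # 0.00016 -> on the order of one per year
```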

Photo: Pickle Robots Pickle Robots is also working on other high-speed package-sorting systems.

We asked Meyer how much Dill costs, and to our surprise, he gave us a candid answer: Depending on the configuration, the system costs anywhere from $50,000 to $100,000 to deploy, and about the same amount per year to operate. Meyer points out that you can’t really compare the robot to a human (or humans) on speed alone, since with the robot you don’t have to worry about injuries, improper sorting of packages, training, or turnover. While Pickle is currently working on several other configurations of robots for package handling, this particular truck-unloading configuration will be shipping to customers next year.
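
As a very rough way to think about those figures, here’s a cost-per-box sketch. Everything beyond the quoted $50,000 to $100,000 range is an assumption on my part (shift length, sustained rate, and so on), so treat the output as an order-of-magnitude illustration, not an ROI claim.

```python
# All figures below except the quoted deploy/operate range are assumptions.
deploy_cost = 75_000              # one-time cost, midpoint of the quoted range
annual_operating_cost = 75_000    # "about that same amount per year"
sustained_boxes_per_hour = 1_000  # the article's low-efficiency figure
hours_per_year = 2_000            # assumed single-shift operation

year_one_cost = deploy_cost + annual_operating_cost
boxes_per_year = sustained_boxes_per_hour * hours_per_year
print(f"~${year_one_cost / boxes_per_year:.3f} per box in year one")  # ~$0.075
```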

The unprecedented shock caused by the COVID-19 pandemic has severely disrupted the delivery of regular healthcare services. Most non-urgent medical activities, including elective surgeries, have been paused to mitigate the risk of infection and to dedicate medical resources to managing the pandemic. As a result, not only surgeries but also pre- and post-operative assessment of patients and training for surgical procedures have been significantly affected. Many countries are planning a phased reopening that includes the resumption of some surgical procedures, but it is not clear how the reopening safe-practice guidelines will affect the quality of healthcare delivery. This perspective article evaluates the use of robotics and AI in 1) robotics-assisted surgery, 2) tele-examination of patients before and after surgery, and 3) tele-training for surgical procedures. Surgeons interact with a large number of staff and patients on a daily basis, so the risk of infection transmission between them raises concerns. Pre- and post-operative assessment raises similar concerns, particularly since many patients may have underlying conditions that increase their chances of mortality due to the virus. The pandemic has also limited the time and access that trainee surgeons have for training in the OR and/or in the presence of an expert. In this article, we describe existing challenges and possible solutions and suggest future research directions that may be relevant for robotics and AI in addressing the three tasks mentioned above.

The rise of rehabilitation robotics has ignited a global investigation into the human-machine interface (HMI) between device and user. Previous research on wearable robotics has primarily focused on robotic kinematics and controls but rarely on the actual design of the physical HMI (pHMI). This paper presents a data-driven statistical forearm surface model for designing a forearm orthosis in exoskeleton applications. The forearms of 6 subjects were 3D scanned in a custom-built jig to capture data in extreme pronation and supination poses, creating 3D point clouds of the forearm surface. The resulting data were characterized into a series of ellipses from 20 to 100% of the forearm length. Key ellipse parameters in the model include normalized major and minor axis lengths, normalized center-point location, tilt angle, and circularity ratio. Single-subject (SS) ellipse parameters were normalized with respect to forearm radiale-stylion (RS) length and circumference and then averaged over the 6 subjects. Averaged parameter profiles were fit with 3rd-order polynomials to create combined-subjects (CS) elliptical models of the forearm. CS models were created in the jig as-is (CS1) and after alignment to ellipse centers at 20 and 100% of the forearm length (CS2). Normalized curve fits of ellipse major and minor axes in model CS2 achieve R2 values ranging from 0.898 to 0.980, indicating a high degree of correlation between cross-sectional size and position along the forearm. Most other parameters showed poor correlation with forearm position (0.005 < R2 < 0.391), with the exception of tilt angle in pronation (0.877) and circularity in supination (0.657). Normalized RMSE of the CS2 ellipse-fit model ranged from 0.21 to 0.64% of forearm circumference and 0.22 to 0.46% of forearm length. The average and peak surface deviation between the scaled CS2 model and individual scans along the forearm varied from 0.56 to 2.86 mm (subject averages) and 3.86 to 7.16 mm (subject maximums), with the peak deviation occurring between 45 and 50% RS length. The developed equations allow reconstruction of a scalable 3D model that can be sized based on two user measures, RS length and forearm circumference, or based on generic arm measurements taken from existing anthropometric databases.
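
As a sketch of how such a model might be used downstream, the snippet below reconstructs elliptical cross-sections along the forearm from normalized axis-length polynomials and scales them by the two user measurements the paper calls out. The polynomial coefficients here are placeholders invented for illustration, not the fitted CS2 values, which are not reproduced in the abstract.

```python
import numpy as np

# Placeholder 3rd-order polynomials in normalized forearm position s (0.2 to 1.0),
# giving major/minor semi-axis lengths normalized by forearm circumference.
# These coefficients are invented; they are NOT the paper's CS2 fit.
major_axis = np.poly1d([0.05, -0.12, 0.04, 0.17])
minor_axis = np.poly1d([0.04, -0.10, 0.03, 0.14])

def forearm_surface(rs_length_mm, circumference_mm, n_sections=17, n_pts=64):
    """Reconstruct a scalable 3D point cloud of the forearm surface."""
    points = []
    for s in np.linspace(0.2, 1.0, n_sections):
        a = major_axis(s) * circumference_mm       # de-normalized semi-axes
        b = minor_axis(s) * circumference_mm
        t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
        x = np.full(n_pts, s * rs_length_mm)       # position along the forearm
        points.append(np.column_stack([x, a * np.cos(t), b * np.sin(t)]))
    return np.vstack(points)

# Scale the model to a user with a 250 mm RS length and 260 mm circumference.
print(forearm_surface(250.0, 260.0).shape)   # (1088, 3)
```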

The COVID-19 pandemic has deeply affected communities globally by reprioritizing the means through which various societal sectors operate. Among these sectors, healthcare providers and medical workers have been especially affected by the massive increase in demand for medical services under unprecedented circumstances. Hence, any tool that helps compliance with social guidelines for preventing the spread of COVID-19 will have a positive impact on managing and controlling the outbreak and on reducing the excessive burden on the healthcare system. This perspective article presents the authors' views on the use of novel biosensors and intelligent algorithms embodied in wearable IoMT frameworks to tackle this issue. We discuss how smart IoMT wearables can track certain biomarkers for the detection of COVID-19 in exposed individuals. We enumerate several machine learning algorithms that can be used to process a wide range of collected biomarkers for detecting (a) multiple symptoms of SARS-CoV-2 infection and (b) the dynamic likelihood of contracting the virus through interpersonal interaction. Finally, we describe how the systematic use of smart wearable IoMT devices in various social sectors can intelligently help control the spread of COVID-19 in communities as they enter the reopening phase. We explain how this framework can benefit individuals and their medical correspondents through Systems for Symptom Decoding (SSD), and how the use of this technology can be generalized on a societal level for the control of spread through Systems for Spread Tracing (SST).
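
As a toy illustration of the symptom-detection side of such a pipeline (the SSD idea), here is a minimal classifier over wearable biomarkers. The features, data, and labels are entirely synthetic and chosen only to make the sketch runnable; a real system would need clinically validated signals, labels, and evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
# Synthetic biomarkers: resting heart rate (bpm), skin temperature (C),
# SpO2 (%), respiration rate (breaths/min). Purely illustrative distributions.
healthy = rng.normal([62, 36.5, 97.5, 14], [6, 0.3, 1.0, 2], size=(n // 2, 4))
symptomatic = rng.normal([75, 37.6, 94.5, 19], [8, 0.5, 2.0, 3], size=(n // 2, 4))
X = np.vstack([healthy, symptomatic])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # 1 = "symptomatic" (synthetic label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy on synthetic data: {clf.score(X_te, y_te):.2f}")
```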

With impressive developments in human–robot interaction, it may seem that technology can do anything. Especially in the domain of social robots, which because of their anthropomorphic shape suggest they are much more than programmed machines, people may overtrust the robot's actual capabilities and reliability. This presents a serious problem, especially when personal well-being might be at stake. Hence, insights into the development and influencing factors of overtrust in robots may form an important basis for countermeasures and sensible design decisions. An empirical study [N = 110] explored the development of overtrust using the example of a pet-feeding robot. A 2 × 2 experimental design with repeated measurements contrasted the effect of one's own experience, skill demonstration, and reputation through the experience reports of others. The experiment was realized in a video environment in which participants had to imagine they were going on a four-week safari trip and leaving their beloved cat at home in the care of a pet-feeding robot. Every day, the participants had to make a choice: go on a day safari without any option to call home (risk and reward) or make a boring car trip to another village to check whether the feeding was successful and activate an emergency call if not (safe and no reward). Paralleling cases of overtrust in other domains (e.g., autopilot), the feeding robot performed flawlessly most of the time until, in the fourth week, it failed on three consecutive days, resulting in the cat's death if the participants had decided to go on the day safari on those days. As expected, with repeated positive experience of the robot's reliability in feeding the cat, trust levels rapidly increased and the number of control calls decreased. Compared to one's own experience, skill demonstration and reputation were largely neglected or had only a temporary effect. We integrate these findings into a conceptual model of (over)trust over time and connect them to related psychological concepts such as positivism, instant rewards, inappropriate generalization, wishful thinking, dissonance theory, and social concepts from human–human interaction. Limitations of the present study as well as implications for robot design and future research are discussed.

There has been an explosion of ideas in soft robotics over the past decade, resulting in unprecedented opportunities for end effector design. Soft robot hands offer benefits of low cost, compliance, and customized design, with the promise of dexterity and robustness. The space of opportunities is vast and exciting. However, new tools are needed to understand the capabilities of such manipulators and to facilitate manipulation planning with soft manipulators that exhibit free-form deformations. To address this challenge, we introduce a sampling-based approach to discover and model continuous families of manipulations for soft robot hands. We give an overview of the soft foam robots in production in our lab and describe novel algorithms developed to characterize manipulation families for such robots. Our approach consists of sampling a space of manipulation actions, constructing Gaussian Mixture Model representations covering successful regions, and refining the results to create continuous successful regions representing the manipulation family. The space of manipulation actions is very high-dimensional; we consider models with and without dimensionality reduction and provide a rigorous approach to compare models across different dimensions by comparing coverage of an unbiased test dataset in the full-dimensional parameter space. Results show that some dimensionality reduction is typically useful in populating the models, but without our technique, the amount of dimensionality reduction to use is difficult to predict ahead of time and can depend on the hand and task. The models we produce can be used to plan and carry out successful, robust manipulation actions and to compare competing robot hand designs.
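
The recipe described in the abstract maps onto a few lines of standard tooling. Here is a hedged sketch of the sample, filter, reduce, and model loop using a stand-in success test; the real pipeline evaluates candidate actions on the simulated or physical hand, and the dimensions, thresholds, and component counts below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
actions = rng.uniform(-1.0, 1.0, size=(5000, 12))   # 12-D action parameters (invented)

def succeeded(a):
    # Stand-in success predicate; the real pipeline evaluates each sampled
    # action on the (simulated or physical) soft hand.
    return np.linalg.norm(a[:3]) < 0.5 and a[3] > 0.0

successes = np.array([a for a in actions if succeeded(a)])

# Optional dimensionality reduction before modeling, as the paper explores,
# then a Gaussian Mixture Model over the successful region.
pca = PCA(n_components=4).fit(successes)
gmm = GaussianMixture(n_components=3, random_state=0).fit(pca.transform(successes))

# Draw new candidate actions from the learned family and map them back to
# the full-dimensional action space for execution or evaluation.
candidates = pca.inverse_transform(gmm.sample(10)[0])
print(candidates.shape)   # (10, 12)
```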
