Feed aggregator



Back in February of 2019, we wrote about a sort of humanoid robot thing (?) under development at Caltech, called Leonardo. LEO combines lightweight bipedal legs with torso-mounted thrusters powerful enough to lift the entire robot off the ground, which can handily take care of on-ground dynamic balancing while also enabling some slick aerial maneuvers.

In a paper published today in Science Robotics, the Caltech researchers get us caught up on what they've been doing with LEO for the past several years, and it can now skateboard, slackline, and make dainty airborne hops with exceptionally elegant landings.

Those heels! Seems like a real sponsorship opportunity, right?

The version of LEO you see here is significantly different from the version we first met two years ago. Most importantly, while "Leonardo" used to stand for "LEg ON Aerial Robotic DrOne," it now stands for "LEgs ONboARD drOne," which may be the first even moderately successful re-backronym I've ever seen. Otherwise, the robot has been completely redesigned, with the version you see here sharing zero parts in hardware or software with the 2019 version. We're told that the old robot, and I'm quoting from the researchers here, "unfortunately never worked," in the sense that it was much more limited than the new one—the old design had promise, but it couldn't really walk and the thrusters were only useful for jumping augmentation as opposed to sustained flight.

To enable the new LEO to fly, it now has much lighter legs driven by lightweight servo motors. The thrusters have been changed from two coaxial propellers to four tilted propellers, enabling attitude control in all directions. And everything is now onboard, including computers, batteries, and a new software stack. I particularly love how LEO lands into a walking gait so gently and elegantly. Professor Soon-Jo Chung from Caltech's Aerospace Robotics and Control Lab explains how they did it:

Creatures that have more than two locomotion modes must learn and master how to properly switch between them. Birds, for instance, undergo a complex yet intriguing behavior at the transitional interface of their two locomotion modes of flying and walking. Similarly, the Leonardo robot uses synchronized control of distributed propeller-based thrusters and leg joints to realize smooth transitions between its flying and walking modes. In particular, the LEO robot follows a smooth flying trajectory up to the landing point prior to landing. The forward landing velocity is then matched to the chosen walking speed, and the walking phase is triggered when one foot touches the ground. After the touchdown, the robot continues to walk by tracking its walking trajectory. A state machine is run on-board LEO to allow for these smooth transitions, which are detected using contact sensors embedded in the foot.
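
To make Chung's description concrete, here is a minimal sketch of how such a flight-to-walking transition state machine might be structured. The state names, velocity threshold, and sensor interface are my own illustrative assumptions, not LEO's actual onboard software:

```python
# Hypothetical sketch of LEO-style flight-to-walking mode switching.
# States, thresholds, and interfaces are assumptions for illustration only.
from enum import Enum, auto

class Mode(Enum):
    FLYING = auto()    # tracking a smooth flight trajectory to the landing point
    LANDING = auto()   # descending with forward velocity matched to gait speed
    WALKING = auto()   # legs track the walking trajectory after touchdown

class FlightToWalkTransition:
    def __init__(self, walking_speed=0.20, match_tolerance=0.02):
        self.mode = Mode.FLYING
        self.walking_speed = walking_speed      # m/s, the chosen gait speed
        self.match_tolerance = match_tolerance  # m/s

    def update(self, forward_velocity, foot_contacts):
        """foot_contacts: (left, right) booleans from the foot contact sensors."""
        if self.mode == Mode.FLYING:
            # Match the forward landing velocity to the chosen walking speed.
            if abs(forward_velocity - self.walking_speed) < self.match_tolerance:
                self.mode = Mode.LANDING
        elif self.mode == Mode.LANDING:
            # The walking phase is triggered when one foot touches the ground.
            if any(foot_contacts):
                self.mode = Mode.WALKING
        return self.mode
```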

It's very cool how Leo neatly solves some of the most difficult problems with bipedal robotics, including dynamic balancing and traversing large changes in height. And Leo can also do things that no biped (or human) can do, like actually fly short distances. As a multimodal hybrid of a bipedal robot and a drone, though, it's important to note that Leo's design includes some significant compromises as well. The robot has to be very lightweight in order to fly at all, which limits how effective it can be as a biped without using its thrusters for assistance. And because so much of its balancing requires active input from the thrusters, it's very inefficient relative to both drones and other bipedal robots.

When walking on the ground, LEO (which weighs 2.5 kg and is 75 cm tall) sucks down 544 watts, of which 445 watts go to the propellers and 99 watts are used by the electronics and legs. When flying, LEO's power consumption almost doubles, but it's obviously much faster—the robot has a cost of transport (a measure of efficiency of self-movement) of 108 when walking at a speed of 20 cm/s, dropping to 15.5 when flying at 3 m/s. Compare this to the cost of transport for an average human, which is well under 1, or a typical quadrupedal robot, which is in the low single digits. The most efficient humanoid we've ever seen, SRI's DURUS, has a cost of transport of about 1, whereas the rumor is that the cost of transport for a robot like Atlas is closer to 20.
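
Those numbers check out against the standard definition of cost of transport, CoT = P / (m g v). Here's a quick sanity check; the small gap from the reported 108 presumably comes from rounding in the quoted power or speed, and the flight power below is inferred rather than stated exactly:

```python
# Sanity-checking LEO's reported cost of transport: CoT = P / (m * g * v).
G = 9.81  # m/s^2

def cost_of_transport(power_w, mass_kg, speed_m_s):
    return power_w / (mass_kg * G * speed_m_s)

walking = cost_of_transport(544.0, 2.5, 0.20)
print(f"walking CoT = {walking:.0f}")   # ~111, close to the reported 108

# Inverting the formula: the reported flying CoT of 15.5 at 3 m/s implies
# about 15.5 * 2.5 * 9.81 * 3 = ~1140 W, consistent with power roughly
# doubling relative to the 544 W used while walking.
print(f"implied flight power = {15.5 * 2.5 * G * 3.0:.0f} W")
```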

Long term, this low efficiency could be a problem for LEO, since its battery life is good for only about 100 seconds of flight or 3.5 minutes of walking. But, explains Soon-Jo Chung, efficiency hasn't yet been a priority, and there's more that can potentially be done to improve LEO's performance, although always with some compromises:

The extreme balancing ability of LEO comes at the cost of continuously running propellers, which leads to higher energy consumption than leg-based ground robots. However, this stabilization with propellers allowed the use of low-power leg servo motors and lightweight legs with flexibility, which was a design choice to minimize the overall weight of LEO to improve its flying performance.

There are possible ways to improve the energy efficiency by making different design tradeoffs. For instance, LEO could walk with the reduced support from the propellers by adopting finite feet for better stability or higher power [leg] motors with torque control for joint actuation that would allow for fast and accurate enough foot position tracking to stabilize the walking gait. In such a case, propellers may need to turn on only when the legs fail to maintain stability on the ground without having to run continuously. These solutions would cause a weight increase and lead to a higher energy consumption during flight maneuvers, but they would lower energy consumption during walking. In the case of LEO, we aimed to achieve balanced aerial and ground locomotion capabilities, and we opted for lightweight legs. Achieving efficient walking with lightweight legs similar to LEO's is still an open challenge in the field of bipedal robots, and it remains to be investigated in future work.

A rendering of a future version of LEO with fancy yellow skins

At this point in its development, the Caltech researchers have been focusing primarily on LEO's mobility systems, but they hope to get LEO doing useful stuff out in the world, and that almost certainly means giving the robot autonomy and manipulation capabilities. At the moment, LEO isn't particularly autonomous, in the sense that it follows predefined paths and doesn't decide on its own whether it should be using walking or flying to traverse a given obstacle. But the researchers are already working on ways in which LEO can make these decisions autonomously through vision and machine learning.

As for manipulation, Chung tells us that "a new version of LEO could be appended with lightweight manipulators that have similar linkage design to its legs and servo motors to expand the range of tasks it can perform," with the goal of "enabling a wide range of robotic missions that are hard to accomplish by the sole use of ground or aerial robots."

Perhaps the most well-suited applications for LEO would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and could use robotic workers. For instance, high voltage line inspection or monitoring of tall bridges could be good applications for LEO, and LEO has an onboard camera that can be used for such purposes. In such applications, conventional biped robots have difficulties with reaching the site, and standard multi-rotor drones have an issue with stabilization in high disturbance environments. LEO uses the ground contact to its advantage and, compared to a standard multi-rotor, is more resistant to external disturbances such as wind. This would improve the safety of the robot operation in an outdoor environment where LEO can maintain contact with a rigid surface.

It's also tempting to look at LEO's ability to more or less just bypass so many of the challenges in bipedal robotics and think about ways in which it could be useful in places where bipedal robots tend to struggle. But it's important to remember that because of the compromises inherent in its multimodal design, LEO will likely be best suited for very specific tasks that can most directly leverage what it's particularly good at. High voltage line and bridge inspection is a good start, and you can easily imagine other inspection tasks that require stability combined with vertical agility. Hopefully, improvements in efficiency and autonomy will make this possible, although I'm still holding out for what Caltech's Chung originally promised: "the ultimate form of demonstration for us will be to build two of these Leonardo robots and then have them play tennis or badminton."



When DARPA announced that the Subterranean Challenge Final Event would take place in a giant cavern complex in Louisville, and that it would include elements of tunnels, caves, and the urban underground, we had very high hopes for what the agency would put together. And, predictably, DARPA vastly exceeded those hopes. Inside of the Louisville Mega Cavern, DARPA worked for months to construct an incredible course from scratch, full of the kind of detail that you'd expect to see on a movie set. But just like a movie set, it was all temporary, and even worse, very few people will ever be able to really appreciate what DARPA has done—the course was never open to the public, and most of the teams themselves only really experienced the course through their robots and during a very brief post-competition tour.

After the competition ended on Friday, though, DARPA did give any teams that were interested an opportunity to run their robots around the course for a couple of hours. We were able to tag along with Team CERBERUS and Team CSIRO Data61, the first and second place SubT Challenge winners (separated in score by just one single minute!), as they and their robots explored the course in person and unsupervised for the first (and last) time.


As you look through these pictures, try to appreciate everything that DARPA has done to make the SubT Final course as realistic as possible. Everything was designed and built and sculpted and painted entirely by hand, based on real underground environments. DARPA also added an assortment of robot-specific challenges for both perception and mobility, which had a side-effect of making some parts of the course challenging for humans to traverse as well. Inside, it was often very dark, very close, and frequently slippery and wet. I was thankful both for my hard hat and for the well-lit robots that I followed around the course with their teams of human operators as we explored DARPA's fantasy subterranean world.

Special thanks to Team CERBERUS and Team CSIRO Data61 for letting me tag along with them.

Last week, DARPA also posted some videos of the course, including walkthroughs with artifact placements and also remote footage of all of the final competition runs. We've included a couple below, but the rest can be found on DARPA's YouTube channel.

DARPA Subterranean Challenge Final Event Course Walkthrough - Artifact Configuration 3
DARPA Subterranean Challenge Finals Event Prize Round Scored Run CSIRO Data61






On the roadmap to building completely autonomous artificial bio-robots, all major aspects of robotic functions, namely, energy generation, processing, sensing, and actuation, need to be self-sustainable and function in the biological realm. Microbial Fuel Cells (MFCs) provide a platform technology for achieving this goal. In a series of experiments, we demonstrate that MFCs can be used as living, autonomous sensors in robotics. In this work, we focus on thermal sensing that is akin to thermoreceptors in mammalian entities. We therefore designed and tested an MFC-based thermosensor system for utilization within artificial bio-robots such as EcoBots. In open-loop sensor characterization, with a controlled load resistance and feed rate, the MFC thermoreceptor was able to detect stimuli of 1 min directed from a distance of 10 cm causing a temperature rise of ∼1°C at the thermoreceptor. The thermoreceptor responded to continuous stimuli with a minimum interval of 384 s. In a practical demonstration, a mobile robot was fitted with two artificial thermosensors, as environmental thermal detectors for thermotactic application, mimicking thermotaxis in biology. In closed-loop applications, continuous thermal stimuli were detected at a minimum time interval of 160 s, without the need for complete thermoreceptor recovery. This enabled the robot to detect thermal stimuli and steer away from a warmer thermal source within the rise of 1°C. We envision that the thermosensor can be used for future applications in robotics, including as a potential sensor mechanism for maintaining thermal homeostasis.
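
As a concrete illustration of the closed-loop thermotaxis the authors describe, here is a minimal, Braitenberg-style steering sketch: two thermosensors, and the robot turns away from the warmer side once the temperature rise crosses a threshold. The interface, threshold, and gain are illustrative assumptions, not the paper's implementation:

```python
# Hypothetical thermotactic steering from two thermosensor readings.
# Thresholds and gains are illustrative assumptions only.
def thermotaxis_command(temp_left, temp_right, baseline,
                        rise_threshold=1.0, turn_gain=0.5):
    """Return (forward_speed, turn_rate); positive turn_rate turns left."""
    rise_left = temp_left - baseline
    rise_right = temp_right - baseline
    if max(rise_left, rise_right) < rise_threshold:
        return 1.0, 0.0  # no stimulus detected: keep driving straight
    # Steer away from the warmer side: a warmer right sensor turns the
    # robot left, and vice versa, within a ~1 degree C rise as in the paper.
    return 0.5, turn_gain * (rise_right - rise_left)
```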

The SMOOTH-robot is a mobile robot that—due to its modularity—combines a relatively low price with the possibility to be used for a large variety of tasks in a wide range of domains. In this article, we demonstrate the potential of the SMOOTH-robot through three use cases, two of which were performed in elderly care homes. The robot is designed so that it can either make itself ready or be quickly changed by staff to perform different tasks. We carefully considered important design parameters such as the appearance, intended and unintended interactions with users, and the technical complexity, in order to achieve high acceptability and a sufficient degree of utilization of the robot. Three demonstrated use cases indicate that such a robot could contribute to an improved work environment, having the potential to free resources of care staff which could be allocated to actual care-giving tasks. Moreover, the SMOOTH-robot can be used in many other domains, as we will also exemplify in this article.

The passive, mechanical adaptation of slender, deformable robots to their environment, whether the robot be made of hard materials or soft ones, makes them desirable as tools for medical procedures. Their physical compliance can provide a form of embodied intelligence that allows the natural dynamics of interaction between the robot and its environment to guide the evolution of the combined robot-environment system. To design these systems, the problems of analysis, design optimization, control, and motion planning remain of great importance because, in general, the advantages afforded by increased mechanical compliance must be balanced against penalties such as slower dynamics, increased difficulty in the design of control systems, and greater kinematic uncertainty. The models that form the basis of these problems should be reasonably accurate yet not prohibitively expensive to formulate and solve. In this article, the state-of-the-art modeling techniques for continuum robots are reviewed and cast in a common language. Classical theories of mechanics are used to outline formal guidelines for the selection of appropriate degrees of freedom in models of continuum robots, both in terms of number and of quality, for geometrically nonlinear models built from the general family of one-dimensional rod models of continuum mechanics. Consideration is also given to the variety of actuators found in existing designs, the types of interaction that occur between continuum robots and their biomedical environments, the imposition of constraints on degrees of freedom, and to the numerical solution of the family of models under study. Finally, some open problems of modeling are discussed and future challenges are identified.

A key challenge in achieving effective robot teleoperation is minimizing teleoperators’ cognitive workload and fatigue. We set out to investigate the extent to which gaze tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking, captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine if teleoperation workload can be inferred by combining the gaze duration, fixation count, task completion time, and complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information of the robot workspace was captured using four cameras, positioned to capture the robot workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed through four quadrants (top-left, top-right, bottom-left, and bottom-right quadrants) of participants’ video feedback computer screen, respectively. We found that the gaze duration and the fixation count were highly dependent on the stage of the task and the feedback scenario utilized. The results revealed that combining feedback modalities reduced the cognitive workload (inferred by investigating the correlation between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and complexity of robot joint movements. Participants’ gaze outside the areas of interest (distractions) was not influenced by feedback scenarios. A learning effect was observed in the use of the controller for all participants as they repeated the task with different feedback combination scenarios. To design a system for teleoperation, applicable in healthcare, we found that the analysis of teleoperators’ gaze can help understand how teleoperators interact with the system, hence making it possible to develop the system from the teleoperators’ stand point.
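
As a toy illustration of the correlation analysis described here (gaze duration against motion complexity measured as summed joint steps), this sketch uses made-up numbers; the study's actual dataset and statistical pipeline are of course richer:

```python
# Toy example: Pearson correlation between gaze duration and robot motion
# complexity. The data values below are fabricated for illustration only.
import numpy as np

gaze_duration_s = np.array([12.3, 18.7, 9.4, 22.1, 15.0])  # per task stage
joint_steps = np.array([410, 640, 300, 750, 505])           # summed joint steps

r = np.corrcoef(gaze_duration_s, joint_steps)[0, 1]
print(f"Pearson r = {r:.2f}")  # a positive r mirrors the reported finding
```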

The scalability of traveling salesperson problem (TSP) algorithms for handling large-scale problem instances has been an open problem for a long time. We arranged a so-called Santa Claus challenge and invited people to submit their algorithms to solve a TSP problem instance that is larger than 1 M nodes given only 1 h of computing time. In this article, we analyze the results and show which design choices are decisive in providing the best solution to the problem with the given constraints. There were three valid submissions, all based on local search, including k-opt up to k = 5. The most important design choice turned out to be the localization of the operator using a neighborhood graph. The divide-and-merge strategy suffers a 2% loss of quality. However, via parallelization, the result can be obtained within less than 2 min, which can make a key difference in real-life applications.
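
To illustrate the design choice the authors found decisive, localizing the search operator with a neighborhood graph, here is a minimal 2-opt sketch that only examines moves between a city and its k nearest neighbors rather than all O(n²) pairs. The submissions used k-opt up to k = 5; plain 2-opt is shown for brevity, and the structure is my own simplification:

```python
# Sketch of neighborhood-localized 2-opt for TSP. Simplified illustration;
# the challenge submissions used k-opt moves up to k = 5.
import math

def localized_two_opt(tour, pts, neighbors):
    """tour: list of city indices; pts: city coordinates;
    neighbors[c]: the k nearest cities of c (the neighborhood graph)."""
    pos = {city: i for i, city in enumerate(tour)}
    improved = True
    while improved:
        improved = False
        for a_idx in range(len(tour)):
            a, a_next = tour[a_idx], tour[(a_idx + 1) % len(tour)]
            for b in neighbors[a]:          # only k candidates, not all cities
                b_idx = pos[b]
                b_next = tour[(b_idx + 1) % len(tour)]
                if b in (a, a_next) or b_next == a:
                    continue
                # Gain from swapping edges (a,a_next),(b,b_next)
                # for (a,b),(a_next,b_next).
                delta = (math.dist(pts[a], pts[b])
                         + math.dist(pts[a_next], pts[b_next])
                         - math.dist(pts[a], pts[a_next])
                         - math.dist(pts[b], pts[b_next]))
                if delta < -1e-9:           # improving move: reverse segment
                    lo, hi = sorted((a_idx + 1, b_idx + 1))
                    tour[lo:hi] = reversed(tour[lo:hi])
                    pos = {city: i for i, city in enumerate(tour)}
                    improved = True
                    break
    return tour
```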

Shape-sensing in real-time is a key requirement for the development of advanced algorithms for concentric tube continuum robots when safe interaction with the environment is important e.g., for path planning, advanced control, and human-machine interaction. We propose a real-time shape-estimation algorithm for concentric tube continuum robots based on the force-torque information measured at the tubes’ basis. It extends a shape estimation algorithm for elastic rods based on discrete Kirchhoff rod theory. For simplicity and efficiency of calculation, we combine it with a model under piece-wise constant curvature assumption, in which we model a concentric tube continuum robot as a combination of segments of planar constant curvatures lying on different equilibrium planes. We evaluate our approach for a single and two combined additively manufactured tubes and achieve an estimation frequency of 333 Hz for two combined tubes with a mean deviation along the backbone of the tubes of 1.91–5.22 mm.
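
For readers unfamiliar with the piecewise constant curvature assumption, here is a minimal sketch of the forward kinematics it implies: each segment is a planar arc of constant curvature lying in its own bending plane, and the backbone is recovered by composing the segment transforms. This uses the standard (kappa, phi, length) parameterization, not the authors' code:

```python
# Piecewise-constant-curvature (PCC) backbone reconstruction, standard
# parameterization: curvature kappa, bending-plane angle phi, arc length.
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def segment_transform(kappa, phi, length):
    """Pose change across one planar constant-curvature segment."""
    theta = kappa * length                     # total bend angle of the arc
    if abs(kappa) < 1e-9:                      # straight-segment limit
        px, pz = 0.0, length
    else:
        px, pz = (1 - np.cos(theta)) / kappa, np.sin(theta) / kappa
    ct, st = np.cos(theta), np.sin(theta)
    arc = np.array([[ ct, 0, st, px],          # arc in the local x-z plane
                    [  0, 1,  0,  0],
                    [-st, 0, ct, pz],
                    [  0, 0,  0,  1.0]])
    return rot_z(phi) @ arc @ rot_z(-phi)      # rotate into the bending plane

def backbone_poses(segments):
    """segments: iterable of (kappa, phi, length) tuples;
    returns the cumulative base-frame pose after each segment."""
    T, poses = np.eye(4), []
    for kappa, phi, length in segments:
        T = T @ segment_transform(kappa, phi, length)
        poses.append(T.copy())
    return poses
```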



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]

Let us know if you have suggestions for next week, and enjoy today's videos, more below!

Mini Pupper is now on Kickstarter!

The basic kit is $250, which includes just the custom parts, so you'll need to add your own 3D printed parts, some of the electronics, and the battery. A complete Mini Pupper kit is $500, or get it fully assembled for an extra $60.

Everything should (with all the usual Kickstarter caveats in mind) ship in November, which is plenty of time to get it to me for the holidays (for any of my family reading this).

[ Mini Pupper ]

An inflatable robotic hand design gives amputees real-time tactile control and enables a wide range of daily activities, such as zipping a suitcase, shaking hands, and petting a cat. The smart hand is soft and elastic, weighs about half a pound, and costs a fraction of comparable prosthetics.

[ MIT ]

Among the first electronic mobile robots were the experimental machines of neuroscientist W. Grey Walter. Walter studied the brain's electrical activity at the Burden Neurological Institute (BNI) near Bristol, England. His battery-powered robots were models to test his theory that a minimum number of brain cells can control complex behavior and choice.

[ NMAH ]

Autonomous Micro Aerial Vehicles (MAVs) have the potential to be employed for surveillance and monitoring tasks. By perching and staring on one or multiple locations, aerial robots can save energy while concurrently increasing their overall mission time without actively flying. In this paper, we address the estimation, planning, and control problems for autonomous perching on inclined surfaces with small quadrotors using visual and inertial sensing.

[ ARPL NYU ]

Human environments are filled with large open spaces that are separated by structures like walls, facades, glass windows, etc. Most often, these structures are largely passive, offering little to no interactivity. In this paper, we present Duco, a large-scale electronics fabrication robot that enables room-scale & building-scale circuitry to add interactivity to vertical everyday surfaces. Duco negates the need for any human intervention by leveraging a hanging robotic system that automatically sketches multi-layered circuitry to enable novel large-scale interfaces.

The key idea behind Duco is that it achieves single-layer or multi-layer circuit fabrication on 2D surfaces as well as 2D cutouts that can be assembled into 3D objects by loading various functional inks (e.g., conductive, dielectric, or cleaning) to the wall-hanging drawing robot, as well as employing an optional laser cutting head as a cutting tool.

[ Duco ]

Thanks Sai!

When you can't have robots fight each other in person because pandemic, you have to get creative.

[ ROBO-ONE ]

Baidu researchers have proposed a novel reinforcement learning-based evolutionary foot trajectory generator that can continually optimize the shape of the output trajectory for a quadrupedal robot, from walking over the balance beam to climbing up and down slopes. Our approach can solve a range of challenging tasks in simulation by learning from scratch, including walking on a balance beam and crawling through a cave. To further verify the effectiveness of our approach, we deploy the controller learned in the simulation on a 12-DoF quadrupedal robot, and it can successfully traverse challenging scenarios with efficient gaits.

[ Paper ]

This is neat: a robot with just one depth camera can poke around a little bit where it can't see, and then use those contacts to give it a better idea of what's in front of it.

[ CLASP ]

Here's a robotics problem: objects that look very similar but aren't! How can you efficiently tell the difference between objects that look almost the same, and how do you know when you need to make that determination?

[ Paper ]

Hyundai Motor Group has introduced its first project with Boston Dynamics. Meet the new 'Factory Safety Service Robot,' based on Boston Dynamics' quadruped Spot and designed to support industrial site safety.

[ Boston Dynamics ]

I don't necessarily know how much credit to give DARPA for making this happen, but even small drones make constrained obstacle avoidance look so easy now.

[ ARL ]

Huh, maybe all in-home robots should have spiky wheels and articulated designs, since this seems very effective.

[ Transcend Robotics ]

Robotiq, who makes the grippers that everybody uses for everything, now has a screw driving solution.

[ Robotiq ]

Kodiak's latest autonomous truck design is interesting because of how they've structured their sensors: almost everything seems to be in two chonky pods that take the place of the wing mirrors.

[ Kodiak ]

Thanks Kylee!

An ICRA 2021 plenary talk from Robert Wood, on Soft Robotics for Delicate and Dexterous Manipulation.

[ ICRA 2021 ]

This week's Lockheed Martin Robotics Seminar features Henrik Christensen on "Deploying autonomous vehicles for micro-mobility on a university campus."

[ UMD ]








Learning to play a musical instrument involves skill learning and requires long-term practicing to reach expert levels. Research has already proven that the assistance of a robot can improve children’s motivation and performance during practice. In an earlier study, we showed that the specific role (evaluative role versus nonevaluative role) the robot plays can determine children’s motivation and performance. In the current study, we argue that the role of the robot has to be different for children in different learning stages (musical instrument expertise levels). Therefore, this study investigated whether children in different learning stages would have higher motivation when assisted by a robot in different supporting roles (i.e., evaluative role versus nonevaluative role). We conducted an empirical study in a real practice room of a music school with 31 children who were at different learning stages (i.e., beginners, developing players, and advanced players). In this study, every child practiced for three sessions: practicing alone, assisted by the evaluative robot, or assisted by the nonevaluative robot (in a random order). We measured motivation by using a questionnaire and analyzing video data. Results showed a significant interaction between condition (i.e., alone, evaluative robot, and nonevaluative robot) and learning stage groups indicating that children in different learning stage groups had different levels of motivation when practicing alone or with an evaluative or nonevaluative robot. More specifically, beginners had higher persistence when practicing with the nonevaluative robot, while advanced players expressed higher motivation after practicing with a robot than alone, but no difference was found between the two robot roles. Exploratory results also indicated that gender might have an interaction effect with the robot roles on child’s motivation in music practice with social robots. This study offers more insight into the child-robot interaction and robot role design in musical instrument learning. Specifically, our findings shed light on personalization in HRI, that is, from adapting the role of the robot to the characteristics and the development level of the user.

Human-object interaction is of great relevance for robots to operate in human environments. However, state-of-the-art robotic hands are far from replicating human skills. It is, therefore, essential to study how humans use their hands to develop similar robotic capabilities. This article presents a deep dive into hand-object interaction and human demonstrations, highlighting the main challenges in this research area and suggesting desirable future developments. To this end, the article presents a general definition of the hand-object interaction problem together with a concise review for each of the main subproblems involved, namely: sensing, perception, and learning. Furthermore, the article discusses the interplay between these subproblems and describes how their interaction in learning from demonstration contributes to the success of robot manipulation. In this way, the article provides a broad overview of the interdisciplinary approaches necessary for a robotic system to learn new manipulation skills by observing human behavior in the real world.

This paper presents a novel mechatronic exoskeleton architecture for finger rehabilitation. The system consists of an underactuated kinematic structure that enables the exoskeleton to act as an adaptive finger stimulator. The exoskeleton has sensors for motion detection and control. The proposed architecture offers three main advantages. First, the exoskeleton enables accurate quantification of subject-specific finger dynamics. The configuration of the exoskeleton can be fully reconstructed using measurements from three angular position sensors placed on the kinematic structure. In addition, the actuation force acting on the exoskeleton is recorded. Thus, the range of motion (ROM) and the force and torque trajectories of each finger joint can be determined. Second, the adaptive kinematic structure allows the patient to perform various functional tasks. The force control of the exoskeleton acts like a safeguard and limits the maximum possible joint torques during finger movement. Last, the system is compact, lightweight and does not require extensive peripherals. Due to its safety features, it is easy to use in the home. Applicability was tested in three healthy subjects.



We are well into the third wave of major investment in artificial intelligence. So it's a fine time to take a historical perspective on the current success of AI. In the 1960s, the early AI researchers often breathlessly predicted that human-level intelligent machines were only 10 years away. That form of AI was based on logical reasoning with symbols, and was carried out with what today seem like ludicrously slow digital computers. Those same researchers considered and rejected neural networks.

This article is part of our special report on AI, “The Great AI Reckoning.”

In the 1980s, AI's second age was based on two technologies: rule-based expert systems—a more heuristic form of symbol-based logical reasoning—and a resurgence in neural networks triggered by the emergence of new training algorithms. Again, there were breathless predictions about the end of human dominance in intelligence.

The third and current age of AI arose during the early 2000s with new symbolic-reasoning systems based on algorithms capable of solving a class of problems called 3SAT and with another advance called simultaneous localization and mapping. SLAM is a technique for building maps incrementally as a robot moves around in the world.

In the early 2010s, this wave gathered powerful new momentum with the rise of neural networks learning from massive data sets. It soon turned into a tsunami of promise, hype, and profitable applications.



Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low. In 2002, iRobot, a company that I cofounded, introduced the first mass-market autonomous home-cleaning robot, the Roomba, at a price that severely constricted how much AI we could endow it with. The limited AI wasn't a problem, though. Our worst failure scenarios had the Roomba missing a patch of floor and failing to pick up a dustball.

That same year we started deploying the first of thousands of robots in Afghanistan and then Iraq to be used to help troops disable improvised explosive devices. Failures there could kill someone, so there was always a human in the loop giving supervisory commands to the AI systems on the robot.



These days AI systems autonomously decide what advertisements to show us on our Web pages. Stupidly chosen ads are not a big deal; in fact they are plentiful. Likewise search engines, also powered by AI, show us a list of choices so that we can skip over their mistakes with just a glance. On dating sites, AI systems choose who we see, but fortunately those sites are not arranging our marriages without us having a say in it.

So far the only self-driving systems deployed on production automobiles, no matter what the marketing people may say, are all Level 2. These systems require a human driver to keep their hands on the wheel and to stay attentive at all times so that they can take over immediately if the system is making a mistake. And there have already been fatal consequences when people were not paying attention.

Just about every successful deployment of AI has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low.

These haven't been the only terrible failures of AI systems when no person was in the loop. For example, people have been wrongly arrested based on face-recognition technology that works poorly on racial minorities, making mistakes that no attentive human would make.

Sometimes we are in the loop even when the consequences of failure aren't dire. AI systems power the speech and language understanding of our smart speakers and the entertainment and navigation systems in our cars. We, the consumers, soon adapt our language to each such AI agent, quickly learning what they can and can't understand, in much the same way as we might with our children and elderly parents. The AI agents are cleverly designed to give us just enough feedback on what they've heard us say without getting too tedious, while letting us know about anything important that may need to be corrected. Here, we, the users, are the people in the loop. The ghost in the machine, if you will.

Ask not what your AI system can do for you, but instead what it has tricked you into doing for it.

This article appears in the October 2021 print issue as "A Human in the Loop."


Special Report: The Great AI Reckoning

READ NEXT: How Deep Learning Works

Or see the full report for more articles on the future of AI.








Yesterday, Amazon announced a mobile home robot called Astro, which we've been expecting for the past several years. Astro's got wheels. It's got cameras. It's got a screen. It has two cup holders for some reason. It costs $1,000. I am very, very confused.

"What are we going to do with a robot," a woman asks in this PR video about Astro. Watch as the rest of the video fails to answer that very question, which is a question you'd ideally want to have completely locked down before you build a robot and shoot a PR video:

According to Amazon, the following "are just a few of the ways Astro can be used around the house," although I'm frankly astonished they were able to come up with even this many:

Brings Alexa to you around the home: When you're home, Astro brings the benefits of Alexa to you, including information, entertainment, smart home control, and more.
Check in on your home: When you're away, Astro helps provide the peace of mind that comes with knowing your home is safe. Astro can move autonomously around your home, navigate to check in on specific areas, show you a live view of rooms through the Astro app, or even send alerts if it detects an unrecognized person.
Provides peace of mind with Ring: Astro also works with Ring, adding to the peace of mind in keeping your home safe. With Ring Protect Pro, a new subscription service from Ring, you can set Astro to autonomously patrol your home when you're out, [and] proactively investigate when an event is detected.
Helps you look out for loved ones: Astro will be able to help customers who are remotely caring for elderly relatives and loved ones.

Oh, that's how they came up with four, because two of them are the same. But of course, Amazon is quite correct, Astro can certainly be used around the house in these ways. As with any robot, however, a crucial question to ask is whether that robot is really the best way of solving a problem, or if instead the robot is actually just a much flashier way of doing something that could be done more efficiently through existing non-robotic technology. In other words, it's not enough for a robot to be useful, it also has to be uniquely useful in a way that doesn't place additional burdens of complexity or efficiency or cost on the end user. And I just don't see that working out for Astro. Let's take a closer look at what it can do:

Brings Alexa to you around the home: Sure. But you know how else you can have Alexa around your home? Echo Dots, which are $40 each. Also, you can put Echo Dots both upstairs and downstairs, so if you have a multi-floor home, you're going to need more than just an Astro anyway if you want to be able to talk to Alexa everywhere.

Check in on your home: This is a common application for mobile home robots to advertise. In fact, Amazon already has a mobile home robot that can check in on your home—the Ring drone. I don't think either Astro or the Ring drone work much better than having stationary security cameras in the relevant areas of your home, and in many ways, the robots are far less useful than such cameras. The robots are also way more expensive. Read more about this in our article on the Ring security drone.

Provides peace of mind with Ring: Uh, see above?

It's not enough for a robot to be useful, it also has to be uniquely useful in a way that doesn't place additional burdens on the end user.

Helps you look out for loved ones: This one is slightly more complicated. If we acknowledge the fact that there are situations in which not being able to reach a family member via phone might be a concern, the problem that you face with Astro is that people are often uncomfortable having a robot with mobility and cameras being accessible from outside their home without their in-the-moment consent. I ran into this issue when testing the Ohmni telepresence robot with my partner's elderly and distant relative: telepresence was certainly nice to have there, but it came along with privacy concerns. The simple fact is that people don't want to be surprised by a mobile robot in their home. Amazon, we should note, seems to have taken privacy quite seriously with Astro. Users can designate off-limits areas, and there are easy ways to disable (in hardware) audio, video, and mobility. But if you do that, then the advantages that Astro offers go away. Users therefore have to choose between being able to have a robot check on them when they might need it, and maintaining their expectation of privacy. It's not Amazon's fault that telepresence robots work this way, but it's also not something that they've solved with Astro.

The video showed some other functions that Amazon chose not to enumerate in their blog post, so let's address those too:

Mobile telepresence: I am a fan of mobile telepresence. I think there's tangible value there. But Astro is not a good mobile telepresence platform, because that's not what it was designed for. Amazon's PR video, you'll notice, shows the telepresence application in pretty much the only possible way where the small size of the robot wouldn't be incredibly annoying: interacting with someone who is literally on the floor. If you were instead trying to talk to a standing (or even sitting) adult, you'd be looking up their nose, or worse. Amazon itself has better remote presence solutions; they're not mobile, but putting a stationary telepresence device on a table or countertop makes much more sense than a floor-level robot.

Dancing: Seemingly every social home robot video has a bizarre dance scene like this. Who actually does this? Is this a real thing that happens?

Beer delivery: Ask yourself the obvious question of how that beer got placed into the robot in the first place. I guess if you want to pay $1,000 to not have to walk from the fridge to the living room, that's entirely up to you.

Now, just because I'm exceedingly skeptical about the usefulness and cost effectiveness of Astro doesn't mean that it's a bad robot. Doing simultaneous localization and mapping (SLAM) in real home environments is a significant challenge. The mapping and navigation looks smooth, and the robot looks brisk enough to keep up with a walking human. I like the periscope idea, despite how fragile it looks. And the HRI elements seem well thought-out. Having said all that, it's important to remember that we've only seen Astro operate in what are presumably heavily edited PR videos, which may not accurately represent the capabilities of the robot. Here's a bit more detail about the technical side of Astro (mostly interesting), along with some other comments (mildly infuriating).

Several folks in the video talk about making science fiction a reality, which is a predictable and entirely inevitable expectations fail for any robot. We've learned over and over again that with a robot like this, it's super important to keep expectations in check, which brings me to one of the other comments in the video, about how this type of product didn't exist before. It totally did. Other companies have tried it and not gotten it to work. Maybe not this exact combination of features because every robot is different, but autonomous mobile home robots? Yeah, we've seen them before, and we know how hard it is, and I'm not convinced that Astro is different enough or better enough to somehow succeed where others have not. As much as I respect Henrik Christensen, I do not understand at all when he says that "Astro is a huge step forward." Maybe I'm missing something, but I just don't get it.

The one thing that really did resonate with me in that video was someone saying that Astro is just Amazon's first robot like this, and that there will be more. Perhaps the best way to look at Astro is as a learning experience for Amazon, and fortunately, unlike many other companies who tried building similar robots, Amazon can almost certainly survive Astro not turning out to be a commercial success. I recognize that it seems premature to be so pessimistic about it, but I really feel like we're looking at just another home robot without a really useful and compelling application. And for $1,000 if you're part of the Day 1 Editions program, and $1,500 for everyone else, Astro seems like it's going to be a very hard sell.



Yesterday, Amazon announced a mobile home robot called Astro, which we've been expecting for the past several years. Astro's got wheels. It's got cameras. It's got a screen. It has two cup holders for some reason. It costs $1,000. I am very, very confused.

"What are we going to do with a robot," a woman asks in this PR video about Astro. Watch as the rest of the video fails to answer that very question, which is a question you'd ideally want to have completely locked down before you build a robot and shoot a PR video:

According to Amazon, the following "are just a few of the ways Astro can be used around the house," although I'm frankly astonished they were able to come up with even this many:

Brings Alexa to you around the home: When you're home, Astro brings the benefits of Alexa to you, including information, entertainment, smart home control, and more.
Check in on your home: When you're away, Astro helps provide the peace of mind that comes with knowing your home is safe. Astro can move autonomously around your home, navigate to check in on specific areas, show you a live view of rooms through the Astro app, or even send alerts if it detects an unrecognized person.
Provides peace of mind with Ring: Astro also works with Ring, adding to the peace of mind in keeping your home safe. With Ring Protect Pro, a new subscription service from Ring, you can set Astro to autonomously patrol your home when you're out, [and] proactively investigate when an event is detected.
Helps you look out for loved ones: Astro will be able to help customers who are remotely caring for elderly relatives and loved ones.

Oh, that's how they came up with four, because two of them are the same. But of course, Amazon is quite correct, Astro can certainly be used around the house in these ways. As with any robot, however, a crucial question to ask is whether that robot is really the best way of solving a problem, or if instead the robot is actually just a much flashier way of doing something that could be done more efficiently through existing non-robotic technology. In other words, it's not enough for a robot to be useful, it also has to be uniquely useful in a way that doesn't place additional burdens of complexity or efficiency or cost on the end user. And I just don't see that working out for Astro. Let's take a closer look at what it can do:

Brings Alexa to you around the home: Sure. But you know how else you can have Alexa around your home? Echo Dots, which are $40 each. Also, Echo Dots can go both upstairs and downstairs, while Astro can't climb stairs, so if you have a multi-floor home, you're going to need more than just an Astro anyway if you want to be able to talk to Alexa everywhere.

Check in on your home: This is a common application for mobile home robots to advertise. In fact, Amazon already has a mobile home robot that can check in on your home—the Ring drone. I don't think either Astro or the Ring drone works much better than having stationary security cameras in the relevant areas of your home, and in many ways, the robots are far less useful than such cameras. The robots are also way more expensive. Read more about this in our article on the Ring security drone.

Provides peace of mind with Ring: Uh, see above?

It's not enough for a robot to be useful; it also has to be uniquely useful, in a way that doesn't place additional burdens on the end user.

Helps you look out for loved ones: This one is slightly more complicated. If we acknowledge that there are situations in which not being able to reach a family member by phone might be a concern, the problem you face with Astro is that people are often uncomfortable with a mobile, camera-equipped robot in their home that can be accessed from outside without their in-the-moment consent. I ran into this issue when testing the Ohmni telepresence robot with my partner's elderly and distant relative: telepresence was certainly nice to have, but it came with privacy concerns. The simple fact is that people don't want to be surprised by a mobile robot in their home. Amazon, we should note, seems to have taken privacy quite seriously with Astro. Users can designate off-limits areas, and there are easy ways to disable (in hardware) audio, video, and mobility. But if you do that, the advantages that Astro offers go away. Users therefore have to choose between having a robot that can check on them when they might need it, and maintaining their expectation of privacy. It's not Amazon's fault that telepresence robots work this way, but it's also not something that Amazon has solved with Astro.

The video showed some other functions that Amazon chose not to enumerate in their blog post, so let's address those too:

Mobile telepresence: I am a fan of mobile telepresence. I think there's tangible value there. But Astro is not a good mobile telepresence platform, because that's not what it was designed for. Amazon's PR video, you'll notice, shows the telepresence application in pretty much the only possible way where the small size of the robot wouldn't be incredibly annoying: interacting with someone who is literally on the floor. If you were instead trying to talk to a standing (or even sitting) adult, you'd be looking up their nose, or worse. Amazon itself has better remote presence solutions; they're not mobile, but putting a stationary telepresence device on a table or countertop makes much more sense than a floor-level robot.

Dancing: It seems like every social home robot video has to include a bizarre dance scene like this. Who actually does this? Is this a real thing that happens?

Beer delivery: Ask yourself the obvious question of how that beer got placed into the robot in the first place. I guess if you want to pay $1,000 to not have to walk from the fridge to the living room, that's entirely up to you.

Now, just because I'm exceedingly skeptical about the usefulness and cost effectiveness of Astro doesn't mean that it's a bad robot. Doing simultaneous localization and mapping (SLAM) in real home environments is a significant challenge. The mapping and navigation look smooth, and the robot looks brisk enough to keep up with a walking human. I like the periscope idea, despite how fragile it looks. And the HRI elements seem well thought out. Having said all that, it's important to remember that we've only seen Astro operate in what are presumably heavily edited PR videos, which may not accurately represent the capabilities of the robot. Here's a bit more detail about the technical side of Astro (mostly interesting), along with some other comments (mildly infuriating).
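Amazon hasn't published details of Astro's navigation stack, but to give a sense of what indoor mapping involves, here's a minimal occupancy-grid update, one standard ingredient of SLAM systems. The grid resolution, log-odds constants, and the simulated range reading are all illustrative assumptions, not anything specific to Astro.

    # Minimal occupancy-grid update, one ingredient of indoor SLAM.
    # All parameters are illustrative; Amazon has not published Astro's stack.
    import numpy as np

    GRID = np.zeros((200, 200))   # log-odds occupancy, 5 cm cells -> 10 m x 10 m
    L_OCC, L_FREE = 0.85, -0.4    # log-odds increments for hit / pass-through

    def update(grid, pose, bearing, dist, cell=0.05):
        """Mark cells along one range ray as free, and the endpoint as occupied."""
        x0, y0 = round(pose[0] / cell), round(pose[1] / cell)
        x1 = round((pose[0] + dist * np.cos(bearing)) / cell)
        y1 = round((pose[1] + dist * np.sin(bearing)) / cell)
        n = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(n):                      # cells the ray passes through
            x = x0 + (x1 - x0) * i // n
            y = y0 + (y1 - y0) * i // n
            grid[y, x] += L_FREE
        grid[y1, x1] += L_OCC                   # cell where the ray ended

    # One simulated range reading: robot at (5 m, 5 m), wall 2 m straight ahead.
    update(GRID, (5.0, 5.0), 0.0, 2.0)

The hard part this sketch omits is the "localization" half: estimating the robot's pose accurately enough, amid wheel slip and sensor noise, that updates like this land in the right cells.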

Several folks in the video talk about making science fiction a reality, which is a predictable and entirely inevitable expectations fail for any robot. We've learned over and over again that with a robot like this, it's super important to keep expectations in check, which brings me to one of the other comments in the video, about how this type of product didn't exist before. It totally did. Other companies have tried it and not gotten it to work. Maybe not this exact combination of features because every robot is different, but autonomous mobile home robots? Yeah, we've seen them before, and we know how hard it is, and I'm not convinced that Astro is different enough or better enough to somehow succeed where others have not. As much as I respect Henrik Christensen, I do not understand at all when he says that "Astro is a huge step forward." Maybe I'm missing something, but I just don't get it.

The one thing that really did resonate with me in that video was someone saying that Astro is just Amazon's first robot like this, and that there will be more. Perhaps the best way to look at Astro is as a learning experience for Amazon, and fortunately, unlike many other companies who tried building similar robots, Amazon can almost certainly survive Astro not turning out to be a commercial success. I recognize that it seems premature to be so pessimistic about it, but I really feel like we're looking at just another home robot without a really useful and compelling application. And for $1,000 if you're part of the Day 1 Editions program, and $1,500 for everyone else, Astro seems like it's going to be a very hard sell.

Academic researchers concentrate on the scientific and technological feasibility of novel treatments. Investors and commercial partners, however, understand that success depends even more on strategies for regulatory approval, reimbursement, marketing, intellectual property protection and risk management. These considerations are critical for technologically complex and highly invasive treatments that entail substantial costs and risks in small and heterogeneous patient populations. Most implanted neural prosthetic devices for novel applications will be in FDA Device Class III, for which guidance documents have been issued recently. Less invasive devices may be eligible for the recently simplified “de novo” submission routes. We discuss typical timelines and strategies for integrating the regulatory path with approval for reimbursement, securing intellectual property and funding the enterprise, particularly as they might apply to implantable brain-computer interfaces for sensorimotor disabilities that do not yet have a track record of approved products.

While earlier research in human-robot interaction predominantly uses rule-based architectures for natural language interaction, these approaches are not flexible enough for long-term interactions in the real world due to the large variation in user utterances. In contrast, data-driven approaches map user input directly to agent output and hence handle this variation more flexibly, without requiring any set of rules. However, data-driven approaches are generally applied to single dialogue exchanges with a user and do not build up a memory over long-term conversations with different users. Long-term interactions require remembering users and their preferences incrementally and continuously, and recalling previous interactions with users to adapt and personalise the interactions; this is known as the lifelong learning problem. In addition, it is desirable to learn user preferences from a few samples of interactions (i.e., few-shot learning). These are known to be challenging problems in machine learning, while they are trivial for rule-based approaches, creating a trade-off between flexibility and robustness. Correspondingly, in this work, we present the text-based Barista Datasets, generated to evaluate the potential of data-driven approaches in generic and personalised long-term human-robot interactions with simulated real-world problems, such as recognition errors, incorrect recalls, and changes to user preferences. Based on these datasets, we explore the performance and the underlying inaccuracies of state-of-the-art data-driven dialogue models that are strong baselines in other domains of personalisation in single interactions, namely Supervised Embeddings, Sequence-to-Sequence, End-to-End Memory Network, Key-Value Memory Network, and Generative Profile Memory Network. The experiments show that while data-driven approaches are suitable for generic task-oriented dialogue and real-time interactions, no model performs well enough to be deployed for personalised long-term interactions in the real world, because of their inability to learn and use new identities and their poor performance in recalling user-related data.
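For readers unfamiliar with the memory-based models in that list, here is a minimal single-hop sketch of the attention-over-memory read at the core of an End-to-End Memory Network. The embedding dimension, random embeddings, and memory contents are invented for illustration and are not taken from the Barista Datasets.

    # Toy single-hop memory read in the style of an End-to-End Memory Network.
    # Embeddings and memory contents are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 16                                   # embedding dimension (arbitrary)
    memory = rng.normal(size=(5, d))         # 5 stored interaction embeddings
    values = rng.normal(size=(5, d))         # output embedding for each memory
    query = rng.normal(size=d)               # embedding of the current utterance

    scores = memory @ query                  # match query against each memory
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                     # softmax attention weights
    response = probs @ values                # weighted read from memory

    print(probs.round(3))                    # which memories the model attends to

The lifelong-learning difficulty the authors describe shows up here as the question of how new rows get written into memory over time, and whether a useful signal survives in the attention weights when a returning user's utterances only loosely match what was stored.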
