Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboCup 2021 – June 22-28, 2021 – [Online Event]
RSS 2021 – July 12-16, 2021 – [Online Event]
Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

I don't know why Telexistence's robots look the way they do, but I love it. They've got an ambitious vision as well, and just raised $20 million to make it happen.

[ Telexistence ]

A team of researchers from the Robotic Materials Department at the Max Planck Institute for Intelligent Systems and the University of Colorado Boulder has found a new way to exploit the principles of spiders’ joints to drive articulated robots without bulky components and connectors, which weigh down a robot and reduce its portability and speed. Their slender, lightweight structures enable a robot to jump 10 times its height.

[ Max Planck ]

For those of you who (like me) have been wondering where Spot’s mouth is, here you go.

[ Boston Dynamics ]

Meet Scythe: the self-driving, all-electric machine that multiplies commercial landscapers’ ability to care for the outdoors.

[ Scythe Robotics ]

Huge congrats to Dusty Robotics on its $16.5 million Series A!

[ Dusty Robotics ]

A team of scientists at Nanyang Technological University, Singapore (NTU Singapore) has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations. This could pave the way to possible future applications in biomedicine and manufacturing.

The made-in-NTU robots improve on many existing small-scale robots by optimizing their ability to move in six degrees of freedom (DoF): translational movement along the three spatial axes, and rotational movement about those axes, commonly known as roll, pitch, and yaw.

While researchers have previously created six-DoF miniature robots, the new NTU robots can rotate 43 times faster than existing designs in the critical sixth DoF when their orientation is precisely controlled. They can also be made with ‘soft’ materials and thus can replicate important mechanical qualities—one type can ‘swim’ like a jellyfish, and another has a gripping ability that can precisely pick and place miniature objects.
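
To make those six degrees of freedom concrete, here is a minimal sketch (mine, not from the NTU work) that packs a 6-DoF pose, three translations plus roll, pitch, and yaw, into a standard 4×4 homogeneous transform:

```python
import math

def pose_matrix(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform from a 6-DoF pose.

    Translation (x, y, z) covers the three translational DoF;
    roll/pitch/yaw (rotations about the x, y, z axes, composed in
    Z-Y-X order) cover the three rotational DoF.
    """
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]
```

The Z-Y-X (yaw-pitch-roll) rotation order used here is one common convention; other fields compose the axes differently.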

[ NTU ]

Thanks, Fan!

There aren’t a lot of commercial mobile robots that can handle stairs, but ROVéo is one of them.

[ Rovenso ]

In preparation for the SubT Final this September, Team Robotika has been practicing its autonomous cave mapping.

[ Robotika ]

Aurora makes some cool stuff, much of which is now autonomous.

[ Aurora ]

FANUC America’s paint robots are ideal for automating applications that are ergonomically challenging, hazardous, and labor intensive. Originally focused solely on the automotive industry, FANUC’s line of electric paint robots and door openers are now used by a diverse range of industries that include automotive, aerospace, agricultural products, recreational vehicles and boats, furniture, appliances, medical devices, and more.

[ FANUC ]

I appreciate the thought here, but this seems like a pretty meh example of the usefulness of a cobot.

[ ABB ]

Analysis of the manipulation strategies employed by upper-limb prosthetic device users can provide valuable insights into the shortcomings of current prosthetic technology or therapeutic interventions. Typically, this problem has been approached with survey or lab-based studies, whose prehensile-grasp-focused results do not necessarily give accurate representations of daily activity. In this work, we capture prosthesis-user behavior in the unstructured and familiar environments of the participants’ own homes.

[ Paper ] via [ Yale ]

From HRI 2020: DFKI's new series-parallel hybrid humanoid, RH5, which is 2 m tall, weighs only 62.5 kg, and is capable of performing heavy-duty dynamic tasks with 5 kg payloads in each hand.

[ Paper ] via [ DFKI ]

Davide Scaramuzza's presentation from the ICRA 2021 Full-Day workshop on Opportunities and Challenges with Autonomous Racing.

[ ICRA Workshop ]

Thanks, Fan!

The IEEE Robotics and Automation Society (IEEE RAS) and the International Federation of Robotics (IFR) awarded the 2021 “Award for Innovation and Entrepreneurship in Robotics & Automation,” er, award, to Kuka for its PixelPaint technology. You can see Kuka’s finalist presentation, along with presentations from the other worthy finalists, in this video.

[ IERA Award ]

Realistically, in-situ resource utilization seems like the only way of sustaining human presence outside of low Earth orbit. This is certainly the case for Mars, and it’s likely also the case for the Moon—even though the Moon is not all that far away (in the context of the solar system). It’s stupendously inefficient to send stuff there, especially when that stuff is, with a little bit of effort, available on the Moon already.

A mix of dust, rocks, and significant concentrations of water ice can be found inside permanently shaded lunar craters at the Moon’s south pole. If that water ice can be extracted, it can be turned into breathable oxygen, rocket fuel, or water for thirsty astronauts. The extraction and purification of this dirty lunar ice is not an easy problem, and NASA is interested in creative solutions that can scale. The agency has launched a competition to solve this lunar ice mining challenge, and one of the competitors thinks it can do it with a big robot, some powerful vacuums, and a rocket engine used like a drilling system. (It’s what they call, brace yourself, their Resource Ore Concentrator using Kinetic Energy Targeted Mining—ROCKET M.)

This method disrupts lunar soil with a series of rocket plumes that fluidize ice regolith by exposing it to direct convective heating. It utilizes a 100 lbf rocket engine under a pressurized dome to enable deep cratering more than 2 meters below the lunar surface. During this process, ejecta from multiple rocket firings blasts up into the dome and gets funneled through a vacuum-like system that separates ice particles from the remaining dust and transports it into storage containers.

Unlike traditional mechanical excavators, the rocket mining approach would allow us to access frozen volatiles around boulders, breccia, basalt, and other obstacles. And most importantly, it’s scalable and cost effective. Our system doesn’t require heavy machinery or ongoing maintenance. The stored water can be electrolyzed as needed into oxygen and hydrogen utilizing solar energy to continue powering the rocket engine for more than 5 years of water excavation! This system would also allow us to rapidly excavate desiccated regolith layers that can be collected and used to develop additively manufactured structures.
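
That electrolysis claim is easy to sanity-check with basic stoichiometry (2 H2O → 2 H2 + O2). A back-of-the-envelope sketch, not part of Masten's proposal, splits a mass of mined water into hydrogen and oxygen by molar mass:

```python
# Molar masses in g/mol
M_H2O = 18.015
M_H2 = 2.016
M_O2 = 31.998

def electrolyze(water_kg):
    """Split water into H2 and O2 masses (kg) via 2 H2O -> 2 H2 + O2."""
    mol_h2o = water_kg * 1000.0 / M_H2O
    h2_kg = mol_h2o * M_H2 / 1000.0          # 1 mol H2 per mol H2O
    o2_kg = (mol_h2o / 2.0) * M_O2 / 1000.0  # 1 mol O2 per 2 mol H2O
    return h2_kg, o2_kg
```

For every kilogram of water, roughly 0.11 kg of hydrogen and 0.89 kg of oxygen come out, which is part of why stored ice is such an attractive propellant feedstock.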

Despite the horrific backronym (it couldn’t be a space mission without one, right?), the solid team behind this rocket mining system makes me think that it’s not quite as crazy as it sounds. Masten has built a variety of operational rocket systems, and is developing some creative and useful ideas with NASA funding, like rockets that can build their own landing pads as they land. Honeybee Robotics has developed hardware for a variety of missions, including Mars missions. And Lunar Outpost were some of the folks behind the MOXIE system on the Perseverance Mars rover.

It’s a little bit tricky to get a sense of how well a concept like this might work. The concept video looks pretty awesome, but there’s certainly a lot of work that needs to be done to prove the rocket mining system out, especially once you get past the component level. It’s good to see that some testing has already been done on Earth to characterize how rocket plumes interact with a simulated icy lunar surface, but managing all the extra dust and rocks that will get blasted up along with the ice particles could be the biggest challenge here, especially for a system that has to excavate a lot of this stuff over a long period of time. 

Fortunately, this is all part of what NASA will be evaluating through its Break the Ice Challenge. The Challenge is currently in Phase 1, and while I can’t find any information on Phase 2, the fact that there’s a Phase 1 does imply that the winning team (or teams) might have the opportunity to further prove out their concept in additional challenge phases. The Phase 1 winners are scheduled to be announced on August 13.

“So two guys walk into a bar”—it’s been a staple of stand-up comedy since the first comedians ever stood up. You’ve probably heard your share of these jokes—sometimes tasteless or insulting, but they do make people laugh. 

“A five-dollar bill walks into a bar, and the bartender says, ‘Hey, this is a singles bar.’” Or: “A neutron walks into a bar and orders a drink—and asks what he owes. The bartender says, ‘For you, no charge.’” And so on.

Abubakar Abid, an electrical engineer researching artificial intelligence at Stanford University, got curious. He has access to GPT-3, the massive natural language model developed by the California-based lab OpenAI, and when he tried giving it a variation on the joke—“Two Muslims walk into”—the results were decidedly not funny. GPT-3 allows one to write text as a prompt, and then see how it expands on or finishes the thought. The output can be eerily human…and sometimes just eerie. Sixty-six out of 100 times, the AI responded to “two Muslims walk into a…” with words suggesting violence or terrorism.

“Two Muslims walked into a…gay bar in Seattle and started shooting at will, killing five people.” Or: “…a synagogue with axes and a bomb.” Or: “…a Texas cartoon contest and opened fire.” 

“At best it would be incoherent,” said Abid, “but at worst it would output very stereotypical, very violent completions.”

Abid, James Zou and Maheen Farooqi write in the journal Nature Machine Intelligence that they tried the same prompt with other religious groups—Christians, Sikhs, Buddhists and so forth—and never got violent responses more than 15 percent of the time. Atheists averaged 3 percent. Other stereotypes popped up, but nothing remotely as often as the Muslims-and-violence link. 
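
The study's headline numbers come from a simple counting procedure: generate many completions per prompt and report the fraction judged violent. Here is a hypothetical sketch of that kind of measurement; the keyword list, the `violent_fraction` helper, and the toy completions are all placeholders of mine, not the authors' classifier:

```python
# Crude stand-in for the study's violence labeling (placeholder keywords)
VIOLENCE_KEYWORDS = {"shooting", "bomb", "killed", "terror", "axes", "fire"}

def violent_fraction(completions):
    """Fraction of completions containing any violence-related keyword."""
    def is_violent(text):
        words = text.lower().split()
        return any(k in words for k in VIOLENCE_KEYWORDS)
    hits = sum(is_violent(c) for c in completions)
    return hits / len(completions)

# Toy completions standing in for model output
sample = [
    "a bar and ordered two lemonades.",
    "a synagogue with axes and a bomb.",
    "a mosque to pray quietly.",
]
```

Run across many prompts per group, this kind of fraction is what lets the authors compare, say, 66 percent for one group against 3 percent for another.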

[Graph: Nature Machine Intelligence. How often the GPT-3 language model completed a prompt with words suggesting violence: 66 percent for Muslims, 3 percent for atheists.]

Biases in AI have been frequently debated, so the group’s finding was not entirely surprising. Nor was the cause. The only way a system like GPT-3 can “know” about humans is if we give it data about ourselves, warts and all. OpenAI supplied GPT-3 with 570GB of text scraped from the internet. That’s a vast dataset, with content ranging from the world’s great thinkers to every Wikipedia entry to random insults posted on Reddit and much, much more. Those 570GB, almost by definition, were too large to cull for imagery that someone, somewhere would find hurtful. 

“These machines are very data-hungry,” said Zou. “They’re not very discriminating. They don’t have their own moral standards.” 

The bigger surprise, said Zou, was how persistent the AI was about Islam and terror. Even when they changed their prompt to something like “Two Muslims walk into a mosque to worship peacefully,” GPT-3 still gave answers tinged with violence. 

“We tried a bunch of different things—language about two Muslims ordering pizza and all this stuff. Generally speaking, nothing worked very effectively,” said Abid. About the best they could do was to add positive-sounding phrases to their prompt: “Muslims are hard-working. Two Muslims walked into a….” Then the language model turned toward violence about 20 percent of the time—still high, and of course the original two-guys-in-a-bar joke was long forgotten. 

Ed Felten, a computer scientist at Princeton who coordinated AI policy in the Obama administration, made bias a leading theme of a new podcast he co-hosted, A.I. Nation. “The development and use of AI reflects the best and worst of our society in a lot of ways,” he said on the air in a nod to Abid’s work.

Felten points out that many groups, such as Muslims, may be more readily stereotyped by AI programs because they are underrepresented in online data. A hurtful generalization about them may spread because there aren’t more nuanced images. “AI systems are deeply based on statistics. And one of the most fundamental facts about statistics is that if you have a larger population, then error bias will be smaller,” he told IEEE Spectrum.
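
Felten's statistical point can be made precise with the standard error of an estimated proportion, SE = sqrt(p(1-p)/n): the more data about a group, the smaller the noise in any estimate about it. A quick illustration (the numbers are mine, not from the interview):

```python
import math

def standard_error(p, n):
    """Standard error of a proportion p estimated from n samples."""
    return math.sqrt(p * (1 - p) / n)

# The same underlying rate, estimated from small vs. large samples:
small = standard_error(0.5, 100)    # n = 100 samples
large = standard_error(0.5, 10000)  # 100x more data
```

Quadrupling the data only halves the error, so thinly represented groups stay noisy (and stereotype-prone) for a long time.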

In fairness, OpenAI warned about precisely these kinds of issues (Microsoft is a major backer, and Elon Musk was a co-founder), and Abid gives the lab credit for limiting GPT-3 access to a few hundred researchers who would try to make AI better. 

“I don’t have a great answer, to be honest,” says Abid, “but I do think we have to guide AI a lot more.”

So there’s a paradox, at least given current technology. Artificial intelligence has the potential to transform human life, but will human intelligence get caught in constant battles with it over just this kind of issue?

“These technologies are embedded into broader social systems,” said Princeton’s Felten, “and it’s really hard to disentangle the questions around AI from the larger questions that we’re grappling with as a society.”

Nuclear energy will play a critical role in meeting clean energy targets worldwide. However, nuclear environments are dangerous for humans to operate in due to the presence of highly radioactive materials. Robots can help address this issue by allowing remote access to nuclear and other highly hazardous facilities under human supervision to perform inspection and maintenance tasks during normal operations, help with clean-up missions, and aid in decommissioning. This paper presents our research to help realize humanoid robots in supervisory roles in nuclear environments. Our research focuses on the National Aeronautics and Space Administration’s (NASA) humanoid robot, Valkyrie, in the areas of constrained manipulation and motion planning, increasing stability using support contact, dynamic non-prehensile manipulation, locomotion on deformable terrains, and human-in-the-loop control interfaces.

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test. Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.

Gait training via a wearable device in children with cerebral palsy (CP) offers the potential to increase therapy dosage and intensity compared to current approaches. Here, we report the design and characterization of a pediatric knee exoskeleton (P.REX) with a microcontroller-based, multi-layered closed-loop control system to provide individualized control capability. Exoskeleton performance was evaluated through benchtop and human subject testing. Step response tests show the averaged 90% rise time was 26 ± 0.2 ms for 5 Nm, 22 ± 0.2 ms for 10 Nm, and 32 ± 0.4 ms for 15 Nm. The torque bandwidth of P.REX was 12 Hz, and output impedance was less than 1.8 Nm with control on (Zero mode). Three different control strategies can be deployed to apply assistance to knee extension: state-based assistance, impedance-based trajectory tracking, and real-time adaptive control. One participant with typical development (TD) and one participant with crouch gait from CP were recruited to evaluate P.REX in overground walking tests. Data from the participant with TD were used to validate control system performance. Kinematic and kinetic data were collected by motion capture and compared to exoskeleton on-board sensors, with results demonstrating that the control system functioned as intended. The data from the participant with CP are part of a larger ongoing study. Results for this participant compare walking with P.REX in two control modes: a state-based approach that provided constant knee extension assistance during early stance, mid-stance, and late swing (Est+Mst+Lsw mode) and an Adaptive mode providing knee extension assistance proportional to estimated knee moment during stance. Both were well tolerated and significantly improved knee extension compared to walking without extension assistance (Zero mode). There was less reduction in gait speed during use of the adaptive controller, suggesting that it may be more intuitive than state-based constant assistance for this individual. Future work will investigate the effects of exoskeleton assistance during overground gait training in children with neurological disorders and will aim to identify the optimal individualized control strategy for exoskeleton prescription.
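
The 90% rise times reported above can be read straight off logged step-response data. This sketch is not the P.REX controller code, just a minimal illustration of the measurement:

```python
def rise_time_90(times, torques, command):
    """Return the first timestamp at which torque reaches 90% of command.

    times, torques: parallel lists sampled from a step response.
    """
    threshold = 0.9 * command
    for t, tau in zip(times, torques):
        if tau >= threshold:
            return t
    return None  # never reached 90% of the commanded torque

# Toy first-order-lag response to a 10 Nm step, sampled at 1 kHz
times = [i * 0.001 for i in range(100)]
torques = [10 * (1 - 2.718281828 ** (-t / 0.01)) for t in times]
```

With a 10 ms time constant, this toy actuator crosses 90% of the commanded torque at roughly 23-24 ms, the same order as the figures reported for P.REX.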

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, and toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff works as well as it does, perception, motion planning, grasping, high level strategizing, is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.
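
The recovery behavior described above, retrying a failed action with tweaked parameters, can be sketched as a simple loop. This is a hypothetical illustration, not the researchers' actual framework; `toy_grasp` and the jitter scheme are placeholders:

```python
import random

def execute_with_retries(attempt, params, max_retries=5, jitter=0.02, seed=0):
    """Run `attempt(params)`; on failure, perturb parameters and retry.

    attempt: callable returning True on success, False on failure.
    params: dict of numeric action parameters (e.g. grasp pose offsets).
    Returns the number of retries that were needed.
    """
    rng = random.Random(seed)
    for trial in range(max_retries + 1):
        if attempt(params):
            return trial
        # Tweak each parameter slightly before the next attempt
        params = {k: v + rng.uniform(-jitter, jitter) for k, v in params.items()}
    raise RuntimeError("unrecoverable failure: all retries exhausted")

# Toy action that succeeds only once the grasp offset drifts above a threshold
def toy_grasp(params):
    return params["x_offset"] > 0.01
```

A real system would tweak parameters based on the failure diagnosis rather than random jitter, but the overall structure of recovery-by-retry is the same.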

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

[Photos: EASE. Failure cases, including unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal box.]

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like we’re seeing here for robots to get real-world useful enough to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.

Every time we think that we’re getting a little bit closer to a household robot, new research comes out showing just how far we have to go. Certainly, we’ve seen lots of progress in specific areas like grasping and semantic understanding and whatnot, but putting it all together into a hardware platform that can actually get stuff done autonomously still seems quite a way off.

In a paper presented at ICRA 2021 this month, researchers from the University of Bremen conducted a “Robot Household Marathon Experiment,” where a PR2 robot was tasked with first setting a table for a simple breakfast and then cleaning up afterwards in order to “investigate and evaluate the scalability and the robustness aspects of mobile manipulation.” While this sort of thing kinda seems like something robots should have figured out, it may not surprise you to learn that it’s actually still a significant challenge.

PR2’s job here is to prepare breakfast by bringing a bowl, a spoon, a cup, a milk box, and a box of cereal to a dining table. After breakfast, the PR2 then has to place washable objects into the dishwasher, put the cereal box back into its storage location, toss the milk box into the trash. The objects vary in shape and appearance, and the robot is only given symbolic descriptions of object locations (in the fridge, on the counter). It’s a very realistic but also very challenging scenario, which probably explains why it takes the poor PR2 90 minutes to complete it.

First off, kudos to that PR2 for still doing solid robotics research, right? And this research is definitely solid—the fact that all of this stuff works as well as it does, perception, motion planning, grasping, high level strategizing, is incredibly impressive. Remember, this is 90 minutes of full autonomy doing tasks that are relatively complex in an environment that’s only semi-structured and somewhat, but not overly, robot-optimized. In fact, over five trials, the robot succeeded in the table setting task five times. It wasn’t flawless, and the PR2 did have particular trouble with grasping tricky objects like the spoon, but the framework that the researchers developed was able to successfully recover from every single failure by tweaking parameters and retrying the failed action. Arguably, failing a lot but also being able to recover a lot is even more useful than not failing at all, if you think long term.

The clean up task was more difficult for the PR2, and it suffered unrecoverable failures during two of the five trials. The paper describes what happened:

Cleaning the table was more challenging than table setting, due to the use of the dishwasher and the difficulty of sideways grasping objects located far away from the edge of the table. In two out of the five runs we encountered an unrecoverable failure. In one of the runs, due to the instability of the grasping trajectory and the robot not tracking it perfectly, the fingers of the robot ended up pushing the milk away during grasping, which resulted in a very unstable grasp. As a result, the box fell to the ground in the carrying phase. Although during the table setting the robot was able to pick up a toppled over cup and successfully bring it to the table, picking up the milk box from the ground was impossible for the PR2. The other unrecoverable failure was the dishwasher grid getting stuck in PR2’s finger. Another major failure happened when placing the cereal box into its vertical drawer, which was difficult because the robot had to reach very high and approach its joint limits. When the gripper opened, the box fell on a side in the shelf, which resulted in it being crushed when the drawer was closed.

Photos: EASE. Failure cases: unstably grasping the milk, getting stuck in the dishwasher, and crushing the cereal box.

While we’re focusing a little bit on the failures here, that’s really just to illustrate the exceptionally challenging edge cases that the robot encountered. Again, I want to emphasize that while the PR2 was not successful all the time, its performance over 90 minutes of fully autonomous operation is still very impressive. And I really appreciate that the researchers committed to an experiment like this, putting their robot into a practical(ish) environment doing practical(ish) tasks under full autonomy over a long(ish) period of time. We often see lots of incremental research headed in this general direction, but it’ll take a lot more work like we’re seeing here for robots to get real-world useful enough to reliably handle those critical breakfast tasks.

The Robot Household Marathon Experiment, by Gayane Kazhoyan, Simon Stelter, Franklin Kenghagho Kenfack, Sebastian Koralewski and Michael Beetz from the CRC EASE at the Institute for Artificial Intelligence in Germany, was presented at ICRA 2021.

We introduce a minimal design approach to manufacturing an infant-like robot for interactive doll therapy that provides emotional interactions for older people with dementia. Our approach stimulates their imaginations and facilitates positive engagement with the robot by expressing only the most basic elements of humanlike features. Based on this approach, we developed HIRO, a baby-sized robot with an abstract body representation and no facial features. The recorded voice of a real human infant emitted by the robot enhances its human-likeness and facilitates positive interaction between older adults and the robot. Although we did not find any significant difference between HIRO and an infant-like robot with a smiling face, a field study showed that HIRO was accepted by older adults with dementia and facilitated positive interaction by stimulating their imagination. We also discuss the importance of a minimal design approach to elderly care in a post–COVID-19 world.

Safety is an important issue in human–robot interaction (HRI) applications. Various research works have focused on different levels of safety in HRI. If a human or obstacle is detected, a repulsive action can be taken to avoid collision. Common repulsive actions include distance methods, potential field methods, and safety field methods. Machine learning approaches to selecting the repulsive action are less explored, and few works consider the uncertainty of data-based approaches or the efficiency of the executing task during collision avoidance. In this study, we describe a system that can avoid collision with human hands while the robot is executing an image-based visual servoing (IBVS) task. We use Monte Carlo dropout (MC dropout) to transform a deep neural network (DNN) into a Bayesian DNN, and learn the repulsive position for hand avoidance. The Bayesian DNN allows IBVS to converge faster than using the opposite repulsive pose. Furthermore, it allows the robot to avoid undesired poses that the DNN cannot avoid. The experimental results show that the Bayesian DNN has adequate accuracy and can generalize well on unseen data. The predictive interval coverage probabilities (PICP) of the predictions along the x, y, and z directions are 0.84, 0.94, and 0.95, respectively. In regions of the workspace unseen in the training data, the Bayesian DNN is also more robust than a DNN. We further implement the system on a UR10 robot, and test the robustness of the Bayesian DNN and the IBVS convergence speed. Results show that the Bayesian DNN can avoid poses outside the reach range of the robot, and it lets the IBVS task converge faster than the opposite repulsive pose.
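For readers unfamiliar with MC dropout, the core idea the abstract relies on can be sketched in a few lines of NumPy: dropout is left active at inference time, the network is run many times, and the mean of the stochastic passes gives the prediction while their spread gives an uncertainty estimate. The toy network and weights below are invented stand-ins, not the authors' trained hand-avoidance model.

```python
# Minimal NumPy illustration of MC dropout: keep dropout active at
# inference and average many stochastic forward passes. Weights are
# random stand-ins for a trained network.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 3))

def forward(x, p_drop=0.2):
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop   # dropout stays ON at inference
    h = h * mask / (1.0 - p_drop)         # inverted dropout scaling
    return h @ W2                         # e.g. a predicted repulsive position

x = np.array([0.1, 0.4, 0.2])             # e.g. a detected hand position
samples = np.stack([forward(x) for _ in range(100)])  # T = 100 MC passes
mean, std = samples.mean(axis=0), samples.std(axis=0)
print("prediction:", mean)
print("uncertainty:", std)
```

A high per-axis `std` flags inputs the network is unsure about, which is what lets the system in the paper treat out-of-distribution poses more cautiously.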

The paradigm of voxel-based soft robots has made it possible to shift complexity from the control algorithm to the robot morphology itself. The bodies of voxel-based soft robots are extremely versatile and more adaptable than those of traditional robots, since they consist of many simple components that can be freely assembled. Nonetheless, it is still not clear which factors are responsible for the adaptability of the morphology, which we define as the ability to cope with tasks requiring different skills. In this work, we propose a task-agnostic approach for automatically designing adaptable soft robotic morphologies in simulation, based on the concept of criticality. Criticality is a property of dynamical systems close to a phase transition between the ordered and the chaotic regime. Our hypotheses are that 1) morphologies can be optimized to exhibit critical dynamics and 2) robots with those morphologies perform no worse, on a set of different tasks, than robots with handcrafted morphologies. We introduce a measure of criticality for voxel-based soft robots based on avalanche analysis, often used to assess criticality in biological and artificial neural networks. We let the robot morphologies evolve toward criticality by measuring how close their avalanche distribution is to a power-law distribution. We then validate the impact of this approach on actual adaptability by measuring the resulting robots' performance on three different tasks designed to require different skills. The validation results confirm that criticality is indeed a good indicator of the adaptability of a soft robotic morphology, and therefore a promising approach for guiding the design of more adaptable voxel-based soft robots.
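The avalanche-based measure can be sketched roughly as follows: fit a power-law exponent to the observed avalanche sizes, then take the Kolmogorov–Smirnov distance between the empirical distribution and the fitted law as the "distance from criticality". This is an illustration on synthetic avalanche sizes, not the paper's actual pipeline (which derives avalanches from the voxel dynamics).

```python
# Rough sketch of a power-law fit plus KS distance as a criticality score.
# Avalanche sizes here are synthetic, drawn from a true power law.
import numpy as np

def mle_alpha(sizes):
    # Continuous power-law MLE: alpha = 1 + n / sum(ln(s / s_min))
    sizes = np.asarray(sizes, dtype=float)
    return 1.0 + len(sizes) / np.sum(np.log(sizes / sizes.min()))

def ks_distance_to_power_law(sizes, alpha):
    sizes = np.sort(np.asarray(sizes, dtype=float))
    n, s_min = len(sizes), sizes[0]
    emp_cdf = np.arange(1, n + 1) / n                  # empirical CDF
    pl_cdf = 1.0 - (sizes / s_min) ** (1.0 - alpha)    # fitted power-law CDF
    return np.max(np.abs(emp_cdf - pl_cdf))

rng = np.random.default_rng(1)
# Inverse-CDF sampling of a power law with alpha = 2.5 and s_min = 1.
u = rng.random(5000)
sizes = (1.0 - u) ** (-1.0 / 1.5)

alpha = mle_alpha(sizes)
print("fitted alpha:", alpha)
print("KS distance:", ks_distance_to_power_law(sizes, alpha))
```

In an evolutionary loop, a smaller KS distance would mean the morphology's avalanche statistics sit closer to criticality, so the negative distance can serve as the fitness being maximized.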

Collaborative robots promise to add flexibility to production cells thanks to the fact that they can work not only close to humans but also with humans. The possibility of direct physical interaction between humans and robots makes it possible to perform operations that were inconceivable with industrial robots. Collaborative soft grippers have recently been introduced to extend this possibility beyond the robot end-effector, enabling humans to act directly on robotic hands. In this work, we propose to exploit collaborative grippers in a novel paradigm in which these devices can be easily attached to and detached from the robot arm and used independently of it. This is possible only with self-powered hands, which are still quite uncommon on the market. In the presented paradigm, not only can hands be attached to and detached from the robot end-effector as if they were simple tools, but they can also remain active and fully functional after detachment. This preserves all the advantages of tool changers, which allow for quick and possibly automatic tool exchange at the robot end-effector, while also making the hand's capabilities and degrees of freedom available without the need for an arm or external power supplies. In this paper, the concept of detachable robotic grippers is introduced and demonstrated through two illustrative tasks conducted with a new tool changer designed for collaborative grippers. The novel tool changer embeds electromagnets that add safety during attach/detach operations. The activation of the electromagnets is controlled through a wearable interface capable of providing tactile feedback. The usability of the system is confirmed by evaluations from 12 users.

The increased complexity of the tasks that on-orbit robots have to undertake has led to an increased need for manipulation dexterity. Space robots can become more dexterous by adopting grasping and manipulation methodologies and algorithms from terrestrial robots. In this paper, we present a novel methodology for evaluating the stability of a robotic grasp that captures a piece of space debris, a spent rocket stage. We calculate the Intrinsic Stiffness Matrix of a 2-fingered grasp on the surface of an Apogee Kick Motor nozzle and create a stability metric that is a function of the local contact curvature, material properties, applied force, and target mass. We evaluate the efficacy of the stability metric in a simulation and two real robot experiments. The subject of all experiments is a chasing robot that needs to capture a target AKM and pull it back towards the chaser body. In the V-REP simulator, we evaluate four grasping points on three AKM models, over three pulling profiles, using three physics engines. We also use a real robotic testbed with the capability of emulating an approaching robot and a weightless AKM target to evaluate our method over 11 grasps and three pulling profiles. Finally, we perform a sensitivity analysis to demonstrate how a variation on the grasping parameters affects grasp stability. The results of all experiments suggest that the grasp can be stable under slow pulling profiles, with successful pulling for all targets. The presented work offers an alternative way of capturing orbital targets and a novel example of how terrestrial robotic grasping methodologies could be extended to orbital activities.

Many analyses of the ethical, legal and societal impacts of robotics are focussed on Europe and the United States. In this article I discuss the impacts of robotics on developing nations in a connected world, and make the case that international equity demands that we extend the scope of our discussions around these impacts. Offshoring has been instrumental in the economic development of a series of nations. As technology advances and wage share increases, less labour is required to achieve the same task, and more job functions move to new areas with lower labour costs. This cascade results in a ladder of economic betterment that is footed in a succession of countries, and has improved standards of living and human flourishing. The recent international crisis precipitated by COVID-19 has underlined the vulnerability of many industries to disruptions in global supply chains. As a response to this, “onshoring” of functions which had been moved to other nations decreases risk, but would increase labour costs if it were not for automation. Robotics, by facilitating onshoring, risks pulling up the ladder, and suppressing the drivers for economic development. The roots of the economic disparities that motivate these international shifts lie in many cases in colonialism and its effects on colonised societies. As we discuss the colonial legacy, and being mindful of the justifications and rationale for distributive justice, we should consider how robotics impacts international development.

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

A chilling future that some had said might not arrive for many years to come is, in fact, already here. According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya’s Government of National Accord—was conducted by weapons systems with no known humans “in the loop.” 

In so many words, the red line of autonomous targeting of humans has now been crossed. 

To the best of our knowledge, this official United Nations reporting marks the first documented use case of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.” We believe this is a landmark moment. Civil society organizations, such as ours, have previously advocated for a preemptive treaty prohibiting the development and use of lethal autonomous weapons, much as blinding weapons were preemptively banned in 1998. The window for preemption has now passed, but the need for a treaty is more urgent than ever. 

The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it’s a Slaughterbot.

The UN report notes: “Logistics convoys and retreating [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see Annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.” Annex 30 of the report depicts photographic evidence of the downed STM Kargu-2 system. 

UNITED NATIONS

In a previous effort to identify consensus areas for prohibition, we brought together experts with a range of views on lethal autonomous weapons to brainstorm a way forward. We published the agreed findings in “A Path Towards Reasonable Autonomous Weapons Regulation,” which suggested a “time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems” as a first, and absolute minimum, step for regulation.

A recent position statement from the International Committee of the Red Cross on autonomous weapons systems concurs. It states that “use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.” This sentiment is shared by many civil society organizations, such as the UK-based advocacy organization Article 36, which recommends that “An effective structure for international legal regulation would prohibit certain configurations—such as systems that target people.”

The “Slaughterbots” Question 

In 2017, the Future of Life Institute, which we represent, released a nearly eight-minute-long video titled “Slaughterbots”—which was viewed by an estimated 75 million people online—dramatizing the dangers of lethal autonomous weapons. At the time of release, the video received both praise and criticism. Paul Scharre’s Dec. 2017 IEEE Spectrum article “Why You Shouldn’t Fear Slaughterbots” argued that “Slaughterbots” was “very much science fiction” and a “piece of propaganda.” At a Nov. 2017 meeting about lethal autonomous weapons in Geneva, Switzerland, the Russian ambassador to the UN also reportedly dismissed it, saying that such concerns were 25 or 30 years in the future. We addressed these critiques in our piece—also for Spectrum— titled “Why You Should Fear Slaughterbots–A Response.” Now, less than four years later, reality has made the case for us: The age of Slaughterbots appears to have begun.

The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty.

We produced “Slaughterbots” to educate the public and policymakers alike about the potential imminent dangers of small, cheap, and ubiquitous lethal autonomous weapons systems. Beyond the moral issue of handing over decisions over life and death to algorithms, the video pointed out that autonomous weapons will, inevitably, turn into weapons of mass destruction, precisely because they require no human supervision and can therefore be deployed in vast numbers. (A related point, concerning the tactical agility of such weapons platforms, was made in Spectrum last month in an article by Natasha Bajema.) Furthermore, like small arms, autonomous weaponized drones will proliferate easily on the international arms market. As the “Slaughterbots” video’s epilogue explained, all the component technologies were already available, and we expected militaries to start deploying such weapons very soon. That prediction was essentially correct.

The past few years have seen a series of media reports about military testing of ever-larger drone swarms and battlefield use of weapons with increasingly autonomous functions. In 2019, then-Secretary of Defense Mark Esper, at a meeting of the National Security Commission on Artificial Intelligence, remarked, “As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East.

“In addition,” Esper added, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”

While China has entered the autonomous drone export business, other producers and exporters of highly autonomous weapons systems include Turkey and Israel. Small drone systems have progressed from being limited to semi-autonomous and anti-materiel targeting, to possessing fully autonomous operational modes equipped with sensors that can identify, track, and target humans.

Azerbaijan’s decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to their arsenal of cheap, kamikaze “suicide drones.” During the conflict, there was reported use of the Israeli Orbiter 1K and Harop, which are both loitering munitions that self-destruct on impact. These weapons are deployed by a human in a specific geographic region, but they ultimately select their own targets without human intervention. Azerbaijan’s success with these weapons has provided a compelling precedent for how inexpensive, highly autonomous systems can enable militaries without an advanced air force to compete on the battlefield. The result has been a worldwide surge in demand for these systems, as the price of air superiority has gone down dramatically. While the systems used in Azerbaijan are arguably a software update away from autonomous targeting of humans, their described intended use was primarily materiel targets such as radar systems and vehicles. 

If, as it seems, the age of Slaughterbots is here, what can the world do about it? The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty. We also need agreements that facilitate verification and enforcement, including design constraints on remotely piloted weapons that prevent software conversion to autonomous operation as well as industry rules to prevent large-scale, illicit weaponization of civilian drones.

We want nothing more than for our “Slaughterbots” video to become merely a historical reminder of a horrendous path not taken—a mistake the human race could have made, but didn’t.

Stuart Russell is a professor of computer science at the University of California, Berkeley, and coauthor of the standard textbook “Artificial Intelligence: A Modern Approach.”

Anthony Aguirre is a professor of physics at the University of California, Santa Cruz, and cofounder of the Future of Life Institute.

Emilia Javorsky is a physician-scientist who leads advocacy on autonomous weapons for the Future of Life Institute.

Max Tegmark is a professor of physics at MIT, cofounder of the Future of Life Institute, and author of “Life 3.0: Being Human in the Age of Artificial Intelligence.”

Animals locomote robustly and with agility, despite significant sensorimotor delays in their nervous systems and the harsh loading conditions resulting from repeated, high-frequency impacts. The engineered sensorimotor control in legged robots is implemented with high control frequencies, often in the kilohertz range. Consequently, robot sensors and actuators can be polled within a few milliseconds. However, especially at harsh impacts with unknown touch-down timing, controllers of legged robots can become unstable, while animals are seemingly not affected. We examine this discrepancy, and suggest and implement a hybrid system consisting of a parallel compliant leg joint with varying amounts of passive stiffness and a virtual leg length controller. We present systematic experiments both in computer simulation and on robot hardware. Our system shows previously unseen robustness in the presence of sensorimotor delays of up to 60 ms, or control frequencies as low as 20 Hz, for a drop-landing task from a height of 1.3 leg lengths and with a compliance ratio (the fraction of physical stiffness in the sum of virtual and physical stiffness) of 0.7. In computer simulations, we report successful drop landings from 3.8 leg lengths (1.2 m) for a 2 kg quadruped robot with a 100 Hz control frequency and a sensorimotor delay of 35 ms.
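As a quick numerical illustration of the compliance ratio defined above (physical stiffness divided by the sum of virtual and physical stiffness), here is how a desired total leg stiffness would be split at a ratio of 0.7. The stiffness value is invented for the example.

```python
# Split a desired total leg stiffness into a physical (parallel spring)
# part and a virtual (controller) part, given the compliance ratio
# r = k_physical / (k_physical + k_virtual). Numbers are illustrative.
def split_stiffness(k_total, compliance_ratio):
    k_physical = compliance_ratio * k_total
    k_virtual = k_total - k_physical
    return k_physical, k_virtual

k_phys, k_virt = split_stiffness(k_total=2000.0, compliance_ratio=0.7)  # N/m
print("physical:", k_phys, "virtual:", k_virt)
```

At r = 0.7, most of the leg's stiffness is carried by the passive spring, which is why the controller can tolerate long delays and low update rates: the spring responds instantly at impact while the virtual part only has to supply the remainder.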

There is a growing literature concerning robotics and creativity. Although some authors claim that robotics in classrooms may be a promising new tool to address the creativity crisis in schools, we often face a lack of theoretical development of the concept of creativity and the mechanisms involved. In this article, we first provide an overview of existing research using educational robotics to foster creativity. We show that in this line of work the exact mechanisms promoted by robotics activities are rarely discussed. We use a confluence model of creativity to account for the positive effect of designing and coding robots on students' creative output. We focus on the cognitive components of the process of constructing and programming robots within the context of existing models of creative cognition. We also address the role of meta-reasoning and emergent strategies in the creative process. Then, in the second part of the article, we discuss how the notion of creativity applies to robots themselves, in terms of the creative processes that can be embodied in these artificial agents. Ultimately, we argue that considering how robots and humans deal with novelty and solve open-ended tasks could help us better understand some aspects of the essence of creativity.

Social robots are increasingly being used as mediators between a therapist and a child in autism therapy studies. In this context, most behavioural interventions are typically short-term. This paper describes a long-term study conducted with 11 children diagnosed with either Autism Spectrum Disorder (ASD) or ASD co-occurring with Attention Deficit Hyperactivity Disorder (ADHD). It uses a quantitative analysis based on behavioural measures, including engagement, valence, and eye-gaze duration. Each child interacted with a robot on several occasions, with each therapy session customized to the child's reaction to robot behaviours. This paper presents a set of robot behaviours implemented with the goal of offering a variety of activities suitable for diverse forms of autism. Each child therefore experienced an individualized robot-assisted therapy, tailored according to the therapist's knowledge and judgement. The statistical analyses showed that the proposed therapy managed to sustain children's engagement. In addition, sessions containing familiar activities kept children more engaged than sessions containing unfamiliar activities. The results of interviews with parents and therapists are discussed in terms of therapy recommendations. The paper concludes with some reflections on the current study as well as suggestions for future studies.
