Feed aggregator

Untethered robots only a few microns in size have attracted increasing attention for their potential to transform many aspects of manufacturing, medicine, health care, and bioengineering. As untethered robots shrink to the microscale, previously impenetrable environments become accessible to high-resolution in situ and in vivo manipulation. Nevertheless, independently navigating several robots at the microscale is challenging because, unlike other multi-agent systems, size constraints prevent them from carrying onboard transducers, batteries, and controllers. Various unconventional propulsion mechanisms have therefore been explored to power motion at these scales, and combinations of actuation methods have been studied extensively to address different issues. In this survey, we present a thorough review of recent developments in actuating and controlling multistimuli-enabled microrobots, and we discuss the open challenges and evolving concepts associated with each technique.



Collisions with birds are a serious problem for commercial aircraft, costing the industry billions of dollars and killing thousands of animals every year. New research shows that a robotic imitation of a peregrine falcon could be an effective way to keep them out of flight paths.

Worldwide, so-called “birdstrikes” are estimated to cost the civil aviation industry almost $1.4 billion annually. Airports devote significant resources to the problem, which for many means hiring full-time bird control experts. Nearby habitats are often deliberately made unattractive to birds, but airports also rely on a variety of deterrents designed to scare them away, such as loud pyrotechnics or speakers that play the distress calls of common species.

These approaches tend to get less effective over time though, says Charlotte Hemelrijk—a professor on the faculty of science and engineering at the University of Groningen—as the birds get desensitized by repeated exposure. Live hawks or blinding lasers are also sometimes used to disperse flocks, she says, but this is controversial as it can harm the animals, and keeping and training falcons is not cheap.

“The birds don't distinguish [RobotFalcon] from a real falcon, it seems.”
—Charlotte Hemelrijk, University of Groningen

In an effort to find a more practical and lasting solution, Hemelrijk and colleagues designed a robotic peregrine falcon that can be used to chase flocks away from airports. The device is the same size and shape as a real hawk, and its fiberglass and carbon fiber body has been painted to mimic the markings of its real-life counterpart.

Rather than flapping like a bird, the “RobotFalcon” relies on two small battery-powered propellers on its wings, which allow it to travel at around 30 miles per hour for up to 15 minutes at a time. A human operator controls the machine remotely from a hawk’s-eye perspective via a camera perched above the robot’s head.

To see how effective the RobotFalcon was at scaring away birds, the researchers tested it against a conventional quadcopter drone over three months of field testing close to the Dutch city of Workum. They also compared their results to 15 years of data collected by the Royal Netherlands Air Force assessing the effectiveness of conventional deterrence methods such as pyrotechnics and distress calls.

Video: “Flock-herding Falcon Drone Patrols Airport Flight Paths” (youtu.be)

In a paper published in the Journal of the Royal Society Interface, the team showed that the RobotFalcon cleared fields of birds faster and more effectively than the drone. It also kept birds away from fields longer than distress calls, the most effective of the conventional approaches.

There was no evidence of birds getting habituated to the RobotFalcon over three months of testing, says Hemelrijk, and the researchers also found that birds exhibited behavior patterns associated with escaping from predators much more frequently with the robot than with the drone. “The way of reacting to the RobotFalcon is very similar to the real falcon,” says Hemelrijk. “The birds don't distinguish it from a real falcon, it seems.”

Other attempts to use hawk-imitating robots to disperse birds have had less promising results, though. Morgan Drabik-Hamshare, a research wildlife biologist at the U.S. Department of Agriculture, and her colleagues published a paper in Scientific Reports last year in which they pitted a robotic peregrine falcon with flapping wings against a quadcopter and a fixed-wing remote-control aircraft.

They found the robotic falcon was the least effective of the three at scaring away turkey vultures, with the quadcopter scaring the most birds off and the remote-control plane eliciting the quickest response. “Despite the predator silhouette, the vultures did not perceive the predator UAS [unmanned aircraft systems] as a threat,” Drabik-Hamshare wrote in an email.

Zihao Wang, an associate lecturer at The University of Sydney in Australia who develops UAS for bird deterrence, says the RobotFalcon does seem to be effective at dispersing flocks. But he points out that its wingspan is nearly twice the diagonal length of the quadcopter it was compared against, which means it creates a much larger silhouette when viewed from the birds’ perspective. This means the birds could be reacting more to its size than its shape, and he would like to see the RobotFalcon compared to a similarly sized drone in the future.

The unique design also means it requires an experienced and specially trained operator, Wang adds, which could make it difficult to roll out widely. A potential solution could be to make the system autonomous, he says, but it’s unclear how easy this would be.

Hemelrijk says automating the RobotFalcon is probably not feasible, both due to strict regulations around the use of autonomous drones near airports as well as the sheer technical complexity. Their current operator is a falconer with significant experience in how hawks target their prey, she says, and creating an autonomous system that could recognize and target bird flocks in a similar way would be highly challenging.

But while the need for skilled operators is a limitation, Hemelrijk points out that most airports already have full-time staff dedicated to bird deterrence who could be trained up. And given the apparent lack of habituation and the ability to chase birds in a specific direction—so they head away from runways—she thinks it could be a useful addition to their arsenal.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Sikorsky, a Lockheed Martin company, and the Defense Advanced Research Projects Agency (DARPA) have successfully demonstrated to the U.S. Army for the first time how an uninhabited Black Hawk helicopter flying autonomously can safely and reliably perform internal and external cargo-resupply missions and a rescue operation.

[ Lockheed Martin ]

Taking inspiration from nature, SEAS researchers designed a new type of soft, robotic gripper that uses a collection of thin tentacles to entangle and ensnare objects, similar to how jellyfish collect stunned prey. Alone, individual tentacles, or filaments, are weak. But together, the collection of filaments can grasp and securely hold heavy and oddly shaped objects. The gripper relies on simple inflation to wrap around objects and doesn’t require sensing, planning, or feedback control.

[ Harvard ]

Agility Robotics’ Digit does not have bird legs. Birds have robot legs.

[ Agility Robotics ]

At TRI, we are developing robotic capabilities with the goal of improving the quality of everyday life for all. To reach this goal, we define “challenge tasks” that are exciting to work on, that drive our development towards general purpose robot capabilities, and that allow for rigorous quantitative testing.
Autonomous order fulfillment in grocery stores is a particularly good way to drive our development of mobile manipulation capabilities because it encompasses a host of difficult challenges for robots, including perceiving and manipulating a large variety of objects, navigating an ever-changing environment, and reacting to unexpected circumstances.

[ TRI ]

Thanks, Lukas!

On Halloween, don’t come empty-handed to MAB Robotics’ basement. It’s spooky season, so you’d better have a treat for the Honey Badger legged robot.

[ MAB Robotics ]

Thanks, Jakub!

The most important skill in humanoid robotics is knowing how to shove your robot in just the right way.

[ IHMC ]

If this is a humanlike workspace, I need to take Pilates or something.

[ Apptronik ]

A Spooky Lab Tour of KIMLAB!

[ KIMLAB ]

I know I say this every time, but I still cannot believe that this is a commercial system.

[ Tevel ]

Amazon has a prototype autonomous mobile robot, or AMR, for transporting oversize packages through warehouses. Its name is Bluebell.

[ Amazon ]

Using GPT-3, Ameca answers user-submitted questions for you in the first installment of Ask Ameca!

[ Engineered Arts ]

If insects can discern up from down while flying without fancy accelerometers, could we develop drones to do the same? In a new article published in Nature, scientists from Delft University of Technology (Netherlands) and Aix-Marseille University (France) describe how insects detect gravity. And how we could perhaps copy from nature.

[ TU Delft ]

We show a new method to handle fabric using a robot. Our approach relies on a finger-tip-size electroadhesive skin to lift fabric up. A pinch-type grasp is then used to securely hold the separated sheet of fabric, enabling easy manipulation thereafter.

[ Paper ]

We present FLEX-SDK: an open-source software development kit that allows creating a social robot from two simple tablet screens. FLEX-SDK involves tools for designing the robot face and its facial expressions, creating screens for input/output interactions, controlling the robot through a Wizard-of-Oz interface, and scripting autonomous interactions through a simple text-based programming interface.

[ Paper ]

D’Manus is a 10 DoF, low-cost, reliable prehensile hand. It is fully 3D printable, and features integrated large-area ReSkin sensing.

[ D'Manus ]

10,000 cheese sticks per hour.

[ Kuka ]

We present UltraBat, an interactive 3D side-scrolling game inspired by Flappy Bird, in which the game character, a bat, is physically levitated in mid-air using ultrasound. Players aim to navigate the bat through a stalagmite tunnel that scrolls to one side as the bat travels, which is implemented using a pin-array display to create a shape-changing passage.

[ UltraBat ]

The next generation of robots will rely on machine learning in one way or another. However, when machine learning algorithms (or their results) are deployed on robots in the real world, studying their safety is important. In this talk, I will summarize the findings of our recent review paper “Safe Learning in Robotics: From Learning-Based Control to Safe Reinforcement Learning”.

[ UofT ]

On October 20, 2022, Kimberly Hambuchen of NASA talked to Robotics students as a speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question: “What can I do with a robotics degree?”

[ Michigan Robotics ]



Surgical skills can be improved by continuous training and feedback, thus reducing adverse outcomes during an intervention. With the advent of new technologies, researchers now have the tools to analyze surgical instrument motion in order to differentiate surgeons’ levels of technical skill. Conventional surgical skills assessment is time-consuming and prone to subjective interpretation. A surgical instrument detection and tracking algorithm analyzes the images captured by the surgical robotic endoscope and extracts the movement and orientation of a surgical instrument to provide surgical navigation. This information can be used to label raw surgical video datasets that form an action space for surgical skill analysis. Instrument detection and tracking is a challenging problem in minimally invasive surgery (MIS), including robot-assisted surgery, but vision-based approaches provide promising solutions with minimal hardware-integration requirements. This study offers an overview of the development of assessment systems for surgical intervention analysis. Its purpose is to identify the research gap and advance the technology needed to automate the incorporation of new surgical skills; a prime factor in automating that learning is creating datasets from raw surgical videos with minimal manual intervention. The review encapsulates current trends in artificial intelligence (AI)-based visual detection and tracking of surgical instruments and their application to surgical skill assessment.
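
To make the idea of extracting movement information for skill analysis concrete, here is a minimal sketch in Python, not drawn from any specific system in the review, that computes a few commonly used kinematic skill proxies (path length, mean speed, and integrated squared jerk) from a sequence of tracked instrument-tip positions; the detector and tracker themselves are assumed to already exist.

```python
import numpy as np

def kinematic_skill_metrics(tip_positions, fps=30.0):
    """Compute simple kinematic skill metrics from tracked instrument-tip
    positions, given as an (N, 2) or (N, 3) array of coordinates.

    Path length, mean speed, and jerk-based smoothness are common proxies
    for technical skill; the units and any thresholds are illustrative only.
    """
    pos = np.asarray(tip_positions, dtype=float)
    dt = 1.0 / fps

    velocity = np.gradient(pos, dt, axis=0)           # first derivative
    acceleration = np.gradient(velocity, dt, axis=0)  # second derivative
    jerk = np.gradient(acceleration, dt, axis=0)      # third derivative

    path_length = np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))
    mean_speed = np.mean(np.linalg.norm(velocity, axis=1))
    # Lower integrated squared jerk is usually read as smoother motion.
    smoothness = np.sum(np.linalg.norm(jerk, axis=1) ** 2) * dt

    return {"path_length": path_length,
            "mean_speed": mean_speed,
            "integrated_squared_jerk": smoothness}

# Example: a short synthetic trajectory standing in for tracker output.
t = np.linspace(0, 2 * np.pi, 120)
trajectory = np.stack([np.cos(t), np.sin(t)], axis=1)
print(kinematic_skill_metrics(trajectory, fps=60.0))
```

In practice such numbers would be computed per surgical gesture and fed, together with labels, into whatever skill classifier the assessment system uses.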



This article is part of our Autonomous Weapons Challenges series.

The real world is anything but binary. It is fuzzy and indistinct, with lots of options and potential outcomes, full of complexity and nuance. Our societies create laws and cultural norms to provide and maintain some semblance of order, but such structures are often open to interpretation, and they shift and evolve over time.

This fuzziness can be challenging for any autonomous system navigating the uncertainty of a human world—such as Alexa reacting to the wrong conversations, or self-driving cars being stymied by white trucks and orange traffic cones. But not having clarity on "right or wrong" is especially problematic when considering autonomous weapons systems (AWS).

International Humanitarian Law (IHL) is the body of law that governs international military conflicts and provides rules about how weapons should be used. The fundamentals of IHL were developed before the widespread use of personal computers, satellites, the Internet, and social media, and before private data became a commodity that could be accessed remotely, often without a person’s knowledge or consent. Many groups are concerned that the existing laws don’t cover the myriad issues that recent and emerging technologies have created, and the International Committee of the Red Cross, the watchdog of IHL, has recommended new, legally binding rules to cover AWS.

Ethical principles have been developed to help address gaps between changing cultural norms and technologies and established laws, but such principles also tend to be vague and difficult to translate into legal code. For example, even if everyone agrees on an ethical principle like minimizing bias in an autonomous system, how would that be programmed? Who would determine whether an algorithmic bias has been sufficiently “minimized” for the system to be deployed?
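
To see why “sufficiently minimized” resists a clean technical answer, consider one of the simplest fairness checks, the demographic parity gap: the spread in a classifier’s positive-prediction rates across groups. The sketch below is purely illustrative; the data are invented, and both the metric and the 0.1 threshold are exactly the kinds of choices someone would have to defend.

```python
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates
    across groups. 0.0 means perfect parity on this one metric."""
    rates = {g: float(np.mean(predictions[groups == g]))
             for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Illustrative data: binary model outputs for two hypothetical groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
grp   = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap, per_group = demographic_parity_gap(preds, grp)
print(per_group)                 # per-group positive-prediction rates
print("acceptable?", gap < 0.1)  # who decides that 0.1 counts as "minimized"?
```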

All countries involved in the AWS debate at the United Nations have stated that AWS must follow international law. However, they don’t agree on what these laws and ethics mean in practice, and there is further disagreement over whether some AWS capabilities must be preemptively banned to ensure that IHL is honored.

IHL, Emerging Technology, and AWS

Much of the disagreement at the United Nations stems from uncertainty about the technology and how it will evolve. Though existing weapons systems have some autonomous capabilities, and though there have been reports of AWS being used in Libya and questions about AWS being used in Ukraine, the extent to which AI and autonomy will change warfare remains unknown. Even where IHL mandates already exist, it’s unclear whether AWS will be able to follow them: For example, can a machine be trained to reliably recognize when a combatant is injured or surrendering? Is it possible for a machine to learn the difference between a civilian and a combatant dressed as a civilian?

Cyberthreats pose new risks to national security, and the ability of companies and governments to collect personal data is already a controversial legal and ethical issue. These risks are only exacerbated when paired with AWS, which could be biased, hacked, trained on bad data, or otherwise compromised as a result of weak regulations surrounding emerging technologies.

Moreover, for AI systems to work, they typically need to be trained on huge data sets. But military conflict and battlefields can be chaotic and unpredictable, and large, reliable data sets may not exist. AWS may also be subject to greater adversarial manipulation, which essentially involves tricking the system into misunderstanding the situation, something that can be as easy as placing a sticker on or near an object. Is it possible for AWS algorithms to receive sufficient training and supervision to ensure they won’t violate international laws, and who makes that decision?
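
For a sense of how little it can take to mislead a learned model, the sketch below applies the widely known fast gradient sign method (FGSM) to a toy PyTorch classifier. The model, input, and label are placeholders invented for illustration; nothing here is drawn from a real weapons system.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier; any differentiable model would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(model, x, label, epsilon=0.05):
    """Fast gradient sign method: nudge every input value by +/- epsilon in
    the direction that increases the loss, which often flips the prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

x = torch.rand(1, 1, 28, 28)   # placeholder "image"
label = torch.tensor([3])      # placeholder true class

x_adv = fgsm_perturb(model, x, label)
print("clean prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:", (x_adv - x).abs().max().item())  # bounded by epsilon
```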

AWS are complex, with various people and organizations involved at different stages of development, and communication between designers and users of the systems may not exist. Additionally, the algorithms and AI software used in AWS may not have originally been intended for military use, or they may have been intended for the military but not for weapons specifically. To ensure the safety and reliability of AWS, new standards for testing, evaluation, verification, and validation are needed. And if an autonomous weapons system acts inappropriately or unexpectedly and causes unintended harm, will it be clear who is at fault?

Nonmilitary Use of AWS

While certain international laws cover human rights issues during a war, separate laws cover human rights issues in all other circumstances. Simply prohibiting a weapons system from being used during wartime does not guarantee that the system can’t be used outside of military combat. For example, tear gas has been classified as a chemical weapon and banned in warfare since 1925, but it remains legal for law enforcement to use for riot control.

If new international laws are developed to regulate the wartime use of AI and autonomy in weapons systems, human rights violations committed outside of the scope of a military action could—and likely would—still occur. The latter could include actions by private security companies, police, border control agencies, and nonstate armed groups.

Ultimately, in order to ensure that laws, policy, and ethics are well adapted to the new technologies of AWS—and that AWS are designed to better abide by international laws and norms—policymakers need to have a stronger understanding of the technical capabilities and limitations of the weapons, and how the weapons might be used.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020, to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.



This article is part of our Autonomous Weapons Challenges series.

Two Boeing 737 Max planes crashed in 2018 and 2019 due to sensor failures that led to autopilot malfunctions that two human pilots were unable to overcome. Also in 2018, an Uber autonomous vehicle struck and killed a pedestrian in Arizona, even though a person in the car was supposed to be overseeing the system. These examples highlight many of the issues that arise when considering what “human control” over an autonomous system really means.

The development of these autonomous technologies occurred within enormously complex bureaucratic frameworks. A huge number of people were involved—in engineering a number of autonomous capabilities to function within a single system, in determining how the systems would respond to an unknown or emergency situation, and in training people to oversee the systems. A failure in any of these steps could, and did, lead to a catastrophic failure in which the people overseeing the system weren’t able to prevent it from causing unintended harm.

These examples underscore the basic human psychology that developers need to understand in order to design and test autonomous systems. Humans are prone to over-trusting machines, and they become increasingly complacent the longer they use a system without anything going wrong. Humans are also notoriously bad at maintaining the level of focus necessary to catch an error in such situations, typically losing focus after about 20 minutes. And the human response to an emergency situation can be unpredictable.

Ultimately, “human control” is hard to define and has become a controversial issue in discussions about autonomous weapons systems, with many similar phrases used in international debates, including “meaningful human control,” “human responsibility,” and “appropriate human judgment.” But regardless of the phrase that’s used, the problem remains the same: Simply assigning a human the task of overseeing an AWS may not prevent the system from doing something it shouldn’t, and it’s not clear who would be at fault.

Responsibility and Accountability

Autonomous weapons systems can process data at speeds that far exceed a human’s cognitive capabilities, which means a human involved will need to know when to trust the data and when to question it.

In the examples above, people were directly overseeing a single commercial system. In the very near future, a single soldier might be expected to monitor an entire swarm of hundreds of weaponized drones, a scenario militaries are already testing. Each drone may be detecting and processing data in real time. If a human can’t keep up with a single autonomous system, they certainly wouldn’t be able to keep up with the data coming in from a swarm. Additional autonomous systems may thus be added to filter and package the data, introducing even more potential points of failure. Among other issues, this raises legal concerns, given that responsibility and accountability could quickly become unclear if the system behaves unexpectedly only after it’s been deployed.
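
A rough back-of-the-envelope calculation, with every number invented but plausible, shows how quickly swarm supervision outruns one person’s attention:

```python
# Illustrative arithmetic only; every number here is an assumption.
drones = 200                     # drones in the swarm
alerts_per_drone_per_hour = 6    # detections each drone surfaces for review
seconds_per_decision = 10        # time a careful human needs per alert

alerts_per_hour = drones * alerts_per_drone_per_hour
seconds_needed_per_hour = alerts_per_hour * seconds_per_decision

print(f"{alerts_per_hour} alerts/hour, one every {3600 / alerts_per_hour:.0f} s")
print(f"{seconds_needed_per_hour / 3600:.1f} operator-hours of review per hour of operation")
# With these assumptions: 1,200 alerts per hour, one every 3 seconds, and about
# 3.3 operator-hours of review per clock hour: far more than one person can supervise.
```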

Human-Machine Teams

Artificial intelligence often relies on machine learning, which can turn AI-based systems into black boxes, with the AI taking unexpected actions and leaving its designers and users uncertain as to why it did what it did. It remains unclear how humans working with AWS will respond to their machine partners or what type of training will be necessary to ensure the human understands the capabilities and limitations of the system. Human-machine teaming also presents challenges both in terms of training people to use the system and of developing a better understanding of the trust dynamic between humans and AWS. While the human-robot handoff may be a technical challenge in many fields, it quickly becomes a question of international humanitarian law if the handoff doesn’t go smoothly for a weapons system.

Ensuring responsibility and accountability for AWS is a general point of agreement among those involved in the international debate. But without sufficient understanding of human psychology or how human-machine teams should work, is it reasonable to expect the human to be responsible and accountable for any unintended consequences of the system’s deployment?

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020, to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.



This article is part of our Autonomous Weapons Challenges series.

International discussions about autonomous weapons systems (AWS) often focus on a fundamental question: Is it legal for a machine to make the decision to take a human life? But woven into this question is another fundamental issue: Can an autonomous weapons system be trusted to do what it’s expected to do?

If the technical challenges of developing and using AWS can’t be addressed, then the answer to both questions is likely “no.”

AI Challenges Are Magnified When Applied to Weapons

Many of the known issues with AI and machine learning become even more problematic when associated with weapons. For example, AI systems could help process data from images far faster than human analysts can, and the majority of the results would be accurate. But the algorithms used for this functionality are known to introduce or exacerbate issues of bias and discrimination, targeting certain demographics more than others. Given that, is it reasonable to use image-recognition software to help humans identify potential targets?
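
One common way such disparities are surfaced is to break a recognizer’s error rates out by group instead of reporting a single accuracy figure. The snippet below is a generic illustration with invented labels and groups, not an evaluation of any fielded system.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Per-group false-positive and false-negative rates for a binary detector."""
    report = {}
    for g in np.unique(groups):
        m = groups == g
        pos, neg = (y_true[m] == 1), (y_true[m] == 0)
        report[g] = {
            "false_positive_rate": float(np.mean(y_pred[m][neg] == 1)) if neg.any() else float("nan"),
            "false_negative_rate": float(np.mean(y_pred[m][pos] == 0)) if pos.any() else float("nan"),
        }
    return report

# Invented evaluation data for two hypothetical groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])
groups = np.array(["x", "x", "x", "x", "y", "y", "y", "y"])

print(error_rates_by_group(y_true, y_pred, groups))
# A single headline accuracy number can hide very different error rates per group.
```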

But concerns about the technical abilities of AWS extend beyond object recognition and algorithmic bias. Autonomy in weapons systems requires a slew of technologies, including sensors, communications, and onboard computing power, each of which poses its own challenges for developers. These components are often designed and programmed by different organizations, and it can be hard to predict how the components will function together within the system, as well as how they’ll react to a variety of real-world situations and adversaries.

Testing for Assurance and Risk

It’s also not at all clear how militaries can test these systems to ensure the AWS will do what’s expected and comply with International Humanitarian Law. And yet militaries typically want weapons to be tested and proven to act consistently, legally, and without harming their own soldiers before the systems are deployed. If commanders don’t trust a weapons system, they likely won’t use it. But standardized testing is especially complicated for an AI program that can learn from its interactions in the field—in fact, such standardized testing for AWS simply doesn’t exist.

We know that software updates can alter how a system behaves and may introduce bugs that cause it to behave erratically. But an autonomous weapons system powered by AI may also update its behavior based on real-world experience, and changes to the AWS’s behavior could be much harder for users to track. New information that the system accesses in the field could even trigger it to start to shift away from its original goals.
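
One partial answer that other safety-critical fields rely on is regression testing against a frozen evaluation set: after any change, whether a software release or an update learned from field data, re-run the same scenarios and flag outputs that have drifted from the recorded baseline. Below is a minimal sketch of the idea, with all names, cases, and thresholds invented for illustration.

```python
import json

def behavior_drift(model, frozen_cases, baseline_outputs, tolerance=0.0):
    """Re-run a frozen set of test scenarios and report any case whose output
    no longer matches the recorded baseline. 'model' is any callable; the
    cases, baseline, and tolerance are whatever the test authority fixed
    before deployment."""
    drifted = []
    for case, expected in zip(frozen_cases, baseline_outputs):
        actual = model(case)
        if abs(actual - expected) > tolerance:
            drifted.append({"case": case, "expected": expected, "actual": actual})
    return drifted

# Toy example: a "model" whose behavior changed after an update.
baseline_model = lambda x: round(0.5 * x, 2)
updated_model  = lambda x: round(0.5 * x + (0.3 if x > 8 else 0.0), 2)

cases = [1, 4, 9, 12]
baseline = [baseline_model(c) for c in cases]

print(json.dumps(behavior_drift(updated_model, cases, baseline), indent=2))
# Drift shows up only for inputs above 8: precisely the kind of silent,
# input-dependent change that is hard for an operator to notice in the field.
```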

Similarly, cyberattacks and adversarial attacks pose a known threat, which developers try to guard against. But if an attack is successful, what would testing look like to identify that the system has been hacked, and how would a user know to implement such tests?

Physical Challenges of Autonomous Weapons

Though recent advancements in artificial intelligence have led to greater concern about the use of AWS, the technical challenges of autonomy in weapons systems extend beyond AI. Physical challenges already exist for conventional weapons and for nonweaponized autonomous systems, but these same problems are further exacerbated and complicated in AWS.

For example, many autonomous systems are getting smaller even as their computational needs grow to include navigation, data acquisition and analysis, and decision making, potentially all while out of communication with commanders. Can an autonomous weapons system maintain the necessary and legal functionality throughout a mission, even if communication is lost? How is data protected if the system falls into enemy hands?
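
For the communication-loss question in particular, nonweaponized autonomous systems often use an explicit failsafe state machine: if the link stays silent longer than a fixed timeout, the system abandons its task and falls back to a predefined safe behavior. The sketch below illustrates that general pattern; the modes and timeouts are invented, and nothing here describes an actual weapons system.

```python
from enum import Enum, auto

class Mode(Enum):
    EXECUTE_MISSION = auto()   # normal operation, operator link healthy
    HOLD = auto()              # link lost briefly: loiter, take no new actions
    RETURN_AND_SAFE = auto()   # link lost too long: disable payload, return home

COMMS_TIMEOUT_S = 10.0         # illustrative thresholds, not real requirements
ABORT_TIMEOUT_S = 60.0

def next_mode(seconds_since_last_contact: float) -> Mode:
    """Pure function mapping link silence to a fallback mode, so the policy
    can be unit-tested exhaustively before any field use."""
    if seconds_since_last_contact < COMMS_TIMEOUT_S:
        return Mode.EXECUTE_MISSION
    if seconds_since_last_contact < ABORT_TIMEOUT_S:
        return Mode.HOLD
    return Mode.RETURN_AND_SAFE

for silence in (2.0, 30.0, 300.0):
    print(f"{silence:>6.1f} s without contact -> {next_mode(silence).name}")
```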

Issues similar to these may also arise with other autonomous systems, but the consequences of failure are magnified with AWS, and extra features will likely be necessary to ensure that, for example, a weaponized autonomous vehicle in the battlefield doesn’t violate International Humanitarian Law or mistake a friendly vehicle for an enemy target. Because these problems are so new, weapons developers and lawmakers will need to work with and learn from experts in the robotics space to be able to solve the technical challenges and create useful policy.

There are many technical advances that will contribute to various types of weapons systems. Some will prove far more difficult to develop than expected, while others will likely be developed faster. That means AWS development won’t be a leap from conventional weapons systems to full autonomy, but will instead make incremental steps as new autonomous capabilities are developed. This could lead to a slippery slope where it’s unclear if a line has been crossed from acceptable use of technology to unacceptable. Perhaps the solution is to look at specific robotic and autonomous technologies as they’re developed and ask ourselves whether society would want a weapons system with this capability, or if action should be taken to prevent that from happening.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020, to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.


This article is part of our Autonomous Weapons Challenges series.

International discussions about autonomous weapons systems (AWS) often focus on a fundamental question: Is it legal for a machine to make the decision to take a human life? But woven into this question is another fundamental issue: Can an automated weapons system be trusted to do what it’s expected to do?

If the technical challenges of developing and using AWS can’t be addressed, then the answer to both questions is likely “no.”

AI Challenges Are Magnified When Applied to Weapons

Many of the known issues with AI and machine learning become even more problematic when associated with weapons. For example, AI systems could help process data from images far faster than human analysts can, and the majority of the results would be accurate. But the algorithms used for this functionality are known to introduce or exacerbate issues of bias and discrimination, targeting certain demographics more than others. Given that, is it reasonable to use image-recognition software to help humans identify potential targets?

But concerns about the technical abilities of AWS extend beyond object recognition and algorithmic bias. Autonomy in weapons systems requires a slew of technologies, including sensors, communications, and onboard computing power, each of which poses its own challenges for developers. These components are often designed and programmed by different organizations, and it can be hard to predict how the components will function together within the system, as well as how they’ll react to a variety of real-world situations and adversaries.

Testing for Assurance and Risk

It’s also not at all clear how militaries can test these systems to ensure the AWS will do what’s expected and comply with International Humanitarian Law. And yet militaries typically want weapons to be tested and proven to act consistently, legally, and without harming their own soldiers before the systems are deployed. If commanders don’t trust a weapons system, they likely won’t use it. But standardized testing is especially complicated for an AI program that can learn from its interactions in the field—in fact, such standardized testing for AWS simply doesn’t exist.

We know that software updates can alter how a system behaves and may introduce bugs that cause it to behave erratically. But an automated weapons system powered by AI may also update its behavior based on real-world experience, and those changes could be much harder for users to track. New information that the system accesses in the field could even trigger it to start shifting away from its original goals.
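
One generic way to make that tracking problem concrete, sketched here purely as an illustration rather than any fielded procedure, is a frozen regression suite: record the system’s outputs on a fixed set of vetted scenarios at certification time, then re-run the suite after every in-field update and flag any divergence for human review. The function and scenario names below are hypothetical.

  # Illustrative behavior-drift check for a system that updates itself in use.
  # `baseline_outputs` are the responses recorded when the system was certified.

  def drift_report(system, frozen_scenarios, baseline_outputs):
      """Return scenarios where the updated system no longer matches its
      certified behavior."""
      changed = []
      for scenario, expected in zip(frozen_scenarios, baseline_outputs):
          current = system(scenario)
          if current != expected:
              changed.append((scenario, expected, current))
      return changed

  # Hypothetical usage after each field update:
  # report = drift_report(updated_system, frozen_scenarios, baseline_outputs)
  # if report:
  #     pass  # escalate to human review before the system is used again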

Similarly, cyberattacks and adversarial attacks pose a known threat, which developers try to guard against. But if an attack is successful, what would testing look like to identify that the system has been hacked, and how would a user know to implement such tests?

Physical Challenges of Autonomous Weapons

Though recent advancements in artificial intelligence have led to greater concern about the use of AWS, the technical challenges of autonomy in weapons systems extend beyond AI. Physical challenges already exist for conventional weapons and for nonweaponized autonomous systems, but these same problems are further exacerbated and complicated in AWS.

For example, many autonomous systems are getting smaller even as their computational demands grow to cover navigation, data acquisition and analysis, and decision making—potentially all while out of communication with commanders. Can the automated weapons system maintain the necessary and legal functionality throughout the mission, even if communication is lost? How is data protected if the system falls into enemy hands?

Issues similar to these may also arise with other autonomous systems, but the consequences of failure are magnified with AWS, and extra features will likely be necessary to ensure that, for example, a weaponized autonomous vehicle on the battlefield doesn’t violate International Humanitarian Law or mistake a friendly vehicle for an enemy target. Because these problems are so new, weapons developers and lawmakers will need to work with and learn from experts in the robotics space to be able to solve the technical challenges and create useful policy.

There are many technical advances that will contribute to various types of weapons systems. Some will prove far more difficult to develop than expected, while others will likely be developed faster. That means AWS development won’t be a leap from conventional weapons systems to full autonomy, but will instead make incremental steps as new autonomous capabilities are developed. This could lead to a slippery slope where it’s unclear if a line has been crossed from acceptable use of technology to unacceptable. Perhaps the solution is to look at specific robotic and autonomous technologies as they’re developed and ask ourselves whether society would want a weapons system with this capability, or if action should be taken to prevent that from happening.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020, to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.


This article is part of our Autonomous Weapons Challenges series.

Lethal autonomous weapons systems can sound terrifying, but autonomy in weapons systems is far more nuanced and complicated than a simple debate between “good or bad” and “ethical or unethical.” In order to address the legal and ethical issues that an autonomous weapons system (AWS) can raise, it’s important to look at the many technical challenges that arise along the full spectrum of autonomy. A group of experts convened by the IEEE Standards Association is working on this, but they need your help.

Weapons systems can be built with a range of autonomous capabilities. They might be self-driving tanks, surveillance drones with AI-enabled image recognition, unmanned underwater vehicles that operate in swarms, loitering munitions with advanced target recognition—the list goes on. Some autonomous capabilities are less controversial, while others trigger intense debate over the legality and ethics of the capability. Some capabilities have existed for decades, while others are still hypothetical and may never be developed.

All of this can make autonomous weapons systems difficult to talk about, and doing so has proven to be incredibly challenging over the years. Answering even the most seemingly straightforward questions, such as whether an AWS is lethal or not, can get surprisingly complicated.

To date, international discussions have largely focused on the legal, ethical, and moral issues that arise with the prospect of lethal AWS, with limited consideration of the technical challenges. At the United Nations, these discussions have taken place within the Convention on Certain Conventional Weapons (CCW). After nearly a decade, though, the U.N. has yet to come up with a new treaty or regulations to cover AWS. In early discussions at the CCW and other international forums, participants often talked past each other: One person might consider a “fully autonomous weapons system” to include capabilities that are only slightly more advanced than today’s drones, while another might use the term as a synonym for the Terminator.

Discussions advanced to the point that in 2019, member states at the CCW agreed on a set of 11 guiding principles regarding lethal AWS. But these principles are nonbinding, and it’s unclear how the technical community can implement them. At the most recent meeting of the CCW in July, delegates repeatedly pushed for more nuanced discussions and understanding of the various technical issues that arise throughout the life cycle of an AWS.

To help bring clarity to these and other discussions, the IEEE Standards Association convened an expert group in 2020, to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance.

Last year, the expert group, which I lead, published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” In the document, we identified over 60 challenges of autonomous weapons systems, organized into 10 categories:

  1. Establishing common language
  2. Enabling effective human control
  3. Determining legal obligations
  4. Ensuring robustness
  5. Testing and evaluating
  6. Assessing risk
  7. Addressing operational constraints
  8. Collecting and curating data
  9. Aligning procurement practices
  10. Addressing nonmilitary use

It’s not surprising that “establishing common language” is the first category. As mentioned, when the debates around AWS first began, the focus was on lethal autonomous weapons systems, and that’s often still where people focus. Yet determining whether or not an AWS is lethal turns out to be harder than one might expect.

Consider a drone that does autonomous surveillance and carries a remote-controlled weapon. It uses artificial intelligence to navigate to and identify targets, while a human makes the final decision about whether or not to launch an attack. Just the fact that the weapon and autonomous capabilities are within the same system suggests this could be considered a lethal AWS.

Additionally, a human may not be capable of monitoring all of the data the drone is collecting in real time in order to identify and verify the target, or the human may over-trust the system (a common problem when humans work with machines). Even if the human makes the decision to launch an attack against the target that the AWS has identified, it’s not clear how much “meaningful control” the human truly has. (“Meaningful human control” is another phrase that has been hotly debated.)

This problem of definitions isn’t just an issue that comes up when policymakers at the U.N. discuss AWS. AI developers also have different definitions for commonly used concepts, including “bias,” “transparency,” “trust,” “autonomy,” and “artificial intelligence.” In many instances, the ultimate question may not be, Can we establish technical definitions for these terms? but rather, How do we address the fact that there may never be consistent definitions and agreement on these terms? Because, of course, one of the most important questions for all of the AWS challenges is not whether we technically can address this, but even if there is a technical solution, should we build and deploy the system?

Identifying the challenges was just the first stage of the work for the IEEE-SA expert group. We also concluded that there are three critical perspectives from which a new group of experts will be considering these challenges in more depth:

  • Assurance and safety, which looks at the technical challenges of ensuring the system behaves the way it’s expected to.
  • Human–machine teaming, which considers how the human and the machine will interact to enable reasonable and realistic human control, responsibility, and accountability.
  • Law, policy, and ethics, which considers the legal, political, and ethical implications of the issues raised throughout the Challenges document.

What Do You Think?

This is where we want your feedback! Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.

The independent group of experts who authored the report for the IEEE Standards Association includes Emmanuel Bloch, Ariel Conn, Denise Garcia, Amandeep Gill, Ashley Llorens, Mart Noorma, and Heather Roff.



The word “quadruped” means, technically, “four feet.” Roboticists tend to apply the term to anything that uses four limbs to walk, differentiating it from bipedal robots, which walk on two limbs instead. But there’s a huge, blurry crossover there, in both robotics and biology, where you find animals (and occasionally robots) that can transition from quadruped to biped when they need to (for example) manipulate something.

If you look at quadrupedal robots simply as robots with four limbs rather than robots with four feet, they start to seem much more versatile, but that transition can be a tricky one. At the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) in Kyoto, Japan this week, researchers from Worcester Polytechnic Institute in Massachusetts and ShanghaiTech University presented a generalizable method whereby an off-the-shelf quadruped robot can turn into a biped with some clever software and a tiny bit of mechanical modification.

We’ve seen robots that can transition from quadruped to biped before, but they’re almost always designed very deliberately to be able to do this, and they pay a penalty in weight, complexity, and cost. What’s unique about this research is that it’s intended to be applied to any quadruped at all—with some very minor hardware modifications, your quadruped can become a biped, too.

The mechanical side of this bipedalization is a 3D-printed stick that gets installed onto the shin of each of the quadruped’s hind legs. This provides additional support so that the robot can stand and walk robustly—without the shin attachments, the robot wouldn’t be statically stable. This is especially useful as the robot stands up, since its center of mass is fully supported during that process. The video shows this working on what looks like a Mini Cheetah robot, but again, the platform really doesn’t matter as long as it meets some basic requirements.
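
To see why those two extra contact points matter, here is a minimal sketch, with made-up coordinates, of the standard static-stability test used in legged robotics: the ground projection of the center of mass must fall inside the support polygon formed by whatever is touching the ground.

  # Static-stability sketch: is the center-of-mass ground projection inside the
  # support polygon? Contact coordinates below are made up for illustration.

  def inside_convex_polygon(point, vertices):
      """True if 2D `point` lies inside a convex polygon whose `vertices` are
      listed counterclockwise (cross-product sign test on every edge)."""
      px, py = point
      n = len(vertices)
      for i in range(n):
          x1, y1 = vertices[i]
          x2, y2 = vertices[(i + 1) % n]
          if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
              return False
      return True

  # Two feet plus the two shin-stick contact points, in meters, listed CCW.
  support_polygon = [(0.05, 0.0), (0.05, 0.20), (-0.10, 0.20), (-0.10, 0.0)]
  com_projection = (0.0, 0.10)    # ground projection of the center of mass
  print(inside_convex_polygon(com_projection, support_polygon))   # True

Without the shin sticks, the rear contact points collapse toward the feet, the polygon shrinks to a narrow line, and the same test fails for almost any realistic center-of-mass position.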

“We [seek] to reap the benefits from two worlds: stability and speed from quadrupeds, manipulability and a gain in operational height from bipeds.”
—Andre Rosendo, Worcester Polytechnic Institute

Once the robot is upright, walking comes from a policy that’s trained first in simulation and then transferred onto the real robot. This isn’t trivial, because the controller is trying to get the robot to both walk and not fall over, which is a bit of a contradiction, but the best performing policy was able to get the robot to walk for several meters. It’s important to remember that this is a robot that was not designed to walk bipedally at all, so in some sense you’ve got software struggling to get hardware to work in a way that it isn’t supposed to and certainly isn’t optimized for. Perhaps if this kind of thing catches on, quadruped designers might be incentivized to build a little extra flexibility into their platforms to make them more adaptable.
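
The paper’s title points to residual reinforcement learning, and the general pattern looks roughly like the sketch below (an illustration of the idea, not the authors’ implementation): a simple hand-tuned controller keeps the robot near an upright posture, and the learned policy only adds a bounded correction on top, which is part of what makes the walk-without-falling objective tractable to train in simulation.

  import numpy as np

  # Sketch of residual control: a learned policy adds a bounded correction to a
  # hand-designed base controller. Gains and scales are illustrative only.

  def base_controller(joint_pos, joint_vel, target_pos, kp=20.0, kd=0.5):
      """Simple PD command pulling the hind-leg joints toward an upright posture."""
      return kp * (target_pos - joint_pos) - kd * joint_vel

  def residual_action(policy, observation, joint_pos, joint_vel, target_pos,
                      residual_scale=0.1):
      """Base command plus a small learned residual; only `policy` is trained."""
      base = base_controller(joint_pos, joint_vel, target_pos)
      return base + residual_scale * policy(observation)

  # Toy usage with two hind-leg joints and a stand-in for a trained network:
  policy = lambda obs: np.tanh(obs[:2])
  command = residual_action(policy, np.zeros(4),
                            joint_pos=np.array([0.1, -0.2]),
                            joint_vel=np.zeros(2),
                            target_pos=np.zeros(2))
  print(command)   # PD command plus a (here zero) learned correction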

For more on this research, we spoke with Andre Rosendo, who is now a professor at WPI.

IEEE Spectrum: Fundamentally, is there a difference between a four-legged robot and a four-limbed robot?

Andre Rosendo: As seen in nature, quadruped locomotion enables higher speeds, and the robot is noticeably faster when moving with four legs. That said, the benefits related to manipulability seen in this animal transition from four to two legs (for example, Australopithecus using their hands to bring food to their mouths) are also true to robots. We are currently developing a “variant end-effector” for the forelimbs to allow this quadruped robot to become a “two arm manipulator” when standing, handling objects and operating environments.

Why did you decide on this particular system to enable the bipedal transition?

We noticed that it is quite easy to adapt the hindlegs of a quadruped robot with a fixed structure, with very little drop in performance. Although not as aesthetically pleasing as an active structure, advances in materials nowadays allow us to use a small carbon fiber link protruding from the leg to mimic the same passive stability that our feet give us (known in legged locomotion as the polygon of stability). An active retractable system, on the other hand, would add a tiny motor to the leg, increasing the moment of inertia of that leg during locomotion, affecting performance negatively.
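
To put rough numbers on that tradeoff (the masses and lever arm below are assumed for illustration, not taken from the paper), the point-mass approximation I = m * r^2 shows how much more a small motor at the shin would add to the leg’s rotational inertia than a lightweight passive link.

  # Point-mass comparison of what each option adds to the leg's inertia about
  # the hip: I = m * r^2. Masses and the lever arm are assumed for illustration.

  def added_inertia(mass_kg, distance_m):
      return mass_kg * distance_m ** 2   # kg*m^2

  passive_link = added_inertia(0.02, 0.25)   # ~20 g carbon fiber stick at the shin
  small_motor  = added_inertia(0.10, 0.25)   # ~100 g actuator at the same spot
  print(f"passive link adds {passive_link:.5f} kg*m^2")
  print(f"small motor adds  {small_motor:.5f} kg*m^2 "
        f"({small_motor / passive_link:.0f}x the passive link)")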

What are the limitations to the walking performance of this system?

We trained the robot in a simulated environment, and the walking gait, after being transferred to the real world, is stable, albeit slow. Bipedal robots usually have more degrees of freedom in their legs to allow a more dynamic and adaptive locomotion, but in our case, we are focusing on the multi-modal aspect to reap the benefits from two worlds: stability and speed from quadrupeds, manipulability and a gain in operational height from bipeds.

What are you working on next?

Our next steps... will be on the development of the manipulability of this robot. More specifically, we have been asking ourselves the question: “Now that we can stand up, what can we do that other robots cannot?”, and we already have some preliminary results on climbing to places that are higher than the center of gravity of the robot itself. After mechanical changes on the forelimbs, we will better evaluate complex handling that might require both hands at the same time, which is rare in current mobile robots.

Multi-Modal Legged Locomotion Framework with Automated Residual Reinforcement Learning, by Chen Yu and Andre Rosendo from ShanghaiTech University, was presented this week at IROS 2022 in Kyoto, Japan. More details are available on GitHub.



Robots operating with humans in highly dynamic environments need not only to react to moving persons and objects but also to anticipate and adhere to patterns of motion of dynamic agents in their environment. Currently, robotic systems use information about dynamics locally, through tracking and predicting motion within their direct perceptual range. This limits robots to reactive response to observed motion and to short-term predictions in their immediate vicinity. In this paper, we explore how maps of dynamics (MoDs) that provide information about motion patterns outside of the direct perceptual range of the robot can be used in motion planning to improve the behaviour of a robot in a dynamic environment. We formulate cost functions for four MoD representations to be used in any optimizing motion planning framework. Further, to evaluate the performance gain from using MoDs in motion planning, we design objective metrics, and we introduce a simulation framework for rapid benchmarking. We find that planners that utilize MoDs waste less time waiting for pedestrians, compared to planners that use geometric information alone. In particular, planners utilizing both intensity (proportion of observations at a grid cell where a dynamic entity was detected) and direction information have better task execution efficiency.
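
As a rough illustration of how such a cost term could plug into an optimizing planner (a sketch consistent with the abstract’s description of intensity and direction information, not the paper’s actual formulation):

  import math

  # Sketch of a per-cell motion cost that blends static geometry with a map of
  # dynamics (MoD). `intensity` is the proportion of past observations in the
  # cell that contained a dynamic agent; `dominant_dir` is the typical motion
  # direction observed there. Weights are illustrative.

  def mod_cell_cost(occupied, intensity, dominant_dir, robot_heading,
                    w_geom=100.0, w_dyn=5.0):
      """Cost of traversing one grid cell with a given heading (radians)."""
      if occupied:
          return w_geom                     # static obstacle: avoid outright
      # Moving with the observed flow costs least; cutting across or against
      # it is penalized in proportion to how busy the cell usually is.
      misalignment = 1.0 - math.cos(robot_heading - dominant_dir)   # in [0, 2]
      return 1.0 + w_dyn * intensity * misalignment

  # Example: a corridor cell where pedestrians usually move along +x.
  print(mod_cell_cost(False, intensity=0.6, dominant_dir=0.0, robot_heading=0.0))       # with the flow
  print(mod_cell_cost(False, intensity=0.6, dominant_dir=0.0, robot_heading=math.pi))   # against the flow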

This work focuses on safely catching an aerial micro-robot in mid-air using another aerial robot that is equipped with a universal soft gripper. To avoid aerodynamic disturbances such as downwash, which would push the target robot away, we follow a horizontal grasping approach. To this end, the article introduces a gripper design based on soft actuators that can stay horizontally straight with a single fixture and maintain sufficient compliance to bend when air pressure is applied. Further, we develop the Soft Aerial Gripper (SoAG), an open-source aerial robot equipped with the developed soft end-effector and featuring an onboard pneumatic regulation system. Experimental results show that the developed low-cost soft gripper has fast opening and closing responses despite being powered by lightweight air pumps, responses comparable to those of a commercially available end-effector we test against. Static grasping tests study the soft gripper’s robustness in capturing aerial micro-robots under aerodynamic disturbances. We experimentally demonstrate the feasibility of using the SoAG robot to catch a hovering micro-robot with or without propeller guards. The feasibility of dynamic catching is also shown by capturing an aerial micro-robot moving at 0.2 m/s. The free-flight performance of the SoAG robot is compared against that of a conventional quadrotor and studied under different gripper and payload states.
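
As a back-of-the-envelope feasibility check on dynamic catching (the grasp-envelope length and closing time below are assumed values, not figures from the paper), the target’s 0.2 m/s speed sets how long it stays within reach of the open gripper:

  # Timing sketch for catching a moving target. The grasp-envelope length and
  # gripper closing time are assumed values for illustration only.
  target_speed = 0.2        # m/s, from the reported dynamic-catching test
  grasp_envelope = 0.10     # m, assumed usable length of the open gripper
  closing_time = 0.3        # s, assumed time for the soft fingers to close

  window = grasp_envelope / target_speed   # time the target spends in reach
  print(f"time in reach: {window:.2f} s, closing time: {closing_time:.2f} s")
  print("catch feasible without tracking" if closing_time < window
        else "gripper must track the target while closing")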



Drones have the potential to be very useful in disaster scenarios by transporting food and water to people in need. But whenever you ask a drone to transport anything, anywhere, the bulk of what gets moved is the drone itself. Most delivery drones can only carry about 30 percent of their mass as payload, because most of their mass is made up of things that are both critical to flight, like wings, and essentially useless to the end user, like wings.

At the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Kyoto last week, researchers from EPFL presented a paper describing a drone that can boost its payload of food from 30 percent to 50 percent of its mass. It does so with the ingenious use of wings made from rice cakes that contain the caloric equivalent of an average, if unbalanced, breakfast. For anyone interested in digesting the paper, it is titled Towards edible drones for rescue missions: design and flight of nutritional wings, by Bokeon Kwak, Jun Shintake, Lu Zhang, and Dario Floreano from EPFL.
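
To see what that jump in payload fraction means in grams, here is a trivial worked example for a hypothetical 500-gram drone (the total mass is assumed; the percentages are from the article):

  # What the 30 percent -> 50 percent payload fraction means for an assumed
  # 500 g drone (the total mass is a made-up example).
  total_mass_g = 500
  conventional_payload_g = 0.30 * total_mass_g
  edible_wing_payload_g = 0.50 * total_mass_g
  print(f"conventional wings: {conventional_payload_g:.0f} g of food delivered")
  print(f"edible wings:       {edible_wing_payload_g:.0f} g of food delivered")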

This drone exists to (work towards) the effective and efficient delivery of food to someone who, for whatever reason, really, really needs food and is not in a position to gain access to it in any other way. The idea is that you could fly this drone directly to them and keep them going for an extra day or two. You obviously won’t get the drone back afterwards (because its wings will have been eaten off), but that’s a small price to pay for potentially keeping someone alive via the delivery of vital calories.

The researchers designed the wing of this partially edible drone out of compressed puffed rice (rice cakes or rice cookies, depending on who you ask) because of the foodstuff’s similarity to expanded polypropylene (EPP) foam. EPP foam is something that’s commonly used as wing material in drones because it’s strong and lightweight; puffed rice shares those qualities. Though it’s not quite as strong as EPP, it’s not bad. And it’s also affordable, accessible, and easy to laser cut. The puffed rice also has a respectable calorie density—at 3,870 kcal per kilogram, rice cakes aren’t as good as something like chocolate, but they’re about on par with pasta, just with a much lower density.

Out of the box, the rice cakes are round, so the first step in fabricating the wing is to laser cut them into hexagons to make them easier to stick together. The glue is just gelatin, and after it all dries, the wing is packaged in plastic and tape to make sure that it doesn’t break down in wet or humid environments. It’s a process that’s fast, simple, and cheap.

The size of the wing is actually driven not by flight requirements, but by nutrition requirements. In this case, a wingspan of about 700 millimeters results in enough rice cake and gelatin glue to deliver 300 kcal, or the equivalent of one breakfast serving, with 80 grams remaining for a payload of vitamins or water or something like that. The formula the researchers came up with to calculate the design of this avian appetite quencher assumes that the rest of the drone is not edible, because it isn’t. The structure and tail surfaces are made of carbon fiber and foam.
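
Working backward from the article’s own figures gives a sense of the wing mass involved (a simple check that ignores the gelatin glue’s contribution):

  # Rough check using the article's figures: mass of puffed rice needed to
  # deliver one 300 kcal breakfast at 3,870 kcal per kilogram.
  calorie_target_kcal = 300
  rice_density_kcal_per_kg = 3870
  rice_mass_g = calorie_target_kcal / rice_density_kcal_per_kg * 1000
  print(f"~{rice_mass_g:.0f} g of rice-cake wing")   # roughly 78 g before the gelatin glue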

While this is just a prototype, the half-edible drone does actually fly, achieving speeds of about 10 meters per second with the addition of a motor, some servos to actuate the tail surfaces for control, and a small battery. The next step is to figure out a way of making as many of those non-edible pieces out of edible materials instead, as well as finding a way of carrying a payload (like water) in an edible container.

For a bit more about this drone, we spoke with first author of the paper, Bokeon Kwak.

IEEE Spectrum: It sounds like your selection of edible wing material was primarily optimized for its mechanical properties and low weight. Are there other options that could work if the goal was to instead optimize for calories while still maintaining functionality?

Kwak: As you pointed out, achieving sufficient mechanical properties while maintaining low weight (with food materials) was the foremost design criterion in designing the edible wing. We can expand the design criteria to contain more calories by using fat-based material (e.g., edible wax); fat has more calories per gram than proteins and carbohydrates. On the other hand, containing more calories also implies the increase of structural weight, which is a price we need to pay toward higher calories. This aspect also requires further study to find a sweet spot!

What does the drone taste like?

The edible wing tastes like a crunchy rice crisp cookie with a little touch of raw gelatin (which worked as an edible glue to hold the rice cookies together in a flat plate shape). No artificial flavor has been added yet.

Would there be any significant advantages to making the wing into a more complex shape, for example with an airfoil cross section instead of a flat plate?

Making a well-streamlined airfoil (instead of a flat plate) is actually our next goal, to achieve more efficient aerodynamic properties such as lower drag and higher lift. These advantages would let an edible drone carry more payload (which is useful to carry water) and have prolonged flight time and distance. Our team is testing 3D food printing and molding to create such an edible wing, including material characterization to make sure the edible wing has sufficient mechanical properties (i.e., higher Young's modulus, low density).

What else will you be working on next?

Other structural components such as wing control surfaces (e.g., aileron, rudder) will be made of edible material by 3D food printing or molding. Other things that will be considered are an edible/water-resistant coating on the surface of the edible wing, and degradation testing of the edible wing over time (and with water exposure).

This drone is just one application of a broader European research initiative called RoboFood, which seeks to develop edible robots that maximize both performance and nutritional value. Edible sensing, actuation, and computation are all parts of this project, and the researchers (led by Dario Floreano at EPFL) can now start to focus on some of those more challenging edible components.
