Feed aggregator



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Swarms of autonomous robots are increasingly being tested and deployed in complex missions, yet a certain level of human oversight during these missions is still required. Which means a major question remains: How many robots—and how complex a mission—can a single human manage before becoming overwhelmed?

In a study funded by the U.S. Defense Advanced Research Projects Agency (DARPA), experts show that humans can single-handedly and effectively manage a heterogeneous swarm of more than 100 autonomous ground and aerial vehicles, while feeling overwhelmed only for brief periods during an overall small portion of the mission. For instance, in a particularly challenging, multi-day experiment in an urban setting, human controllers were overloaded with stress and workload only 3 percent of the time. The results were published 19 November in IEEE Transactions on Field Robotics.

Julie A. Adams, the associate director of research at Oregon State University’s Collaborative Robotics and Intelligent Systems Institute, has been studying human interactions with robots and other complex systems, such as aircraft cockpits and nuclear power plant control rooms, for 35 years. She notes that robot swarms can be used to support missions where work may be particularly dangerous and hazardous for humans, such as monitoring wildfires.

“Swarms can be used to provide persistent coverage of an area, such as monitoring for new fires or looters in the recently burned areas of Los Angeles,” Adams says. “The information can be used to direct limited assets, such as firefighting units or water tankers to new fires and hotspots, or to locations at which fires were thought to have been extinguished.”

These kinds of missions can involve a mix of many different kinds of unmanned ground vehicles (such as the Aion Robotics R1 wheeled robot) and aerial autonomous vehicles (like the Modal AI VOXL M500 quadcopter), and a human controller may need to reassign individual robots to different tasks as the mission unfolds. Notably, some theories over the past few decades—and even Adams’ early thesis work—suggest that a single human has limited capacity to deploy very large numbers of robots.

“These historical theories and the associated empirical results showed that as the number of ground robots increased, so did the human’s workload, which often resulted in reduced overall performance,” says Adams, noting that, although earlier research focused on unmanned ground vehicles (UGVs), which must deal with curbs and other physical barriers, unmanned aerial vehicles (UAVs) often encounter fewer physical barriers.

Human controllers managed their swarms of autonomous vehicles with a virtual display. The fuchsia ring represents the area the person could see within their head-mounted display. DARPA

As part of DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, Adams and her colleagues sought to explore whether these theories applied to very complex missions involving a mix of unmanned ground and air vehicles. In November 2021, at Fort Campbell in Kentucky, two human controllers took turns engaging in a series of missions over the course of three weeks with the objective of neutralizing an adversarial target. Both human controllers had significant experience controlling swarms, and participated in alternating shifts that ranged from 1.5 to 3 hours per day.

Testing How Big of a Swarm Humans Can Manage

During the tests, the human controllers were positioned in a designated area on the edge of the testing site, and used a virtual reconstruction of the environment to keep tabs on where vehicles were and what tasks they were assigned to.

The largest mission shift involved 110 drones, 30 ground vehicles, and up to 50 virtual vehicles representing additional real-world vehicles. The robots had to navigate through the physical urban environment, as well as a series of virtual hazards represented using AprilTags—simplified QR codes that could represent imaginary hazards—that were scattered throughout the mission site.

DARPA made the final field exercise exceptionally challenging by providing thousands of hazards and pieces of information to inform the search. “The complexity of the hazards was significant,” Adams says, noting that some hazards required multiple robots to interact with them simultaneously, and some hazards moved around the environment.

Throughout each mission shift, the human controller’s physiological responses to the tasks at hand were monitored. For example, sensors collected data on their heart-rate variability, posture, and even their speech rate. The data were input into an established algorithm that estimates workload levels and was used to determine when the controller was reaching a workload level that exceeded a normal range, called an “overload state.”
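The pipeline described above can be sketched as follows. This is a minimal illustration, not the study's actual model: the feature names, per-person baselines, weights, and the overload threshold are all assumptions made for the example.

```python
# Illustrative overload-state detector: physiological features are normalized
# against a per-person baseline, combined into one workload score, and
# flagged when the score leaves the normal range. All names, weights, and
# thresholds here are assumptions for illustration only.

def workload_score(features, baseline, weights):
    """Weighted sum of baseline-normalized physiological features."""
    return sum(
        weights[name] * (features[name] - baseline[name]) / baseline[name]
        for name in weights
    )

def overload_fraction(samples, baseline, weights, threshold=1.0):
    """Fraction of workload estimates exceeding the overload threshold."""
    flags = [workload_score(s, baseline, weights) > threshold for s in samples]
    return sum(flags) / len(flags)

# Hypothetical baseline and weights (lower heart-rate variability is
# treated as a sign of higher workload, hence the negative weight).
baseline = {"hrv": 60.0, "speech_rate": 2.0, "posture_sway": 1.0}
weights = {"hrv": -1.0, "speech_rate": 0.5, "posture_sway": 0.5}
samples = [
    {"hrv": 58.0, "speech_rate": 2.1, "posture_sway": 1.1},  # near baseline
    {"hrv": 30.0, "speech_rate": 4.0, "posture_sway": 2.5},  # overloaded
]
print(overload_fraction(samples, baseline, weights))  # 0.5
```

The reported 3 percent figure is exactly this kind of ratio, computed over every workload estimate collected across all mission shifts.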

Adams notes that, despite the complexity and large volume of robots to manage in this field exercise, the number and duration of overload state instances were relatively short—a handful of minutes during a mission shift. “The total percentage of estimated overload states was 3 percent of all workload estimates across all shifts for which we collected data,” she says.



The most common reasons for a human commander to reach an overload state were having to generate multiple new tactics or inspect which vehicles in the launch zone were available for deployment.

Adams notes that these findings suggest that—counter to past theories—the number of robots may be less influential on human swarm control performance than previously thought. Her team is exploring the other factors that may impact swarm control missions, such as other human limitations, system designs, and UAS designs, the results of which will potentially inform U.S. Federal Aviation Administration drone regulations, she says.




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
ICRA 2025: 19–23 May 2025, ATLANTA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

Are wheeled quadrupeds going to run out of crazy new ways to move anytime soon? Looks like maybe not.

[ Deep Robotics ]

A giant eye and tiny feet make this pipe inspection robot exceptionally cute, I think.

[ tmsuk ] via [ Robotstart ]

Agility seems to be one of the few humanoid companies talking seriously about safety.

[ Agility Robotics ]

A brain-computer interface, surgically placed in a research participant with tetraplegia (paralysis in all four limbs), provided an unprecedented level of control over a virtual quadcopter—just by thinking about moving their unresponsive fingers. In this video, you’ll see how the study participant controlled the virtual quadcopter, using their brain’s thought signals to move a virtual hand controller.

[ University of Michigan ]

Hair styling is a crucial aspect of personal grooming, significantly influenced by the appearance of front hair. While brushing is commonly used both to detangle hair and for styling purposes, existing research primarily focuses on robotic systems for detangling hair, with limited exploration into robotic hair styling. This research presents a novel robotic system designed to automatically adjust front hairstyles, with an emphasis on path planning for root-centric strand adjustment.

[ Paper ]

Thanks, Kento!

If I’m understanding this correctly, if you’re careful, it’s possible to introduce chaos into a blind juggling robot to switch synced juggling to alternate juggling.

[ ETH Zurich ]

Drones with beaks? Sure, why not.

[ GRVC ]

Check out this amazing demo preview video we shot in our offices here at OLogic prior to CES 2025. OLogic built this demo robot for MediaTek to show off all kinds of cool things running on a MediaTek Genio 700 processor. The robot is a Create3 base with a custom tower (similar to a TurtleBot) using a Pumpkin Genio 700 EVK, plus a lidar and an Orbbec Gemini 335 camera on it. The robot is running ROS 2 Nav and finds colored balls on the floor using an NVIDIA TAO model running on the Genio 700, adding them to the map so the robot can find them. You can direct the robot through RViz to go pick up a ball and move it to wherever you want on the map.

[ OLogic ]

We explore the potential of multimodal large language models (LLMs) for enabling autonomous trash pickup robots to identify objects characterized as trash in complex, context-dependent scenarios. By constructing evaluation datasets with human agreement annotations, we demonstrate that LLMs excel in visually clear cases with high human consensus, while performance is lower in ambiguous cases, reflecting human uncertainty. To validate real-world applicability, we integrate GPT-4o with an open vocabulary object detector and deploy it on a quadruped with a manipulator arm with ROS 2, showing that it is possible to use this information for autonomous trash pickup in practical settings.

[ University of Texas at Austin ]






Seabed observation plays a major role in safeguarding marine systems by keeping tabs on the species and habitats on the ocean floor at different depths. This is primarily done by underwater robots that use optical imaging to collect high-quality data that can be fed into environmental models, and complement the data obtained through sonar in large-scale ocean observations.

Different underwater robots have been trialed over the years, but many have struggled with performing near-seabed observations because they disturb the local seabed by destroying coral and disrupting the sediment. Gang Wang, from Harbin Engineering University in China, and his research team have recently developed a maneuverable underwater vehicle that is better suited to seabed operations: it floats above the seabed and uses a specially engineered propeller system to maneuver without disturbing the local environment. These robots could be used to better protect the seabed while studying it, and improve efforts to preserve marine biodiversity and explore for underwater resources such as minerals for EV batteries.

Many underwater robots are wheeled or legged, but “these robots face substantial challenges in rugged terrains where obstacles and slopes can impede their functionality,” says Wang. They can also damage coral reefs.

Floating robots don’t have this issue, but existing options disturb the sediment on the seabed because their thrusters create a downward current during ascension. The waves generated as the propeller’s wake directly hit the seafloor in most floating robots, which causes sediment to move in the immediate vicinity. In a similar way to dust blowing in front of your digital or smartphone camera, the particles moving through the water can obscure the view of the cameras on the robot and reduce the quality of the images it captures. “Addressing this issue was crucial for the functional success of our prototype and for increasing its acceptance among engineers,” says Wang.

Designing a Better Underwater Robot

After further investigation, Wang and the rest of the team found that the robot’s shape influences the local water resistance, or drag, even at low speeds. “During the design process, we configured the robot with two planes exhibiting significant differences in water resistance,” says Wang. This led to the researchers developing a robot with a flattened body and angling the thruster relative to the central axis. “We found that the robot’s shape and the thruster layout significantly influence its ascent speed,” says Wang.

Clockwise from left: relationship between rotational speed of the thruster and the resultant force and torque in the airframe coordinate system, overall structure of the robot, side view of the thruster arrangement and main electronics components. Gang Wang, Kaixin Liu et al.

The researchers created a navigational system where the thrusters generate a combined force that slants downwards but still allows the robot to ascend, changing the wake distribution during ascent so that it doesn’t disturb the sediment on the seafloor. “Flattening the robot’s body and angling the thruster relative to the central axis is a straightforward approach for most engineers, enhancing the potential for broader application of this design” in seabed monitoring, says Wang.
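The geometric idea—tilted thrusters whose horizontal components cancel, leaving a net vertical force while the wake is deflected away from the seafloor—can be sketched numerically. The tilt angle, thrust values, and 90-degree thruster spacing below are illustrative assumptions, not the paper's actual parameters.

```python
import math

# Sketch of the resultant force from four thrusters tilted away from the
# body's central (vertical) axis. With the thrusters spaced at 90-degree
# azimuths, the horizontal force components cancel and only a vertical net
# force remains, while each wake is angled away from straight down.
# Tilt angle and thrust magnitudes are illustrative assumptions.

def resultant_force(thrusts, tilt_deg):
    """Sum per-thruster force vectors (body frame, z up)."""
    tilt = math.radians(tilt_deg)
    fx = fy = fz = 0.0
    for i, t in enumerate(thrusts):
        azimuth = i * math.pi / 2              # thrusters at 90-degree spacing
        fx += t * math.sin(tilt) * math.cos(azimuth)
        fy += t * math.sin(tilt) * math.sin(azimuth)
        fz += t * math.cos(tilt)               # vertical component drives ascent
    return fx, fy, fz

fx, fy, fz = resultant_force([5.0, 5.0, 5.0, 5.0], tilt_deg=30.0)
# Horizontal components cancel to ~0; fz = 4 * 5 * cos(30 deg) ~ 17.3 N.
```

The design point is that the same net lift is available as with vertical thrusters (scaled by cos of the tilt), but the propeller wake no longer hits the seafloor directly beneath the vehicle.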

“By addressing the navigational concerns of floating robots, we aim to enhance the observational capabilities of underwater robots in near-seafloor environments,” says Wang. The vehicle was tested in a range of marine environments, including sandy areas, coral reefs, and sheer rock, to show its ability to minimally disturb sediments in multiple potential environments.

Alongside the structural design advancements, the team incorporated an angular acceleration feedback control to keep the robot as close to the seafloor as possible without actually hitting it—called bottoming out. They also developed external disturbance observation algorithms and designed a sensor layout structure that enables the robot to quickly recognize and resist external disturbances, as well as plot a path in real time. This approach allowed the new vehicle to travel along at only 20 centimeters above the seafloor without bottoming out.
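A minimal altitude-hold loop illustrates the bottoming-out problem the control system has to solve. This sketch uses a plain PD law on altitude error; the gains, dynamics, and time step are assumptions, and the paper's actual controller additionally uses angular-acceleration feedback and disturbance observers, which are omitted here.

```python
# Illustrative altitude-hold loop for hovering ~20 cm above the seafloor.
# Gains and dynamics are assumptions; "bottoming out" means the vehicle
# touches the seafloor (z <= 0), which the controller must avoid.

def simulate(target=0.2, z0=1.0, kp=8.0, kd=4.0, dt=0.02, steps=1500):
    """Semi-implicit Euler simulation of a PD-controlled vertical axis.

    Returns the final altitude, or None if the vehicle bottomed out.
    """
    z, vz = z0, 0.0
    for _ in range(steps):
        accel = kp * (target - z) - kd * vz  # PD law on altitude error
        vz += accel * dt
        z += vz * dt
        if z <= 0.0:                         # bottomed out: contact with seafloor
            return None
    return z

z = simulate()  # settles near the 0.2 m target with these gains
```

With critically damped-ish gains the altitude converges to the target without undershooting to the floor; a real near-seafloor controller must hold this margin under currents and wake disturbances, which is where the disturbance-rejection machinery comes in.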

By implementing this control, the robot was able to get close to the seafloor and improve the quality of the images it took by reducing light refraction and scattering caused by the water column. “Given the robot’s proximity to the seafloor, even brief periods of instability can lead to collisions with the bottom, and we have verified that the robot shows excellent resistance to strong disturbances,” says Wang.

Having shown that the new robot can approach the seafloor closely without disturbing the seabed or crashing, Wang says the team plans to use it to closely observe coral reefs. Coral reef monitoring currently relies on inefficient manual methods, so the robots could widen the areas that are observed, and do so more quickly.

Wang adds that “effective detection methods are lacking in deeper waters, particularly in the mid-light layer. We plan to improve the autonomy of the detection process to substitute divers in image collection, and facilitate the automatic identification and classification of coral reef species density to provide a more accurate and timely feedback on the health status of coral reefs.”






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today's videos!

Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it.

[ Unitree ]

This is just lovely.

[ Mimus CNK ]

There’s a lot to like about Grain Weevil as an effective unitasking robot, but what I really appreciate here is that the control system is just a remote and a camera slapped onto the top of the bin.

[ Grain Weevil ]

This video, “Robot arm picking your groceries like a real person,” has taught me that I am not a real person.

[ Extend Robotics ]

A robot walking like a human walking like what humans think a robot walking like a robot walks like.

And that was my favorite sentence of the week.

[ Engineai ]

For us, robots are tools to simplify life. But they should look friendly too, right? That’s why we added motorized antennas to Reachy, so it can show simple emotions—without a full personality. Plus, they match those expressive eyes O_o!

[ Pollen Robotics ]

So a thing that I have come to understand about ships with sails (thanks, Jack Aubrey!) is that sailing in the direction that the wind is coming from can be tricky. Turns out that having a boat with two fronts and no back makes this a lot easier.

[ Paper ] from [ 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics ] via [ IEEE Xplore ]

I’m Kento Kawaharazuka from the JSK Robotics Laboratory at the University of Tokyo. I’m writing to introduce our human-mimetic binaural hearing system on the musculoskeletal humanoid Musashi. The robot can perform 3D sound source localization using a human-like outer ear structure and an FPGA-based hearing system embedded within it.
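
The letter doesn’t include code, but the core of binaural localization, turning an interaural time difference into a bearing, can be sketched in a few lines. This is a generic far-field approximation, not the Musashi implementation:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def azimuth_from_tdoa(tdoa_s, mic_spacing_m):
    """Estimate the azimuth (radians) of a sound source from the
    time-difference-of-arrival between two microphones, using the
    far-field approximation sin(theta) = c * tdoa / d."""
    ratio = SPEED_OF_SOUND * tdoa_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical overshoot
    return math.asin(ratio)

# A source 30 degrees off-center, with "ears" 0.18 m apart:
tdoa = 0.18 * math.sin(math.radians(30)) / SPEED_OF_SOUND
print(round(math.degrees(azimuth_from_tdoa(tdoa, 0.18)), 1))  # → 30.0
```

With two sensors only the azimuth is recoverable; a 3D fix like Musashi’s needs more microphones or the direction-dependent filtering that an outer-ear structure provides.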

[ Paper ]

Thanks, Kento!

The third CYBATHLON took place in Zurich on 25–27 October 2024. The CYBATHLON is a competition in which people with impairments use novel robotic technologies to perform activities of daily living. It was invented and initiated by Prof. Robert Riener at ETH Zurich, Switzerland. Races were held in eight disciplines, including arm and leg prostheses, exoskeletons, powered wheelchairs, brain-computer interfaces, robot assistance, vision assistance, and functional electrical stimulation bikes.

[ Cybathlon ]

Thanks, Robert!

If you’re going to work on robot dogs, I’m honestly not sure whether Purina would be the most or least appropriate place to do that.

[ Michigan Robotics ]





In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience.

Deepfakes, Misinformation, and Scams

The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic, and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles can be just as persuasive as traditional propaganda, or even more so, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.

Even more alarmingly, children and teenagers are forming emotional attachments to AI agents, and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots.

In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and disclosure of AI-generated content. In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans.

Therefore, we must make an addition to Asimov’s laws.

  • Fourth Law: A robot or AI must not deceive a human by impersonating a human being.

The Way Toward Trusted AI

We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.

Implementation of this Fourth Law would require:

  • Mandatory AI disclosure in direct interactions,
  • Clear labeling of AI-generated content,
  • Technical standards for AI identification,
  • Legal frameworks for enforcement,
  • Educational initiatives to improve AI literacy.
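
As a toy illustration of the first two requirements, disclosure could be as simple as carrying a machine-readable provenance record with every piece of content. The field names below are hypothetical, not drawn from any existing standard:

```python
def label_ai_content(text, model_name, human_edited=False):
    """Attach a minimal machine-readable disclosure record to content.
    The schema here is illustrative, not a real labeling standard."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model_name,
            "human_edited": human_edited,
        },
    }

def requires_disclosure(record):
    """Content must be labeled as AI-generated unless a human has
    significantly edited and adapted it."""
    p = record.get("provenance", {})
    return p.get("ai_generated", False) and not p.get("human_edited", False)

post = label_ai_content("Example paragraph...", "demo-model")
print(requires_disclosure(post))  # → True
```

The hard part is not attaching such a record but making it tamper-evident and universally honored, which is exactly what the watermarking research mentioned below is trying to solve.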

Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.

But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.

Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.





Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

I’m not totally sure yet about the utility of having a small arm on a robot vacuum, but I love that this is a real thing. At least, it is at CES this year.

[ Roborock ]

We posted about SwitchBot’s new modular home robot system earlier this week, but here’s a new video showing some potentially useful hardware combinations.

[ SwitchBot ]

Yes, it’s in sim, but (and this is a relatively new thing) I will not be shocked to see this happen on Unitree’s hardware in the near future.

[ Unitree ]

With ongoing advancements in system engineering, LimX Dynamics’ full-size humanoid robot features a hollow actuator design and high-torque-density actuators, enabling full-body balance for a wide range of motion. Now it achieves complex full-body movements in an ultra-stable and dynamic manner.

[ LimX Dynamics ]

We’ve seen hybrid quadrotor bipeds before, but this one, which imitates the hopping behavior of Jacana birds, is pretty cute.

What’s a Jacana bird, you ask? It’s these things, which surely must have the most extreme foot-to-body ratio of any bird:

Also, much respect to the researchers for confidently titling this supplementary video “An Extremely Elegant Jump.”

[ SSRN Paper preprint ]

Twelve minutes flat from suitcase to mobile manipulator. Not bad!

[ Pollen Robotics ]

Happy New Year from Dusty Robotics!

[ Dusty Robotics ]





Back in the day, the defining characteristic of home-cleaning robots was that they’d randomly bounce around your floor as part of their cleaning process, because the technology required to localize and map an area hadn’t yet trickled down into the consumer space. That all changed in 2010, when home robots started using lidar (and other things) to track their location and optimize how they cleaned.

Consumer pool-cleaning robots are lagging about 15 years behind indoor robots on this, for a couple of reasons. First, most pool robots—different from automatic pool cleaners, which are purely mechanical systems that are driven by water pressure—have been tethered to an outlet for power, meaning that maximizing efficiency is less of a concern. And second, 3D underwater localization is a much different (and arguably more difficult) problem to solve than 2D indoor localization was. But pool robots are catching up, and at CES this week, Wybot introduced an untethered robot that uses ultrasound to generate a 3D map for fast, efficient pool cleaning. And it’s solar powered and self-emptying, too.

Underwater localization and navigation is not an easy problem for any robot. Private pools are certainly privileged operating environments, with a reasonable amount of structure and predictability, at least if everything is working the way it should. But the lighting is always going to be a challenge, between bright sunlight, deep shadow, wave reflections, and occasionally murky water if the pool chemicals aren’t balanced very well. That makes relying on any light-based localization system iffy at best, and so Wybot has gone old school, with ultrasound.

Wybot Brings Ultrasound Back to Bots

Ultrasound used to be a very common way for mobile robots to navigate. You may (or may not) remember venerable robots like the Pioneer 3, with those big ultrasonic sensors across its front. As cameras and lidar got cheap and reliable, the messiness of ultrasonic sensors fell out of favor, but sound is still ideal for underwater applications where anything that relies on light may struggle.


The Wybot S3 uses 12 ultrasonic sensors, plus motor encoders and an inertial measurement unit, to map residential pools in three dimensions. “We had to choose the ultrasonic sensors very carefully,” explains Felix (Huo) Feng, the CTO of Wybot. “Actually, we use multiple different sensors, and we compute time of flight [of the sonar pulses] to calculate distance.” The positional accuracy of the resulting map is about 10 centimeters, which is totally fine for the robot to get its job done, although Feng says that they’re actively working to improve the map’s resolution. For path-planning purposes, the 3D map gets deconstructed into a series of 2D maps, since the robot needs to clean the bottom of the pool, stairs and ledges, and also the sides of the pool.
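
The time-of-flight arithmetic Feng describes is simple in principle: sound travels at roughly 1,480 meters per second in fresh water, and a sonar echo covers the distance twice. A minimal sketch, with illustrative constants rather than anything from Wybot’s firmware:

```python
SPEED_OF_SOUND_WATER = 1482.0  # m/s in fresh water at ~20 degrees C (approximate)

def distance_from_tof(round_trip_s, c=SPEED_OF_SOUND_WATER):
    """Convert a sonar pulse's round-trip time of flight into a one-way
    distance: the pulse travels out to the pool wall and back, so halve it."""
    return c * round_trip_s / 2.0

# A 4-millisecond round trip corresponds to roughly 3 meters of water:
print(round(distance_from_tof(0.004), 2))  # → 2.96
```

The hard engineering is everything around this equation: detecting the echo cleanly, rejecting multipath reflections off walls and the surface, and fusing 12 such ranges with the encoders and IMU into a consistent map.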

Efficiency is particularly important for the S3 because its charging dock has enough solar panels on the top of it to provide about 90 minutes of runtime for the robot over the course of an optimally sunny day. If your pool isn’t too big, that means the robot can clean it daily without requiring a power connection to the dock. The dock also sucks debris out of the collection bin on the robot itself, and Wybot suggests that the S3 can go for up to a month of cleaning without the dock overflowing.

The S3 has a camera on the front, which is used primarily to identify and prioritize dirtier areas (through AI, of course) that need focused cleaning. At some point in the future, Wybot may be able to use vision for navigation too, but my guess is that for reliable 24/7 navigation, ultrasound will still be necessary.

One other interesting little tidbit is the communication system. The dock can talk to your Wi-Fi, of course, and then talk to the robot while it’s charging. Once the robot goes off for a swim, however, traditional wireless signals won’t work, but the dock has its own sonar that can talk to the robot at several bytes per second. This isn’t going to get you streaming video from the robot’s camera, but it’s enough to let you steer the robot if you want, or ask it to come back to the dock, get battery status updates, and similar sorts of things.
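
At a few bytes per second, every status message has to be tiny. A hypothetical 4-byte frame (my own illustrative layout, not Wybot’s actual protocol) shows how much still fits in that budget:

```python
import struct

# Hypothetical fixed-size status frame for a link that moves only a few
# bytes per second: a state code, battery percentage, and a depth
# reading in centimeters. Big-endian: 1 + 1 + 2 = 4 bytes total.
FRAME = struct.Struct(">BBH")

def encode_status(state, battery_pct, depth_cm):
    """Pack a status update into a 4-byte frame for the acoustic link."""
    return FRAME.pack(state, battery_pct, depth_cm)

def decode_status(frame):
    """Unpack a 4-byte frame back into (state, battery_pct, depth_cm)."""
    return FRAME.unpack(frame)

frame = encode_status(2, 87, 145)
print(len(frame), decode_status(frame))  # → 4 (2, 87, 145)
```

At, say, 5 bytes per second, a frame like this arrives in under a second, which is plenty for battery updates and return-to-dock commands, and hopeless for video, exactly as described above.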

The Wybot S3 will go on sale in Q2 of this year for a staggering US $2,999, but that’s how it always works: The first time a new technology shows up in the consumer space, it’s inevitably at a premium. Give it time, though, and my guess is that the ability to navigate and self-empty will become standard features in pool robots. But as far as I know, Wybot got there first.







Autonomous systems, particularly fleets of drones and other unmanned vehicles, face increasing risks as their complexity grows. Despite advancements, existing testing frameworks fall short in addressing end-to-end security, resilience, and safety in zero-trust environments. The Secure Systems Research Center (SSRC) at TII has developed a rigorous, holistic testing framework to systematically evaluate the performance and security of these systems at each stage of development. This approach ensures secure, resilient, and safe operations for autonomous systems, from individual components to fleet-wide interactions.





Earlier this year, we reviewed the SwitchBot S10, a vacuuming and wet mopping robot that uses a water-integrated docking system to autonomously manage both clean and dirty water for you. It’s a pretty clever solution, and we appreciated that SwitchBot was willing to try something a little different.

At CES this week, SwitchBot introduced the K20+ Pro, a little autonomous vacuum that can integrate with a bunch of different accessories by pulling them around on a backpack cart of sorts. The K20+ Pro is SwitchBot’s latest effort to explore what’s possible with mobile home robots.

SwitchBot’s small vacuum can transport different payloads on top. SwitchBot

What we’re looking at here is a “mini” robotic vacuum (it’s about 25 centimeters in diameter) that does everything a robotic vacuum does nowadays: It uses lidar to make a map of your house so that you can direct it where to go, it’s got a dock to empty itself and recharge, and so on. The mini robotic vacuum is attached to a wheeled platform that SwitchBot is calling a “FusionPlatform” that sits on top of the robot like a hat. The vacuum docks to this platform, and then the platform will go wherever the robot goes. This entire system (robot, dock, and platform) is the “K20+ Pro multitasking household robot.”

SwitchBot refers to the K20+ Pro as a “smart delivery assistant,” because you can put stuff on the FusionPlatform and the K20+ Pro will move that stuff around your house for you. This really doesn’t do it justice, though, because the platform is much more than just a passive mobile cart. It also can provide power to a bunch of different accessories, all of which benefit from autonomous mobility:

The SwitchBot can carry a variety of payloads, including custom payloads. SwitchBot

From left to right, you’re looking at an air circulation fan, a tablet stand, a vacuum and charging dock and an air purifier and security camera (and a stick vacuum for some reason), and lastly just the air purifier and security setup. You can also add and remove different bits, like if you want the fan along with the security camera, just plop the security camera down on the platform base in front of the fan and you’re good to go.

This basic concept is somewhat similar to Amazon’s Proteus robot, in the sense that you can have one smart powered base that moves around a bunch of less smart and unpowered payloads by driving underneath them and then carrying them around. But SwitchBot’s payloads aren’t just passive cargo, and the base can provide them with a useful amount of power.

A power port allows you to develop your own payloads for the robot. SwitchBot

SwitchBot is actively encouraging users “to create, adapt, and personalize the robot for a wide variety of innovative applications,” which may include “3D-printed components [or] third-party devices with multiple power ports for speakers, car fridges, or even UV sterilization lamps,” according to the press release. The maximum payload is only 8 kilograms, though, so don’t get too crazy.

Several SwitchBots can make bath time much more enjoyable. SwitchBot

What we all want to know is when someone will put an arm on this thing, and SwitchBot is of course already working on this:

SwitchBot’s mobile manipulator is still in the lab stage. SwitchBot

The arm is still “in the lab stage,” SwitchBot says, which I’m guessing means that the hardware is functional but that getting it to reliably do useful stuff with the arm is still a work in progress. But that’s okay—getting an arm to reliably do useful stuff is a work in progress for all of robotics, pretty much. And if SwitchBot can manage to produce an affordable mobile manipulation platform for consumers that even sort of works, that’ll be very impressive.




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

It’s me. But we can all relate to this child android robot struggling to stay awake.

[ Osaka University ]

For 2025, the RoboCup SPL plans an interesting new technical challenge: Kicking a rolling ball! The velocity and start position of the ball can vary and the goal is to kick the ball straight and far. In this video, we show our results from our first testing session.

[ Team B-Human ]

When you think of a prosthetic hand you probably think of something similar to Luke Skywalker’s robotic hand from Star Wars, or even Furiosa’s multi-fingered claw from Mad Max. The reality is a far cry from these fictional hands: upper limb prostheses are generally very limited in what they can do, and how we can control them to do it. In this project, we investigate non-humanoid prosthetic hand design, exploring a new ideology for the design of upper limb prostheses that encourages alternative approaches to prosthetic hands. In this wider, more open design space, can we surpass humanoid prosthetic hands?

[ Imperial College London ]

Thanks, Digby!

A novel three-dimensional (3D) Minimally Actuated Serial Robot (MASR), actuated by a robotic motor. The robotic motor is composed of a mobility motor (to advance along the links) and an actuation motor [to] move the joints.

[ Zarrouk Lab ]

This year, the Franka Robotics team hit the road, the skies, and the digital space to share ideas, showcase our cutting-edge technology, and connect with the brightest minds in robotics across the globe. Here is our 2024 video recap, capturing the events and collaborations that made this year unforgettable!

[ Franka Robotics ]

Aldebaran has sold an astonishing number of robots this year.

[ Aldebaran ]

The advancement of modern robotics starts at its foundation: the gearboxes. Ailos aims to define how these industries operate with increased precision, efficiency and versatility. By innovating gearbox technology across diverse fields, Ailos is catalyzing the transition towards the next wave of automation, productivity and agility.

[ Ailos Robotics ]

Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning. This pipeline enables drones to adapt their flying speed according to the environmental complexity. After minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance.

[ MAVRL via GitHub ]

Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers.

[ Personal Robotics Lab ]

A drone’s eye-view reminder that fireworks explode in 3D.

[ Team BlackSheep ]
