Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

At ICRA 2024, Spectrum editor Evan Ackerman sat down with Unitree Founder and CEO Xingxing Wang and Tony Yang, VP of Business Development, to talk about the company’s newest humanoid, the G1 model.

[ Unitree ]

SACRIFICE YOUR BODY FOR THE ROBOT

[ WVUIRL ]

From navigating uneven terrain outside the lab to pure vision perception, GR-1 continues to push the boundaries of what’s possible.

[ Fourier ]

Aerial manipulation has gained interest for completing high-altitude tasks that are challenging for human workers, such as contact inspection and defect detection. This letter addresses a more general and dynamic task: simultaneously tracking time-varying contact force and motion trajectories on tangential surfaces. We demonstrate the approach on an aerial calligraphy task using a novel sponge pen design as the end-effector.

[ CMU ]

LimX Dynamics’ biped robot P1 was kicked and hit: faced with random impacts in a crowd, the newly designed P1 once again showcased exceptional stability as a mobility platform.

[ LimX Dynamics ]

Thanks, Ou Yan!

This is from ICRA 2018, but it holds up pretty well in the novelty department.

[ SNU INRoL ]

I think someone needs to crank the humor setting up on this one.

[ Deep Robotics ]

The paper summarizes the work at the Micro Air Vehicle Laboratory on end-to-end neural control of quadcopters. A major challenge in bringing these controllers to life is the “reality gap” between the real platform and the training environment. To address this, we combine online identification of the reality gap with pre-trained corrections through a deep neural controller, which is orders of magnitude more efficient than traditional computation of the optimal solution.

[ MAVLab ]
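
The idea in MAVLab’s description can be pictured with a toy version: a controller tuned against a nominal thrust model flies a “real” platform with a different coefficient, while an online estimator closes the reality gap in flight. The sketch below is an invented 1-D illustration, with plain recursive least squares and a PD policy standing in for the pretrained deep neural controller; it is not MAVLab’s code.

```python
# Toy "reality gap" correction: the controller believes C_NOMINAL, the real
# platform delivers C_REAL, and scalar recursive least squares identifies the
# true coefficient online. All numbers are invented for illustration.
import numpy as np

G = 9.81
C_NOMINAL = 4.0   # thrust coefficient assumed during training
C_REAL = 3.2      # what the real platform actually delivers (the "gap")

def plant_accel(u: float) -> float:
    """Real platform: vertical acceleration for motor command u."""
    return C_REAL * u - G

class OnlineGapEstimator:
    """Scalar recursive least squares for the thrust coefficient."""
    def __init__(self, c0: float, p0: float = 1.0):
        self.c, self.p = c0, p0

    def update(self, u: float, accel_measured: float) -> None:
        # Model: accel = c*u - G, so the regressor is u.
        err = accel_measured - (self.c * u - G)
        k = self.p * u / (1.0 + u * self.p * u)
        self.c += k * err
        self.p *= 1.0 - k * u

est = OnlineGapEstimator(C_NOMINAL)
z, vz, z_ref, dt = 0.0, 0.0, 1.0, 0.02
for _ in range(500):
    a_des = 6.0 * (z_ref - z) - 4.0 * vz     # simple PD altitude policy
    u = (a_des + G) / est.c                  # invert the *current* model
    a = plant_accel(u)                       # what the real platform does
    est.update(u, a)                         # shrink the reality gap online
    vz += a * dt
    z += vz * dt
print(f"final altitude {z:.3f} m, estimated coefficient {est.c:.2f} (true {C_REAL})")
```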

This is a dedicated Track Actuator from HEBI Robotics. Why they didn’t just call it a “tracktuator” is beyond me.

[ HEBI Robotics ]

Menteebot can navigate complex environments by combining a 3D model of the world with a dynamic obstacle map. On the first day in a new location, Menteebot generates the 3D model by following a person who shows the robot around.

[ Mentee Robotics ]

Here’s that drone with a 68 kg payload and 70 km range you’ve always wanted.

[ Malloy ]

AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on a novel cable-driven structure, it is designed to be both strong and stable.

[ NAVER Labs ]

As quadrotors take on an increasingly diverse range of roles, researchers often need to develop new hardware platforms tailored for specific tasks, introducing significant engineering overhead. In this article, we introduce the UniQuad series, a unified and versatile quadrotor hardware platform series that offers high flexibility to adapt to a wide range of common tasks, excellent customizability for advanced demands, and easy maintenance in case of crashes.

[ HKUST ]

The video demonstrates the field testing of a 43 kg (95 lb) amphibious cycloidal propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The vehicle utilizes a combination of cycloidal propellers (or cyclo-propellers), screw propellers, and tank treads for operations on land and underwater.

[ TAMU ]

The “pill” (the package hook) on Wing’s delivery drones is a crucial component of our aircraft! Did you know our package hook is designed to be aerodynamic and to maintain stable flight characteristics, even at 65 mph?

[ Wing ]

Happy 50th to robotics at ABB!

[ ABB ]

This JHU Center for Functional Anatomy & Evolution seminar is by Chen Li, on “Terradynamics of Animals & Robots in Complex Terrain.”

[ JHU ]



Food prep is one of those problems that seems like it should be solvable by robots. It’s a predictable, repetitive, basic manipulation task in a semi-structured environment—seems ideal, right? And obviously there’s a huge need, because human labor is expensive and getting harder and harder to find in these contexts. There are currently over a million unfilled jobs in the food industry in the United States, and even with jobs that are filled, the annual turnover rate is 150 percent (meaning a lot of workers don’t even last a year).

Food prep seems like a great opportunity for robots, which is why Chef Robotics and a handful of other robotics companies tackled it a couple years ago by bringing robots to fast casual restaurants like Chipotle or Sweetgreen, where you get served a custom-ish meal from a selection of ingredients at a counter.

But this didn’t really work out, for a couple of reasons. First, tasks that are mostly effortless for humans are inevitably extremely difficult for robots. And second, humans actually do a lot of useful things in a restaurant context besides just putting food onto plates, and the robots weren’t up for all of those things.

Still, Chef Robotics founder and CEO Rajat Bhageria wasn’t ready to let this opportunity go. “The food market is arguably the biggest market that’s tractable for AI today,” he told IEEE Spectrum. And with a bit of a pivot away from the complicated mess of fast casual restaurants, Chef Robotics has still managed to prepare over 20 million meals thanks to autonomous robot arms deployed all over North America. Without knowing it, you may even have eaten such a meal.

“The hard thing is, can you pick fast? Can you pick consistently? Can you pick the right portion size without spilling? And can you pick without making it look like the food was picked by a machine?” —Rajat Bhageria, Chef Robotics

When we spoke with Bhageria, he explained that there are three basic tasks involved in prepared food production: prep (tasks like chopping ingredients), the actual cooking process, and then assembly (or plating). Of these tasks, prep scales pretty well with industrial automation in that you can usually order pre-chopped or mixed ingredients, and cooking also scales well since you can cook more with only a minimal increase in effort just by using a bigger pot or pan or oven. What doesn’t scale well is the assembly, especially when any kind of flexibility or variety is required. You can clearly see this in action at any fast casual restaurant, where a couple of people are in the kitchen cooking up massive amounts of food while each customer gets served one at a time.

So with that bottleneck identified, let’s throw some robots at the problem, right? And that’s exactly what Chef Robotics did, explains Bhageria: “we went to our customers, who said that their biggest pain point was labor, and the most labor is in assembly, so we said, we can help you solve this.”

Chef Robotics started with fast casual restaurants. They weren’t the first to try this—many other robotics companies had attempted this before, with decidedly mixed results. “We actually had some good success in the early days selling to fast casual chains,” Bhageria says, “but then we had some technical obstacles. Essentially, if we want to have a human-equivalent system so that we can charge a human-equivalent service fee for our robot, we need to be able to do every ingredient. You’re either a full human equivalent, or our customers told us it wouldn’t be useful.”

Part of the challenge is that training robots to perform all of the different manipulations required for different assembly tasks requires different kinds of real-world data. That data simply doesn’t exist—or, if it does, any company that has it knows what it’s worth and isn’t sharing. You can’t easily simulate this kind of data, because food can be gross and difficult to handle, whether it’s gloopy or gloppy or squishy or slimy or unpredictably deformable in some other way, and you really need physical experience to train a useful manipulation model.

Setting fast casual restaurants aside for a moment, what about food prep situations where things are as predictable as possible, like mass-produced meals? We’re talking about food like frozen dinners, which have a handful of discrete ingredients packed into trays at factory scale. Frozen meal production relies on automation rather than robotics because the scale is such that the cost of dedicated equipment can be justified.

There’s a middle ground, though, where robots have found (some) opportunity: when you need to produce a high volume of the same meal, but that meal changes regularly. For example, think of any kind of pre-packaged meal that’s made in bulk, just not at frozen-food scale. It’s an opportunity for automation in a structured environment—but with enough variety that dedicated, fixed automation isn’t cost-effective. Suddenly, robots and their tiny bit of flexible automation have a chance to be a practical solution.

“We saw these long assembly lines, where humans were scooping food out of big tubs and onto individual trays,” Bhageria says. “They do a lot of different meals on these lines; it’s going to change over and they’re going to do different meals throughout the week. But at any given moment, each person is doing one ingredient, and maybe on a weekly basis, that person would do six ingredients. This was really compelling for us because six ingredients is something we can bootstrap in a lab. We can get something good enough and if we can get something good enough, then we can ship a robot, and if we can ship a robot to production, then we will get real world training data.”

Chef Robotics has been deploying robot modules that they can slot into existing food assembly lines in place of humans without any retrofitting necessary. The modules consist of six-degree-of-freedom arms wearing swanky IP67 washable suits. To handle different kinds of food, the robots can be equipped with a variety of different utensils (and their accompanying manipulation software strategies). Sensing includes a few depth cameras, as well as a weight-sensing platform for the food tray to ensure consistent amounts of food are picked. And while arms with six degrees of freedom may be overkill for now, the hope is that eventually they’ll be able to handle more complex foods like asparagus, where you need to do a little bit more than just scoop.
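
That weight-sensing platform suggests a simple closed loop: scoop, weigh, and size the next scoop to the remaining deficit. Here is a minimal sketch of that idea; the Scale and Arm classes, the 2.2 g/mm deposit rate, and the retry budget are invented stand-ins for illustration, not Chef Robotics’ API.

```python
# Weight-feedback portioning sketch: keep scooping until the tray weight is
# within tolerance of the target portion. All parameters are assumptions.
import random

class Scale:
    """Weight-sensing platform under the tray being filled."""
    def __init__(self) -> None:
        self.grams = 0.0

    def read(self) -> float:
        return self.grams

class Arm:
    """Toy arm: deeper scoops deposit more food, with food-dependent noise."""
    def __init__(self, scale: Scale) -> None:
        self.scale = scale

    def scoop(self, depth_mm: float) -> None:
        self.scale.grams += max(0.0, random.gauss(2.2 * depth_mm, 8.0))

def pick_portion(arm: Arm, scale: Scale, target_g: float,
                 tol_g: float = 10.0, max_tries: int = 4) -> float:
    """Scoop until the tray weight is within tolerance of the target."""
    for _ in range(max_tries):
        deficit = target_g - scale.read()
        if deficit <= tol_g:
            break  # close enough (or overshot; a real system would flag that)
        # Aim the next scoop at the remaining deficit, clamped to the utensil's range.
        depth_mm = min(40.0, max(5.0, deficit / 2.2))
        arm.scoop(depth_mm)
    return scale.read()

scale = Scale()
served = pick_portion(Arm(scale), scale, target_g=120.0)
print(f"served {served:.0f} g (target 120 g)")
```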

While Chef Robotics seems to have a viable business here, Bhageria tells us that he keeps coming back to that vision of robots being useful in fast casual restaurants and, eventually, robots making us food in our homes. Making that happen will require time, experience, technical expertise, and an astonishing amount of real-world training data, which is the real value behind those 20 million robot-prepared meals (and counting). The more robots the company deploys, the more data they collect, which will allow them to train their food manipulation models to handle a wider variety of ingredients, opening up even more deployments. Their robots, Chef’s website says, “essentially act as data ingestion engines to improve our AI models.”

The next step is likely ghost kitchens where the environment is still somewhat controlled and human interaction isn’t necessary, followed by deployments in commercial kitchens more broadly. But even that won’t be enough for Bhageria, who wants robots that can take over from all of the drudgery in food service: “I’m really excited about this vision,” he says. “How do we deploy hundreds of millions of robots all over the world that allow humans to do what humans do best?”



Against all odds, Ukraine is still standing almost two and a half years after Russia’s massive 2022 invasion. Of course, hundreds of billions of dollars in Western support as well as Russian errors have helped immensely, but it would be a mistake to overlook Ukraine’s creative use of new technologies, particularly drones. While uncrewed aerial vehicles have grabbed most of the attention, it is naval drones that could be the key to bringing Russian president Vladimir Putin to the negotiating table.

These naval-drone operations in the Black Sea against Russian warships and other targets have been so successful that they are prompting fundamental reevaluations, in London, Paris, Washington, and elsewhere, of how drones will affect future naval operations. In August 2023, for example, the Pentagon launched the billion-dollar Replicator initiative to field air and naval drones (also called sea drones) on a massive scale. It’s widely believed that such drones could be used to help counter a Chinese invasion of Taiwan.

And yet Ukraine’s naval drones initiative grew out of necessity, not grand strategy. Early in the war, Russia’s Black Sea fleet launched cruise missiles into Ukraine and blockaded Odesa, effectively shutting down Ukraine’s exports of grain, metals, and manufactured goods. The missile strikes terrorized Ukrainian citizens and shut down the power grid, but Russia’s blockade was arguably more consequential, devastating Ukraine’s economy and creating food shortages from North Africa to the Middle East.

With its navy seized or sunk during the war’s opening days, Ukraine had few options to regain access to the sea. So Kyiv’s troops got creative. Lukashevich Ivan Volodymyrovych, a brigadier general in the Security Service of Ukraine, the country’s counterintelligence agency, proposed building a series of fast, uncrewed attack boats. In the summer of 2022, the service, which is known by the acronym SBU, began with a few prototype drones. These quickly led to a pair of naval drones that, when used with commercial satellite imagery, off-the-shelf uncrewed aircraft, and Starlink terminals, gave Ukrainian operators the means to sink or disable a third of Russia’s Black Sea Fleet, including the flagship Moskva and most of the fleet’s cruise-missile-equipped warships.

To protect their remaining vessels, Russian commanders relocated the Black Sea Fleet to Novorossiysk, 300 kilometers east of Crimea. This move sheltered the ships from Ukrainian drones and missiles, but it also put them too far away to threaten Ukrainian shipping or defend the Crimean Peninsula. Kyiv has exploited the opening by restoring trade routes and mounting sustained airborne and naval drone strikes against Russian bases on Crimea and the Kerch Strait Bridge connecting the peninsula with Russia.

How Maguras and Sea Babies Hunt and Attack

The first Ukrainian drone boats were cobbled together with parts from jet skis, motorboats, and off-the-shelf electronics. But within months, manufacturers working for the Ukrainian defense ministry and the SBU fielded several designs that proved their worth in combat, most notably the Magura V5 and the Sea Baby.

Carrying a 300-kilogram warhead, on par with that of a heavyweight torpedo, the Magura V5 is a hunter-killer antiship drone designed to work in swarms that confuse and overwhelm a ship’s defenses. Equipped with Starlink terminals, which connect to SpaceX’s Starlink satellites, and GPS, a group of about three to five Maguras likely moves autonomously to a location near the potential target. From there, operators can wait until conditions are right and then attack the target from multiple angles using remote control and video feeds from the vehicles.
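
At its core, the multi-angle attack is a geometry problem: fan the group across an arc of approach bearings so the defender cannot face every threat axis at once. The sketch below is invented illustration of that geometry only; the standoff distance, arc width, and everything else are assumptions, and none of this resembles actual Ukrainian software.

```python
# Assign each drone in a small group a distinct approach bearing and a
# holding waypoint fanned across an arc around the target. Illustrative only.
import math

def approach_waypoints(target, n_drones, standoff_km=5.0, spread_deg=120.0):
    """Return (bearing_deg, holding_waypoint) per drone, fanned across an arc.

    Bearing 0 is the mean attack axis; waypoints sit standoff_km from the
    target along each bearing, so strikes converge from multiple angles."""
    tx, ty = target
    step = spread_deg / max(n_drones - 1, 1)
    start = -spread_deg / 2.0
    plan = []
    for i in range(n_drones):
        bearing = start + i * step
        rad = math.radians(bearing)
        wp = (tx + standoff_km * math.sin(rad), ty - standoff_km * math.cos(rad))
        plan.append((bearing, wp))
    return plan

for bearing, wp in approach_waypoints(target=(0.0, 0.0), n_drones=4):
    print(f"bearing {bearing:+6.1f} deg -> hold at ({wp[0]:.2f}, {wp[1]:.2f}) km")
```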

A Ukrainian Magura V5 hunter-killer sea drone was demonstrated at an undisclosed location in Ukraine on 13 April 2024. The domed pod toward the bow, which can rotate from side to side, contains a thermal camera used for guidance and targeting. Valentyn Origrenko/Reuters/Redux

Larger than a Magura, the Sea Baby is a multipurpose vehicle that can carry about 800 kg of explosives, which is close to twice the payload of a Tomahawk cruise missile. A Sea Baby was used in 2023 to inflict substantial damage on the Kerch Strait Bridge. A more recent version carries a rocket launcher that Ukrainian troops plan to use against Russian forces along the Dnipro River, which flows through eastern Ukraine and has often formed the front line in that part of the country. Like a Magura, a Sea Baby is likely remotely controlled using Starlink and GPS. In addition to attack, it’s also equipped for surveillance and logistics.

Russia reduced the threat to its ships by moving them out of the region, but fixed targets like the Kerch Strait Bridge remain vulnerable to Ukrainian sea drones. To try to protect these structures from drone onslaughts, Russian commanders are taking a “kitchen sink” approach, submerging hulks around bridge supports, fielding more guns to shoot at incoming uncrewed vessels, and jamming GPS and Starlink around the Kerch Strait.

Ukrainian service members demonstrated the portable, ruggedized consoles used to remotely guide the Magura V5 naval drones in April 2024. Valentyn Origrenko/Reuters/Redux

While the war remains largely stalemated in the country’s north, Ukraine’s naval drones could yet force Russia into negotiations. The Crimean Peninsula was Moscow’s biggest prize from its decade-long assault on Ukraine. If the Kerch Bridge is severed and the Black Sea Fleet pushed back into Russian ports, Putin may need to end the fighting to regain control over Crimea.

Why the U.S. Navy Embraced the Swarm

Ukraine’s small, low-cost sea drones are offering a compelling view of future tactics and capabilities. But recent experiences elsewhere in the world are highlighting the limitations of drones for some crucial tasks. For protecting shipping from piracy or stopping trafficking and illegal fishing, for example, drones are less useful.

Before the Ukraine war, efforts by the U.S. Department of Defense to field surface sea drones focused mostly on large vehicles. In 2015, the Defense Advanced Research Projects Agency started, and the U.S. Navy later continued, a project that built two uncrewed surface vessels, called Sea Hunter and Sea Hawk. These were 130-tonne sea drones capable of roaming the oceans for up to 70 days while carrying payloads of thousands of pounds each. The point was to demonstrate the ability to detect, follow, and destroy submarines. The Navy and the Pentagon’s secretive Strategic Capabilities Office followed with the Ghost Fleet Overlord uncrewed vessel programs, which produced four larger prototypes designed to carry shipping-container-size payloads of missiles, sensors, or electronic countermeasures.

The U.S. Navy’s newly created Uncrewed Surface Vessel Division 1 (USVDIV-1) completed a deployment across the Pacific Ocean last year with four medium and large sea drones: Sea Hunter and Sea Hawk and two Overlord vessels, Ranger and Mariner. The five-month deployment from Port Hueneme, Calif., took the vessels to Hawaii, Japan, and Australia, where they joined in annual exercises conducted by U.S. and allied navies. The U.S. Navy continues to assess its drone fleet through sea trials lasting from several days to a few months.

The Sea Hawk is a U.S. Navy trimaran drone vessel designed to find, pursue, and attack submarines. The 130-tonne ship, photographed here in October of 2023 in Sydney Harbor, was built to operate autonomously on missions of up to 70 days, but it can also accommodate human observers on board. Ensign Pierson Hawkins/U.S. Navy

In contrast with Ukraine’s small sea drones, which are usually remotely controlled and operate outside shipping lanes, the U.S. Navy’s much larger uncrewed vessels have to follow the nautical rules of the road. To navigate autonomously, these big ships rely on robust onboard sensors, processing for computer vision and target-motion analysis, and automation based on predictable forms of artificial intelligence, such as expert- or agent-based algorithms rather than deep learning.
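
To make the “predictable AI” distinction concrete, here is a toy expert-system sketch: a few hand-written, COLREGS-flavored rules that classify an encounter by the contact’s relative bearing and return a maneuver. It is deliberately crude, with invented thresholds taken loosely from the rules’ sector boundaries; real collision-avoidance stacks track far more state. The point is only that rule-based logic is auditable in a way a deep network is not.

```python
# Toy rule-based (COLREGS-flavored) encounter classifier. Illustrative only.

def colregs_action(relative_bearing_deg: float, closing: bool) -> str:
    """Pick a maneuver from the contact's relative bearing (degrees,
    0 = dead ahead, positive to starboard) and whether range is closing."""
    if not closing:
        return "stand on: no risk of collision"
    # Normalize bearing to [-180, 180) degrees.
    b = (relative_bearing_deg + 180.0) % 360.0 - 180.0
    if abs(b) <= 6.0:
        return "head-on: alter course to starboard"
    if 6.0 < b <= 112.5:
        return "crossing, contact to starboard: give way (slow or turn starboard)"
    if -112.5 <= b < -6.0:
        return "crossing, contact to port: stand on, monitor"
    return "contact abaft the beam: it is overtaking us; hold course and speed"

print(colregs_action(3.0, closing=True))    # head-on case
print(colregs_action(45.0, closing=True))   # give-way crossing case
```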

But thanks to the success of the Ukrainian drones, the focus and energy in sea drones are rapidly moving to the smaller end of the scale. The U.S. Navy initially envisioned platforms like Sea Hunter conducting missions in submarine tracking, electronic deception, or clandestine surveillance far out at sea. And large drones will still be needed for such missions. However, with the right tactics and support, a group of small sea drones can conduct similar missions as well as other vital tasks.

For example, though they are constrained in speed, maneuverability, and power generation, solar- or sail-powered drones can stay out for months with little human intervention. The earliest of these are wave gliders like the Liquid Robotics (a Boeing company) SHARC, which has been conducting undersea and surface surveillance for the U.S. Navy for more than a decade. Newer designs like the Saildrone Voyager and Ocius Blue Bottle incorporate motors and additional solar or diesel power to haul payloads such as radars, jammers, decoys, or active sonars. The Ocean Aero Triton takes this model one step further: It can submerge, to conduct clandestine surveillance or a surprise attack, or to avoid detection.

The Triton, from Ocean Aero in Gulfport, Miss., is billed as the world’s only autonomous sea drone capable of both cruising underwater and sailing on the surface. Ocean Aero

Ukraine’s success in the Black Sea has also unleashed a flurry of new small antiship attack drones. USVDIV-1 will use the GARC from Maritime Applied Physics Corp. to develop tactics. The Pentagon’s Defense Innovation Unit has also begun purchasing drones for the China-focused Replicator initiative. Among the likely craft being evaluated are fast-attack sea drones from Austin, Texas–based Saronic.

Behind the soaring interest in small and inexpensive sea drones is the changing value proposition for naval drones. As recently as four years ago, military planners were focused on using them to replace crewed ships in “dull, dirty, and dangerous” jobs. But now, the thinking goes, sea drones can provide scale, adaptability, and resilience across each link in the “kill chain” that extends from detecting a target to hitting it with a weapon.

Today, to attack a ship, most navies generally have one preferred sensor (such as a radar system), one launcher, and one missile. But what these planners are now coming to appreciate is that a fleet of crewed surface ships with a collection of a dozen or two naval drones would offer multiple paths to both find that ship and attack it. These craft would also be less vulnerable, because of their dispersion.

Defending Taiwan by Surrounding It With a “Hellscape”

U.S. efforts to protect Taiwan may soon reflect this new value proposition. Many classified and unclassified war games suggest Taiwan and its allies could successfully defend the island—but at costs high enough to potentially dissuade a U.S. president from intervening on Taiwan’s behalf. With U.S. defense budgets capped by law and procurement constrained by rising personnel and maintenance costs, substantially growing or improving today’s U.S. military for this specific purpose is unrealistic. Instead, commanders are looking for creative solutions to slow or stop a Chinese invasion without losing most U.S. forces in the process.

Naval drones look like a good—and maybe the best—solution. The Taiwan Strait is only 160 kilometers (100 miles) wide, and Taiwan’s coastline offers only a few areas where large numbers of troops could come ashore. U.S. naval attack drones positioned on the likely routes could disrupt or possibly even halt a Chinese invasion, much as Ukrainian sea drones have denied Russia access to the western Black Sea and, for that matter, Houthi-controlled drones have sporadically closed off large parts of the Red Sea in the Middle East.

Rather than killer robots seeking out and destroying targets, the drones defending Taiwan would be passively waiting for Chinese forces to illegally enter a protected zone, within which they could be attacked.

The new U.S. Indo-Pacific Command leader, Admiral Sam Paparo, wants to apply this approach to defending Taiwan in a scenario he calls “Hellscape.” In it, U.S. surface and undersea drones would likely be based near Taiwan, perhaps in the Philippines or Japan. When the potential for an invasion rises, the drones would move themselves or be carried by larger uncrewed or crewed ships to the western coast of Taiwan to wait.

Sea drones are well-suited to this role, thanks in part to the evolution of naval technologies and tactics over the past half century. Until World War II, submarines were the most lethal threat to ships. But since the Cold War, long-range subsonic, supersonic, and now hypersonic antiship missiles have commanded navy leaders’ attention. They’ve spent decades devising ways to protect their ships against such antiship missiles.

Much less effort has gone into defending against torpedoes, mines—or sea drones. A dozen or more missiles might be needed to ensure that just one reaches a targeted ship, and even then, the damage may not be catastrophic. But a single surface or undersea drone could easily evade detection and explode at a ship’s waterline to sink it, because in this case, water pressure does most of the work.
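
The missile arithmetic is easy to check: if each missile independently gets past the defenses with probability p, the chance that at least one arrives is 1 − (1 − p)^n. With an assumed p of 0.2 (purely an illustrative number, not a sourced figure), a dozen missiles gives roughly a 93 percent chance of a single hit.

```python
# Back-of-envelope leaker arithmetic for a missile salvo of size n.
# p is an assumed per-missile probability of penetrating the defenses.
p = 0.2
for n in (4, 8, 12, 16):
    print(f"{n:2d} missiles -> P(at least one hit) = {1 - (1 - p) ** n:.2f}")
```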

The level of autonomy available in most sea drones today is more than enough to attack ships in the Taiwan Strait. Details of U.S. military plans are classified, but a recent Hudson Institute report that I wrote with Dan Patt proposes a possible approach. In it, a drone flotilla, consisting of about three dozen hunter-killer surface drones, two dozen uncrewed surface vessels carrying aerial drones, and three dozen autonomous undersea drones, would take up designated positions in a “kill box” adjacent to one of Taiwan’s western beaches if a Chinese invasion fleet had begun massing on the opposite side of the strait. Even if they were based in Japan or the Philippines, the drones could reach Taiwan within a day. Upon receiving a signal from operators remotely using Starlink or locally using a line-of-sight radio, the drones would act as a mobile minefield, attacking troop transports and their escorts inside Taiwan’s territorial waters. Widely available electro-optical and infrared sensors, coupled to recognition algorithms, would direct the drones to targets.
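
The “mobile minefield” behavior reduces to a gate with two conditions: operators have armed the zone, and the contact is inside the declared kill box. A minimal sketch of that engagement rule, with an invented polygon and a standard ray-casting containment test:

```python
# Passive kill-box engagement gate: never act unless armed AND the contact is
# inside the declared zone. Coordinates and the polygon are notional.

def point_in_polygon(pt, poly):
    """Ray-casting test: is pt=(x, y) inside the polygon (list of vertices)?"""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

KILL_BOX = [(0.0, 0.0), (20.0, 0.0), (20.0, 12.0), (0.0, 12.0)]  # km, notional

def should_engage(contact_xy, armed: bool) -> bool:
    # Passive by design: the drone waits; operators arm the zone.
    return armed and point_in_polygon(contact_xy, KILL_BOX)

print(should_engage((5.0, 6.0), armed=True))    # True: inside the box
print(should_engage((25.0, 6.0), armed=True))   # False: outside the box
```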

Although communications with operators onshore would likely be jammed, the drones could coordinate their actions locally using line-of-sight Internet Protocol–based networks like Silvus or TTNT. For example, surface vessels could launch aerial drones that would attack the pilot houses and radars of ships, while surface and undersea drones strike ships at the waterline. The drones could also coordinate to ensure they do not all strike the same target and to prioritize the largest targets first. These kinds of simple collaborations are routine in today’s drones.
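
The deconfliction described here can be as simple as a greedy assignment: sort contacts by estimated size, then hand each to the nearest unassigned drone, so no two drones chase the same hull and the largest targets are served first. A sketch with invented positions and tonnages:

```python
# Greedy largest-first target assignment for a local drone group. Notional data.
import math

def assign_targets(drones, targets):
    """drones: {name: (x, y)}; targets: {name: (x, y, tonnage)}.
    Returns {drone: target}, biggest targets assigned first."""
    free = dict(drones)
    plan = {}
    for tname, (tx, ty, _) in sorted(
            targets.items(), key=lambda kv: -kv[1][2]):  # biggest first
        if not free:
            break
        nearest = min(free, key=lambda d: math.hypot(free[d][0] - tx,
                                                     free[d][1] - ty))
        plan[nearest] = tname   # one drone per target, no duplicate strikes
        del free[nearest]
    return plan

drones = {"usv1": (0, 0), "usv2": (2, 0), "usv3": (4, 0)}
targets = {"transport": (3, 8, 25000), "escort": (1, 6, 7000)}
print(assign_targets(drones, targets))
```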

Treating drones like mines reduces the complexity needed in their control systems and helps them comply with Pentagon rules for autonomous weapons. Rather than killer robots seeking out and destroying targets, the drones defending Taiwan would be passively waiting for Chinese forces to illegally enter a protected zone, within which they could be attacked.

Like Russia’s Black Sea Fleet, the Chinese navy will develop countermeasures to sea drones, such as employing decoy ships, attacking drones from the air, or using minesweepers to move them away from the invasion fleet. To stay ahead, operators will need to continue innovating tactics and behaviors through frequent exercises and experiments, like those underway at U.S. Navy Unmanned Surface Vessel Squadron Three. (Like the USVDIV-1, it is a unit under the U.S. Navy’s Surface Development Squadron One.) Lessons from such exercises would be incorporated into the defending drones as part of their programming before a mission.

The emergence of sea drones heralds a new era in naval warfare. After decades of focusing on increasingly lethal antiship missiles, navies now have to defend against capable and widely proliferating threats on, above, and below the water. And while sea drone swarms may be mainly a concern for coastal areas, these choke points are critical to the global economy and most nations’ security. For U.S. and allied fleets, especially, naval drones are a classic combination of threat and opportunity. As the Hellscape concept suggests, uncrewed vessels may be a solution to some of the most challenging and sweeping of modern naval scenarios for the Pentagon and its allies—and their adversaries.

This article was updated on 10 July 2024. An earlier version stated that sea drones from Saronic Technologies are being purchased by the U.S. Department of Defense’s Defense Innovation Unit. This could not be publicly confirmed.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Figure is making progress toward a humanoid robot that can do something useful, but keep in mind that the “full use case” here is not one continuous shot.

[ Figure ]

Can this robot survive a 1-meter drop? Spoiler alert: it cannot.

[ WVUIRL ]

One of those things that’s a lot harder for robots than it probably looks.

This is a demo of hammering a nail. The instantaneous rebound force from the hammer is absorbed through a combination of the elasticity of the rubber material securing the hammer, the deflection in torque sensors and harmonic gears, back-drivability, and impedance control. This allows the nail to be driven with a certain amount of force.
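
For readers who haven’t met impedance control before, its core fits in a few lines. Below is the textbook joint-space form in Python, a sketch with arbitrary gains rather than Tokyo Robotics’ actual controller: the joint behaves like a virtual spring and damper, so a hammer rebound produces compliant motion instead of a rigid shock.

    # Textbook joint-space impedance law (illustrative gains, not the
    # demo's real controller).
    def impedance_torque(q, qd, q_des, K=40.0, D=4.0):
        """q, qd: measured joint position and velocity; q_des: commanded
        position; K, D: virtual stiffness and damping. Returns torque."""
        return K * (q_des - q) - D * qd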

[ Tokyo Robotics ]

Although bin packing has been a key benchmark task for robotic manipulation, the community has mainly focused on the placement of rigid rectilinear objects within the container. We address this by presenting a soft robotic hand that combines vision, motor-based proprioception, and soft tactile sensors to identify, sort, and pack a stream of unknown objects.

[ MIT CSAIL ]

Status Update: Extending traditional visual servo and compliant control by integrating the latest reinforcement and imitation learning control methodologies, UBTECH gradually trains the embodied intelligence-based “cerebellum” of its humanoid robot Walker S for diverse industrial manipulation tasks.

[ UBTECH ]

If you’re gonna ask a robot to stack bread, better make it flat.

[ FANUC ]

Cassie has to be one of the most distinctive sounding legged robots there is.

[ Paper ]

Twice the robots are by definition twice as capable, right...?

[ Pollen Robotics ]

The Robotic Systems Lab participated in the Advanced Industrial Robotic Applications (AIRA) Challenge at the ACHEMA 2024 process industry trade show, where teams demonstrated their teleoperated robotic solutions for industrial inspection tasks. We competed with the ALMA legged manipulator robot, teleoperated using a second robot arm in a leader-follower configuration, placing us in third place for the competition.

[ ETHZ RSL ]

This is apparently “peak demand” in a single market for Wing delivery drones.

[ Wing ]

Using a new type of surgical intervention and neuroprosthetic interface, MIT researchers, in collaboration with colleagues from Brigham and Women’s Hospital, have shown that a natural walking gait is achievable using a prosthetic leg fully driven by the body’s own nervous system. The surgical amputation procedure reconnects muscles in the residual limb, which allows patients to receive “proprioceptive” feedback about where their prosthetic limb is in space.

[ MIT ]

Coal mining in the Forest of Dean (UK) is a difficult and challenging job. Going into the mine as a human is sometimes almost impossible. We did it with our robot while inspecting the mine with our partners (Forestry England) and the local miners!

[ UCL RPL ]

Chill.

[ ABB ]

Would you tango with a robot? Inviting us into the fascinating world of dancing machines, robot choreographer Catie Cuan highlights why teaching robots to move with grace, intention and emotion is essential to creating AI-powered machines we will want to welcome into our daily lives.

[ TED ]



It may at times seem like there are as many humanoid robotics companies out there as the industry could possibly sustain, but the potential for useful and reliable and affordable humanoids is so huge that there’s plenty of room for any company that can actually get them to work. Joining the dozen or so companies already on this quest is Persona AI, founded last month by Nic Radford and Jerry Pratt, two people who know better than just about anyone what it takes to make a successful robotics company, although they also know enough to be wary of getting into commercial humanoids.


Persona AI may not be the first humanoid robotics startup, but its founders have some serious experience in the space:

Nic Radford led the team that developed NASA’s Valkyrie humanoid robot before founding Houston Mechatronics (now Nauticus Robotics), which introduced a transforming underwater robot in 2019. He also founded Jacobi Motors, which is commercializing variable flux electric motors.

Jerry Pratt worked on walking robots for 20 years at the Institute for Human and Machine Cognition (IHMC) in Pensacola, Florida. He co-founded Boardwalk Robotics in 2017, and has spent the last two years as CTO of multi-billion-dollar humanoid startup Figure.

“It took me a long time to warm up to this idea,” Nic Radford tells us. “After I left Nauticus in January, I didn’t want anything to do with humanoids, especially underwater humanoids, and I didn’t even want to hear the word ‘robot.’ But things are changing so quickly, and I got excited and called Jerry and I’m like, this is actually very possible.” Jerry Pratt, who recently left Figure due primarily to the two-body problem, seems to be coming from a similar place: “There’s a lot of bashing your head against the wall in robotics, and persistence is so important. Nic and I have both gone through pessimism phases with our robots over the years. We’re a bit more optimistic about the commercial aspects now, but we want to be pragmatic and realistic about things too.”

Behind all of the recent humanoid hype lies the very, very difficult problem of making a highly technical piece of hardware and software compete effectively with humans in the labor market. But that’s also a very, very big opportunity—big enough that Persona doesn’t have to be the first company in this space, or the best funded, or the highest profile. They simply have to succeed, but of course sustainable commercial success with any robot (and bipedal robots in particular) is anything but simple. Step one will be building a founding team across two locations: Houston and Pensacola, Fla. But Radford says that the response so far to just a couple of LinkedIn posts about Persona has been “tremendous.” And with a substantial seed investment in the works, Persona will have more than just a vision to attract top talent.

For more details about Persona, we spoke with Persona AI co-founders Nic Radford and Jerry Pratt.

Why start this company, why now, and why you?

Nic Radford

Nic Radford: The idea for this started a long time ago. Jerry and I have been working together off and on for quite a while, being in this field and sharing a love for what the humanoid potential is while at the same time being frustrated by where humanoids are at. As far back as probably 2008, we were thinking about starting a humanoids company, but for one reason or another the viability just wasn’t there. We were both recently searching for our next venture and we couldn’t imagine sitting this out completely, so we’re finally going to explore it, although we know better than anyone that robots are really hard. They’re not that hard to build, but they’re hard to make useful and make money with, and the challenge for us is whether we can build a viable business with Persona: can we build a business that uses robots and makes money? That’s our singular focus. We’re pretty sure that this is likely the best time in history to execute on that potential.

Jerry Pratt: I’ve been interested in commercializing humanoids for quite a while—thinking about it, and giving it a go here and there, but until recently it has always been the wrong time from both a commercial point of view and a technological readiness point of view. You can think back to the DARPA Robotics Challenge days when we had to wait about 20 seconds to get a good lidar scan and process it, which made it really challenging to do things autonomously. But we’ve gotten much, much better at perception, and now, we can get a whole perception pipeline to run at the framerate of our sensors. That’s probably the main enabling technology that’s happened over the last 10 years.

From the commercial point of view, now that we’re showing that this stuff’s feasible, there’s been a lot more pull from the industry side. It’s like we’re at the next stage of the Industrial Revolution, where the harder problems that weren’t roboticized from the 60s until now can now be. And so, there’s really good opportunities in a lot of different use cases.

A bunch of companies have started within the last few years, and several were even earlier than that. Are you concerned that you’re too late?

Radford: The concern is that we’re still too early! There might only be one Figure out there that raises a billion dollars, but I don’t think that’s going to be the case. There’s going to be multiple winners here, and if the market is as large as people claim it is, you could see quite a diversification of classes of commercial humanoid robots.

Jerry Pratt

Pratt: We definitely have some catching up to do but we should be able to do that pretty quickly, and I’d say most people really aren’t that far from the starting line at this point. There’s still a lot to do, but all the technology is here now—we know what it takes to put together a really good team and to build robots. We’re also going to do what we can to increase speed, like by starting with a surrogate robot from someone else to get the autonomy team going while building our own robot in parallel.

Radford: I also believe that our capital structure is a big deal. We’re taking an anti-stealth approach, and we want to bring everyone along with us as our company grows and give out a significant chunk of the company to early joiners. It was an anxiety of ours that we would be perceived as a me-too and that nobody was going to care, but it’s been the exact opposite with a compelling response from both investors and early potential team members.

So your approach here is not to look at all of these other humanoid robotics companies and try and do something they’re not, but instead to pursue similar goals in a similar way in a market where there’s room for all?

Pratt: All robotics companies, and AI companies in general, are standing on the shoulders of giants. These are the thousands of robotics and AI researchers that have been collectively bashing their heads against the myriad problems for decades—some of the first humanoids were walking at Waseda University in the late 1960s. While there are some secret sauces that we might bring to the table, it is really the combined efforts of the research community that now enables commercialization.

So if you’re at a point where you need something new to be invented in order to get to applications, then you’re in trouble, because with invention you never know how long it’s going to take. What is available today and now, the technology that’s been developed by various communities over the last 50+ years—we all have what we need for the first three applications that are widely mentioned: warehousing, manufacturing, and logistics. The big question is, what’s the fourth application? And the fifth and the sixth? And if you can start detecting those and planning for them, you can get a leg up on everybody else.

The difficulty is in the execution and integration. It’s a ten thousand—no, that’s probably too small—it’s a hundred thousand piece puzzle where you gotta get each piece right, and occasionally you lose some pieces on the floor that you just can’t find. So you need a broad team that has expertise in like 30 different disciplines to try to solve the challenge of an end-to-end labor solution with humanoid robots.

Radford: The idea is like one percent of starting a company. The rest of it, and why companies fail, is in the execution. Things like, not understanding the market and the product-market fit, or not understanding how to run the company, the dimensions of the actual business. I believe we’re different because with our backgrounds and our experience we bring a very strong view on execution, and that is our focus on day one. There’s enough interest in the VC community that we can fund this company with a singular focus on commercializing humanoids for a couple different verticals.

But listen, we got some novel ideas in actuation and other tricks up our sleeve that might be very compelling for this, but we don’t want to emphasize that aspect. I don’t think Persona’s ultimate success comes just from the tech component. I think it comes mostly from ‘do we understand the customer, the market needs, the business model, and can we avoid the mistakes of the past?’

How is that going to change things about the way that you run Persona?

Radford: I started a company [Houston Mechatronics] with a bunch of research engineers. They don’t make the best product managers. More broadly, if you’re staffing all your disciplines with roboticists and engineers, you’ll learn that it may not be the most efficient way to bring something to market. Yes, we need those skills. They are essential. But there’s so many other aspects of a business that get overlooked when you’re fundamentally a research lab trying to commercialize a robot. I’ve been there, I’ve done that, and I’m not interested in making that mistake again.

Pratt: It’s important to get a really good product team that’s working with a customer from day one to have customer needs drive all the engineering. The other approach is ‘build it and they will come’ but then maybe you don’t build the right thing. Of course, we want to build multi-purpose robots, and we’re steering clear of saying ‘general purpose’ at this point. We don’t want to overfit to any one application, but if we can get to a dozen use cases, two or three per customer site, then we’ve got something.

There still seem to be a couple of unsolved technical challenges with humanoids, including hands, batteries, and safety. How will Persona tackle those things?

Pratt: Hands are such a hard thing—getting a hand that has the required degrees of freedom and is robust enough that if you accidentally hit it against your table, you’re not just going to break all your fingers. But we’ve seen robotic hand companies popping up now that are showing videos of hitting their hands with a hammer, so I’m hopeful.

Getting one to two hours of battery life is relatively achievable. Pushing up towards five hours is super hard. But batteries can now be charged in 20 minutes or so, as long as you’re going from 20 percent to 80 percent. So we’re going to need a cadence where robots are swapping in and out and charging as they go. And batteries will keep getting better.
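
A quick back-of-the-envelope shows what that cadence implies for fleet sizing. With illustrative numbers of my own (not Persona’s), two hours of work per charge and a 20-minute top-up give a duty cycle of about 86 percent:

    # Illustrative swap-cadence arithmetic: 120 min of work per charge,
    # 20 min to recharge from 20 to 80 percent.
    work_min, charge_min = 120, 20
    duty_cycle = work_min / (work_min + charge_min)  # ~0.86
    robots_on_task = 10
    fleet_size = robots_on_task / duty_cycle         # ~11.7, so buy 12
    print(round(fleet_size, 1))

In other words, a site that needs 10 robots on task at all times would buy roughly 12.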

Radford: We do have a focus on safety. It was paramount at NASA, and when we were working on Robonaut, it led to a lot of morphological considerations with padding. In fact, the first concepts and images we have of our robot illustrate extensive padding, but we have to do that carefully, because at the end of the day it’s mass and it’s inertia.

What does the near future look like for you?

Pratt: Building the team is really important—getting those first 10 to 20 people over the next few months. Then we’ll want to get some hardware and get going really quickly, maybe buying a couple of robot arms or something to get our behavior and learning pipelines going while in parallel starting our own robot design. From our experience, after getting a good team together and starting from a clean sheet, a new robot takes about a year to design and build. And then during that period we’ll be securing a customer or two or three.

Radford: We’re also working hard on some very high profile partnerships that could influence our early thinking dramatically. Like Jerry said earlier, it’s a massive 100,000 piece puzzle, and we’re working on the fundamentals: the people, the cash, and the customers.



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

One of the (many) great things about robots is that they don’t have to be constrained by how their biological counterparts do things. If you have a particular problem your robot needs to solve, you can get creative with extra sensors: many quadrupeds have side cameras and butt cameras for obstacle avoidance, and humanoids sometimes have chest cameras and knee cameras to help with navigation along with wrist cameras for manipulation. But how far can you take this? I have no idea, but it seems like we haven’t gotten to the end of things yet because now there’s a quadruped with cameras on the bottom of its feet.

Sensorized feet are not a new idea; it’s pretty common for quadrupedal robots to have some kind of foot-mounted force sensor to detect ground contact. Putting an actual camera down there is fairly novel, though, because it’s not at all obvious how you’d go about doing it. And the way that roboticists from the Southern University of Science and Technology in Shenzhen went about doing it is, indeed, not at all obvious.

Go1’s snazzy feetsies have soles made of transparent acrylic, with a slightly flexible plastic structure supporting a 60-millimeter gap up to each camera (640x480 pixels at 120 frames per second), plus a quartet of LEDs to provide illumination. While it’s complicated-looking, at 120 grams it doesn’t weigh all that much, and it costs only about $50 per foot ($42 of which is the camera). The whole thing is sealed to keep out dirt and water.

So why bother with all of this (presumably somewhat fragile) complexity? As we ask quadruped robots to do more useful things in more challenging environments, having more information about what exactly they’re stepping on and how their feet are interacting with the ground is going to be super helpful. Robots that rely only on proprioceptive sensing (sensing self-movement) are great and all, but when you start trying to move over complex surfaces like sand, it can be really helpful to have vision that explicitly shows how your robot is interacting with the surface that it’s stepping on. Preliminary results showed that Foot Vision enabled the Go1 using it to perceive the flow of sand or soil around its foot as it takes a step, which can be used to estimate slippage, the bane of ground-contacting robots.
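
As a rough illustration of how a sole-mounted camera becomes a slip sensor, here is a sketch using dense optical flow; the paper’s actual pipeline may differ, and the OpenCV parameters below are generic defaults rather than the authors’ settings.

    import cv2
    import numpy as np

    # Sketch: dense optical flow between consecutive sole-camera frames
    # measures how sand or soil moves past the transparent sole; its mean
    # magnitude during stance is a proxy for foot slippage.
    def slip_estimate(prev_gray, curr_gray):
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)  # per-pixel flow, pixels/frame
        return float(magnitude.mean())            # larger = more substrate flow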

The researchers acknowledge that their hardware could use a bit of robustifying, and they also want to try adding some tread patterns around the circumference of the foot, since that plexiglass window is pretty slippery. The overall idea is to make Foot Vision as useful as the much more common gripper-integrated vision systems for robotic manipulation, helping legged robots make better decisions about how to get where they need to go.

“Foot Vision: A Vision-Based Multi-Functional Sensorized Foot for Quadruped Robots,” by Guowei Shi, Chen Yao, Xin Liu, Yuntian Zhao, Zheng Zhu, and Zhenzhong Jia from Southern University of Science and Technology in Shenzhen, is accepted to the July 2024 issue of IEEE Robotics and Automation Letters.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Agility has been working with GXO for a bit now, but the big news here (and it IS big news) is that Agility’s Digit robots at GXO now represent the first formal commercial deployment of humanoid robots.

[ GXO ]

GXO can’t seem to get enough humanoids, because they’re also starting some R&D with Apptronik.

[ GXO ]

In this paper, we introduce a full-stack system for humanoids to learn motion and autonomous skills from human data. Through shadowing, human operators can teleoperate humanoids to collect whole-body data for learning different tasks in the real world. Using the data collected, we then perform supervised behavior cloning to train skill policies using egocentric vision, allowing humanoids to complete different tasks autonomously by imitating human skills.

THAT FACE.

[ HumanPlus ]

Yeah these robots are impressive but it’s the sound effects that make it.

[ Deep Robotics ]

Meet CARMEN, short for Cognitively Assistive Robot for Motivation and Neurorehabilitation: a small, tabletop robot designed to help people with mild cognitive impairment (MCI) learn skills to improve memory, attention, and executive functioning at home.

[ CARMEN ] via [ UCSD ]

Thanks, Ioana!

The caption of this video is, “it did not work...”

You had one job, e-stop person! ONE JOB!

[ WVUIRL ]

This is a demo of cutting wood with a saw. When using position control for this task, precise measurement of the cutting amount is necessary. However, by using impedance control, this requirement is eliminated, allowing for successful cutting with only rough commands.
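
One way to picture why rough commands suffice: wrap the compliant controller in an admittance-style outer loop that advances the saw according to force error rather than a measured cut depth. The sketch below is my own illustration with made-up gains, not the demo’s code.

    # Admittance-style feed: advance the commanded depth in proportion to
    # the force error; the compliant inner loop turns this into a steady
    # cutting force with no depth measurement needed.
    def feed_step(depth_cmd, measured_force, f_target=15.0, gain=0.0005):
        """depth_cmd in meters, forces in newtons; returns the next depth
        command for the compliant controller to track."""
        return depth_cmd + gain * (f_target - measured_force)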

[ Tokyo Robotics ]

This is mesmerizing.

[ Oregon State ]

Quadrupeds are really starting to look like the new hotness in bipedal locomotion.

[ University of Leeds ]

I still think this is a great way of charging a robot. Make sure to watch until the end to see the detach trick.

[ YouTube ]

The Oasa R1, now on Kickstarter for $1,200, is the world’s first robotic lawn mower that uses one of them old timey reely things for cutting.

[ Kickstarter ]

ICRA next year is in Atlanta!

[ ICRA 2025 ]

Our Skunk Works team developed a modified version of the SR-71 Blackbird, titled the M-21, which carried an uncrewed reconnaissance drone called the D-21. The D-21 was designed to capture intelligence, release its camera, then self-destruct!

[ Lockheed Martin ]

The RPD 35 is a robotic powerhouse that surveys, distributes, and drives wide-flange solar piles up to 19 feet in length.

[ Built Robotics ]

Field AI’s brain technology is enabling robots to autonomously explore oil and gas facilities, navigating throughout the site and inspecting equipment for anomalies and hazardous conditions.

[ Field AI ]

Husky Observer was recently deployed at a busy automotive rail yard to carry out various autonomous inspection tasks, including measuring train car positions and collecting RFID data from the offloaded train inventory.

[ Clearpath ]

If you’re going to try to land a robot on the Moon, it’s useful to have a little bit of the Moon somewhere to practice on.

[ Astrobotic ]

Would you swallow a micro-robot? In a gutsy demo, physician Vivek Kumbhari navigates Pillbot, a wireless, disposable robot swallowed onstage by engineer Alex Luebke, modeling how this technology can swiftly provide direct visualization of internal organs. Learn more about how micro-robots could move us past the age of invasive endoscopies and open up doors to more comfortable, affordable medical imaging.

[ TED ]

How will AI improve our lives in the years to come? From its inception six decades ago to its recent exponential growth, futurist Ray Kurzweil highlights AI’s transformative impact on various fields and explains his prediction for the singularity: the point at which human intelligence merges with machine intelligence.

[ TED ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

We present Morphy, a novel compliant and morphologically aware flying robot that integrates sensorized flexible joints in its arms, thus enabling resilient collisions at high speeds and the ability to squeeze through openings narrower than its nominal dimensions.

Morphy represents a new class of soft flying robots that can facilitate unprecedented resilience through innovations both in the “body” and “brain.” The novel soft body can, in turn, enable new avenues for autonomy. Collisions that previously had to be avoided have now become acceptable risks, while areas that are untraversable for a certain robot size can now be negotiated through self-squeezing. These novel bodily interactions with the environment can give rise to new types of embodied intelligence.

[ ARL ]

Thanks, Kostas!

Segments of daily training for robots driven by reinforcement learning. Multiple tests are done in advance so the robots can serve humans safely. The training includes some extreme tests. Please do not imitate!

[ Unitree ]

Sphero is not only still around, it’s making new STEM robots!

[ Sphero ]

Googly eyes mitigate all robot failures.

[ WVUIRL ]

Here I am, without the ability or equipment (or desire) required to iron anything that I own, and Flexiv’s got robots out there ironing fancy leather car seats.

[ Flexiv ]

Thanks, Noah!

We unveiled a significant leap forward in perception technology for our humanoid robot GR-1. The newly adapted pure-vision solution integrates bird’s-eye view, transformer models, and an occupancy network for precise and efficient environmental perception.

[ Fourier ]

Thanks, Serin!

LimX Dynamics’ humanoid robot CL-1 was launched in December 2023. It climbed stairs using real-time terrain perception, taking two footsteps on each stair. Four months later, in April 2024, a second demo video showed CL-1 in the same scenario; it had advanced to climbing the same stairs with one footstep per stair.

[ LimX Dynamics ]

Thanks, Ou Yan!

New research from the University of Massachusetts Amherst shows that programming robots to create their own teams and voluntarily wait for their teammates results in faster task completion, with the potential to improve manufacturing, agriculture, and warehouse automation.

[ HCRL ] via [ UMass Amherst ]

Thanks, Julia!

LASDRA (Large-size Aerial Skeleton with Distributed Rotor Actuation), presented at ICRA 2018, is a scalable and modular aerial robot. It can assume a very slender, long, and dexterous form factor and is very lightweight.

[ SNU INRoL ]

We propose augmenting initially passive structures built from simple repeated cells, with novel active units to enable dynamic, shape-changing, and robotic applications. Inspired by metamaterials that can employ mechanisms, we build a framework that allows users to configure cells of this passive structure to allow it to perform complex tasks.

[ CMU ]

Testing autonomous exploration at the Exyn Office using Spot from Boston Dynamics. In this demo, Spot autonomously explores our flight space while on the hunt for one of our engineers.

[ Exyn ]

Meet Heavy Picker, the strongest robot in bulky-waste sorting and an absolute pro at lifting and sorting waste. With skills that would make a concert pianist jealous and a work ethic that never needs coffee breaks, Heavy Picker was on the lookout for new challenges.

[ Zen Robotics ]

AI is the biggest and most consequential business, financial, legal, technological, and cultural story of our time. In this panel, you will hear from the underrepresented community of women scientists who have been leading the AI revolution—from the beginning to now.

[ Stanford HAI ]



Insects have long been an inspiration for robots. The insect world is full of things that are tiny, fully autonomous, highly mobile, energy efficient, multimodal, self-repairing, and I could go on and on but you get the idea—insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

We’re definitely making progress, though. In a paper published last month in IEEE Robotics and Automation Letters, roboticists from Shanghai Jiao Tong University demonstrated the most bug-like robotic bug I think I’ve ever seen.

A Multi-Modal Tailless Flapping-Wing Robot [ YouTube ]

Okay so it may not look the most bug-like, but it can do many very buggy bug things, including crawling, taking off horizontally, flying around (with six degrees of freedom control), hovering, landing, and self-righting if necessary. JT-fly weighs about 35 grams and has a wingspan of 33 centimeters, using four wings at once to fly at up to 5 meters per second and six legs to scurry at 0.3 m/s. Its 380 milliampere-hour battery powers it for an actually somewhat useful 8-ish minutes of flying and about 60 minutes of crawling.
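
Taking those figures at face value, and optimistically assuming top speed throughout, the range budget works out like this:

    # Back-of-the-envelope from the published numbers: 8 min of flight at
    # up to 5 m/s, 60 min of crawling at 0.3 m/s.
    flight_range_m = 8 * 60 * 5.0   # 2,400 m of flying
    crawl_range_m = 60 * 60 * 0.3   # 1,080 m of crawling
    print(flight_range_m, crawl_range_m)

That is roughly 2.4 kilometers of flying or about 1.1 kilometers of crawling per charge, which fits the move-then-perch usage pattern described next.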

While that amount of endurance may not sound like a lot, robots like these aren’t necessarily intended to be moving continuously. Rather, they move a little bit, find a nice safe perch, and then do some sensing or whatever until you ask them to move to a new spot. Ideally, most of that movement would be crawling, but having the option to fly makes JT-fly exponentially more useful.

Or, potentially more useful, because obviously this is still very much a research project. It does seem like there’s a bunch more optimization that could be done here; for example, JT-fly uses completely separate systems for flying and crawling, with two motors powering the legs, two additional motors powering the wings, and two wing servos for control. There’s currently a limited amount of onboard sensing and autonomy: an inertial measurement unit, a barometer, and wireless communication, but otherwise not much in the way of useful payload.

Insects are both an inspiration and a source of frustration to roboticists because it’s so hard to get robots to have anywhere close to insect capability.

It won’t surprise you to learn that the researchers have disaster relief applications in mind for this robot, suggesting that “after natural disasters such as earthquakes and mudslides, roads and buildings will be severely damaged, and in these scenarios, JT-fly can rely on its flight ability to quickly deploy into the mission area.” One day, robots like these will actually be deployed for disaster relief, and although that day is not today, we’re just a little bit closer than we were before.

“A Multi-Modal Tailless Flapping-Wing Robot Capable of Flying, Crawling, Self-Righting and Horizontal Takeoff,” by Chaofeng Wu, Yiming Xiao, Jiaxin Zhao, Jiawang Mou, Feng Cui, and Wu Liu from Shanghai Jiao Tong University, is published in the May issue of IEEE Robotics and Automation Letters.



In this study, we address the critical need for enhanced situational awareness and victim-detection capabilities in search and rescue (SAR) operations amid disasters. Traditional unmanned ground vehicles (UGVs) often struggle in such chaotic environments due to their limited maneuverability and the challenge of distinguishing victims from debris. Recognizing these gaps, our research introduces a technological framework that integrates advanced gesture recognition with deep learning for camera-based victim identification, specifically designed to empower UGVs in disaster scenarios.

At the core of our methodology is the Meerkat Optimization Algorithm-tuned Stacked Convolutional Neural Network with Bidirectional Long Short-Term Memory and Gated Recurrent Unit (MOA-SConv-Bi-LSTM-GRU) model, which sets a new benchmark for hand-gesture detection, with accuracy, precision, recall, and F1-score all approximately 0.9866. This model enables intuitive, real-time control of UGVs through hand gestures, allowing for precise navigation in confined and obstacle-ridden spaces, which is vital for effective SAR operations. We also leverage the YOLOv8 deep learning model, trained on specialized datasets, to accurately detect human victims under a wide range of challenging conditions, such as varying occlusions, lighting, and perspectives.

Comprehensive testing in simulated emergency scenarios validates the effectiveness of our integrated approach. The system navigated through obstructions and rapidly located victims, even in environments with visual impairments such as smoke, clutter, and poor lighting. The findings underscore the potential of this framework to significantly improve UGV performance in disaster scenarios, where time is of the essence, and pave the way for more efficient and reliable rescue operations.
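
The paper’s exact architecture and weights aren’t public, so purely as a hypothetical sketch of the SConv-Bi-LSTM-GRU portion (omitting the Meerkat Optimization Algorithm, which in the paper tunes hyperparameters), a gesture classifier over hand-landmark sequences might look like the following. The 42-channel input (21 hand keypoints × 2 coordinates), window length, class count, and layer widths are all our assumptions:

```python
# Hypothetical sketch of a stacked-Conv + BiLSTM + GRU gesture classifier
# in the spirit of the MOA-SConv-Bi-LSTM-GRU model described above.
# All sizes are illustrative; the MOA hyperparameter search is omitted.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, in_channels=42, num_classes=6):
        super().__init__()
        # Stacked 1-D convolutions extract local motion features.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        # BiLSTM captures temporal context in both directions.
        self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        # GRU condenses the bidirectional features into one summary state.
        self.gru = nn.GRU(128, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):        # x: (batch, channels, time)
        f = self.conv(x)         # (batch, 64, time)
        f = f.transpose(1, 2)    # (batch, time, 64) for the RNNs
        f, _ = self.bilstm(f)    # (batch, time, 128)
        _, h = self.gru(f)       # final hidden state: (1, batch, 64)
        return self.head(h[-1])  # (batch, num_classes) gesture logits

logits = GestureNet()(torch.randn(8, 42, 64))  # smoke test
print(logits.shape)  # torch.Size([8, 6])
```

For the victim-detection stage, the stock YOLOv8 models load directly from the ultralytics package (for example, YOLO("yolov8n.pt")); the authors report fine-tuning on their own specialized datasets rather than using off-the-shelf weights.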
