IEEE Spectrum Automation


In the 1960s and 1970s, NASA spent a lot of time thinking about whether toroidal (donut-shaped) fuel tanks were the way to go with its spacecraft. Toroidal tanks have a bunch of potential advantages over conventional spherical fuel tanks. For example, you can fit nearly 40% more volume within a toroidal tank than if you were using multiple spherical tanks within the same space. And perhaps most interestingly, you can shove stuff (like the back of an engine) through the middle of a toroidal tank, which could lead to some substantial efficiency gains if the tanks could also handle structural loads.
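That volume claim is easy to sanity-check with a simplified geometric comparison (this is my own back-of-the-envelope sketch, not NASA's analysis, and the tank dimensions below are hypothetical). A torus of major radius R and tube radius r holds 2π²Rr² of propellant; the alternative in the same annular envelope is a ring of non-overlapping spheres of the same radius:

```python
import math

def torus_volume(R, r):
    """Volume of a torus with major radius R and tube (minor) radius r."""
    return 2 * math.pi**2 * R * r**2

def ring_of_spheres_volume(R, r):
    """Total volume of the largest ring of equal, non-overlapping spheres
    of radius r whose centers lie on the same circle of radius R."""
    # Adjacent sphere centers must be at least 2r apart along the ring,
    # i.e. separated by an angle of at least 2*asin(r/R).
    n = math.floor(math.pi / math.asin(r / R))
    return n * (4.0 / 3.0) * math.pi * r**3

# Hypothetical tank: 2.5 m major radius, 1 m tube radius.
R, r = 2.5, 1.0
extra = torus_volume(R, r) / ring_of_spheres_volume(R, r) - 1
print(f"Torus holds {extra:.0%} more than the ring of spheres")
```

For this idealized geometry the torus comes out roughly 68 percent ahead; a real tank gives up some of that advantage to walls, baffles, and plumbing, which is why the practical figure quoted is more conservative.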

Because of their relatively complex shape, toroidal tanks are much more difficult to make than spherical tanks. Even though these tanks can perform better, NASA simply doesn’t have the expertise to manufacture them anymore, since each one has to be hand-built by highly skilled humans. But a company called Machina Labs thinks that they can do this with robots instead. And their vision is to completely change how we make things out of metal.

The fundamental problem that Machina Labs is trying to solve is that building metal parts at scale is slow and capital-intensive. Large metal parts need their own custom dies, which are very expensive one-offs that are about as inflexible as it’s possible to get, and entire factories are built around those dies. It’s a huge investment, which means that it doesn’t matter if you find some new geometry or technique or material or market: you have to justify the enormous up-front cost by making as much of the original thing as you possibly can, stifling the potential for rapid and flexible innovation.

At the other end of the spectrum is the also slow and expensive process of making metal parts one at a time by hand. A few hundred years ago, this was the only way of making metal parts: skilled metalworkers using hand tools for months to make things like armor and weapons. The nice thing about an expert metalworker is that they can use their skills and experience to make anything at all, which is where Machina Labs’ vision comes from, explains CEO Edward Mehr, who co-founded Machina Labs after spending time at SpaceX and then leading the 3D printing team at Relativity Space.

“Craftsmen can pick up different tools and apply them creatively to metal to do all kinds of different things. One day they can pick up a hammer and form a shield out of a sheet of metal,” says Mehr. “Next, they pick up the same hammer, and create a sword out of a metal rod. They’re very flexible.”

The technique that a human metalworker uses to shape metal is called forging, which preserves the grain flow of the metal as it’s worked. Parts that are cast, stamped, or milled (which are all ways of automating metal part production) are simply not as strong or as durable as parts that are forged, which can be an important differentiator for (say) things that have to go into space. But more on that in a bit.

The problem with human metalworkers is that the throughput is bad—humans are slow, and highly skilled humans in particular don’t scale well. For Mehr and Machina Labs, this is where the robots come in.

“We want to automate and scale using a platform called the ‘robotic craftsman.’ Our core enablers are robots that give us the kinematics of a human craftsman, and artificial intelligence that gives us control over the process,” Mehr says. “The concept is that we can do any process that a human craftsman can do, and actually some that humans can’t do because we can apply more force with better accuracy.”

This flexibility that robot metalworkers offer also enables the crafting of bespoke parts that would be impractical to make any other way, including the toroidal fuel tanks that NASA has had its eye on for the past half century.

Machina Labs’ CEO Edward Mehr (right) stands behind a 15-foot toroidal fuel tank. Machina Labs

“The main challenge of these tanks is that the geometry is complex,” Mehr says. “Sixty years ago, NASA was bump-forming them with very skilled craftspeople, but a lot of them aren’t around anymore.” Mehr explains that the only other way to get that geometry is with dies, but for NASA, getting a die made for a fuel tank that’s necessarily been customized for one single spacecraft would be pretty much impossible to justify. “So one of the main reasons we’re not using toroidal tanks is because it’s just hard to make them.”

Machina Labs is now making toroidal tanks for NASA. For the moment, the robots are just doing the shaping, which is the tough part; humans then weld the pieces together. But there’s no reason why the robots couldn’t handle the entire process end to end, perhaps even more efficiently. Currently, they’re doing it the “human” way based on existing plans from NASA. “In the future,” Mehr tells us, “we can actually form these tanks in one or two pieces. That’s the next area that we’re exploring with NASA—how can we do things differently now that we don’t need to design around human ergonomics?”

Machina Labs’ ‘robotic craftsmen’ work in pairs to shape sheet metal, with one robot on each side of the sheet. The robots align their tools slightly offset from each other with the metal between them such that as the robots move across the sheet, it bends between the tools. Machina Labs

The video above shows Machina’s robots working on a tank that’s 4.572 m (15 feet) in diameter, likely destined for the Moon. “The main application is for lunar landers,” says Mehr. “The toroidal tanks bring the center of gravity of the vehicle lower than what you would have with spherical or pill-shaped tanks.”

Training these robots to work metal like this is done primarily through physics-based simulations that Machina developed in house (existing software being too slow), followed by human-guided iterations based on the resulting real-world data. The way that metal moves under pressure can be simulated pretty well, and although there’s certainly still a sim-to-real gap (simulating how the robot’s tool adheres to the surface of the material is particularly tricky), the robots are collecting so much empirical data that Machina is making substantial progress towards full autonomy, and even finding ways to improve the process.

An example of the kind of complex metal parts that Machina’s robots are able to make. Machina Labs

Ultimately, Machina wants to use robots to produce all kinds of metal parts. On the commercial side, they’re exploring things like car body panels, offering the option to change how your car looks in geometry rather than just color. The requirement for a couple of beefy robots to make this work means that roboforming is unlikely to become as pervasive as 3D printing, but the broader concept is the same: making physical objects a software problem rather than a hardware problem to enable customization at scale.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, Rotterdam, Netherlands
IROS 2024: 14–18 October 2024, Abu Dhabi, UAE
ICSR 2024: 23–26 October 2024, Odense, Denmark
Cybathlon 2024: 25–27 October 2024, Zurich

Enjoy today’s videos!

I think it’s time for us all to admit that some of the most interesting bipedal and humanoid research is being done by Disney.

[ Research Paper from ETH Zurich and Disney Research ]

Over the past few months, the Unitree G1 robot has been upgraded into a mass-production version, with stronger performance, a more refined appearance, and a design more in line with mass-production requirements.

[ Unitree ]

This robot is from Kinisi Robotics, which was founded by Brennand Pierce, who also founded Bear Robotics. You can’t really tell from this video, but check out the website because the reach this robot has is bonkers.

Kinisi Robotics is on a mission to democratize access to advanced robotics with our latest innovation—a low-cost, dual-arm robot designed for warehouses, factories, and supermarkets. What sets our robot apart is its integration of LLM technology, enabling it to learn from demonstrations and perform complex tasks with minimal setup. Leveraging Brennand’s extensive experience in scaling robotic solutions, we’re able to produce this robot for under $20k, making it a game-changer in the industry.

[ Kinisi Robotics ]

Thanks Bren!

Finally, something that Atlas does that I am also physically capable of doing. In theory.

Okay, never mind. I don’t have those hips.

[ Boston Dynamics ]

Researchers in the Department of Mechanical Engineering at Carnegie Mellon University have created the first legged robot of its size to run, turn, push loads, and climb miniature stairs.

They say it can “run,” but I’m skeptical that there’s a flight phase unless someone sneezes nearby.

[ Carnegie Mellon University ]

The lights are cool and all, but it’s the pulsing soft skin that’s squigging me out.

[ Paper, Robotics Reports Vol.2 ]

Roofing is a difficult and dangerous enough job that it would be great if robots could take it over. It’ll be a challenge though.

[ Renovate Robotics ] via [ TechCrunch ]

Kento Kawaharazuka from JSK Robotics Laboratory at the University of Tokyo wrote in to share this paper, just accepted at RA-L, which (among other things) shows a robot using its flexible hands to identify objects through random finger motion.

[ Paper accepted by IEEE Robotics and Automation Letters ]

Thanks Kento!

It’s one thing to make robots that are reliable, and it’s another to make robots that are reliable and repairable by the end user. I don’t think iRobot gets enough credit for this.

[ iRobot ]

I like competitions where they say, “just relax and forget about the competition and show us what you can do.”

[ MBZIRC Maritime Grand Challenge ]

I kid you not, this used to be my job.

[ RoboHike ]



Boardwalk Robotics is announcing its entry into the increasingly crowded commercial humanoid(ish) space with Alex, a “workforce transformation” humanoid upper torso designed to work in manufacturing, logistics, and maintenance.

Before we get into Alex, let me take just a minute here to straighten out how Boardwalk Robotics is related to IHMC, the Institute for Human Machine Cognition in Pensacola, Florida. IHMC is, I think it’s fair to say, somewhat legendary when it comes to bipedal robotics—its DARPA Robotics Challenge team took second place in the final event (using a Boston Dynamics DRC Atlas), and when NASA needed someone to teach the agency’s Valkyrie humanoid to walk better, they sent it to IHMC.

Boardwalk, which was founded in 2017, has been a commercial partner with IHMC when it comes to the actual building of robots. The most visible example of this to date has been IHMC’s Nadia humanoid, a research platform which Boardwalk collaborated on and built. There’s obviously a lot of crossover between IHMC and Boardwalk in terms of institutional knowledge and experience, but Alex is a commercial robot developed entirely in-house by Boardwalk.

“We’ve used Nadia to learn a lot in the realm of dynamic locomotion research, and we’re taking all that and sticking it into a manipulation platform that’s ready for commercial work,” says Brandon Shrewsbury, Boardwalk Robotics’ CTO. “With Alex, we’re focusing on the manipulation side first, getting that well established. And then picking the mobility to match the task.”

The first thing you’ll notice about Alex is that it doesn’t have legs, at least for now. Boardwalk’s theory is that for a humanoid to be practical and cost effective in the near term, legs aren’t necessary, and that there are many tasks that offer a good return on investment where a stationary pedestal or a glorified autonomous mobile robotic base would be totally fine.

“There are going to be some problem sets that require legs, but there are many problem sets that don’t,” says Robert Griffin, a technical advisor at Boardwalk. “And there aren’t very many problem sets that don’t require halfway decent manipulation capabilities. So if we can design the manipulation well from the beginning, then we won’t have to depend on legs for making a robot that’s functionally useful.”

It certainly helps that Boardwalk isn’t at all worried about developing legs: “Every time we bring up a new humanoid, it’s something like twice as fast as the previous time,” Griffin says. This will be the eighth humanoid that IHMC has been involved in bringing up—I’d tell you more about all eight of those humanoids, but some of them are so secret that even I don’t know anything about them. Legs are definitely on the roadmap, but they’re not done yet, and IHMC will have a hand in their development to speed things along: It turns out that already having access to a functional (top of the line, really) locomotion stack is a big head start.

Alex’s actuators are all designed in-house, and the next version will feature new grippers that allow for quicker tool changes. Boardwalk Robotics

The humanoid space is wide open right now, and competition isn’t really an issue. But looking ahead, Boardwalk sees safety as one of its primary differentiators, since Alex isn’t starting out with legs, says Shrewsbury. “For a full humanoid, there’s no way to make that completely safe. If it falls, it’s going to faceplant.” By keeping Alex on a stable base, it can work closer to humans and potentially move its arms much faster while also preserving a dynamic safety zone.

Alex is available for researchers to purchase immediately. Boardwalk Robotics

Despite its upbringing in research, Alex is not intended to be a research robot. You can buy it for research purposes, if you want, but Boardwalk will be selling Alex as a commercial robot. At the moment, Boardwalk is conducting pilot programs with Alex, working in partnership with select customers, with the eventual goal of transitioning to a service model. The first few sectors that Boardwalk is targeting include logistics (because of course) and food processing, although as Boardwalk CEO Michael Morin notes, one of the very first pilots is (appropriately enough) in aviation.

Morin, who helped to commercialize Barrett Technology’s WAM Arm before spending some time at Vicarious Surgical as that company went public, joined Boardwalk to help them turn good engineering into a good product, which is arguably the hardest part of making useful robots (besides all the other hardest parts). “A lot of these companies are just learning about humanoids for the first time,” says Morin. “That makes the customer journey longer. But we’re putting in the effort to educate them on how this could be implemented in their world.”

If you want an Alex of your very own, Boardwalk is currently selecting commercial partners for a few more pilots. And for researchers, the robot is available right now.




The title of this video is “Silly Robot Dog Jump” and that’s probably more than you need to know.

[ Deep Robotics ]

It’ll be great when robots are reliably autonomous, but until they get there, collaborative capabilities are a must.

[ Robust AI ]

I am so INCREDIBLY EXCITED for this.

[ IIT Instituto Italiano di Tecnologia ]

In this three-minute one-take video, the LimX Dynamics CL-1 takes on the challenge of continuously loading heavy objects among shelves in a simulated warehouse, showcasing the advantages of the general-purpose form factor of humanoid robots.

[ LimX Dynamics ]

Birds, bats and many insects can tuck their wings against their bodies when at rest and deploy them to power flight. Whereas birds and bats use well-developed pectoral and wing muscles, how insects control their wing deployment and retraction remains unclear because this varies among insect species. Here we demonstrate that rhinoceros beetles can effortlessly deploy their hindwings without necessitating muscular activity. We validated the hypothesis using a flapping microrobot that passively deployed its wings for stable, controlled flight and retracted them neatly upon landing, demonstrating a simple, yet effective, approach to the design of insect-like flying micromachines.

[ Nature ]

Agility Robotics’ CTO, Pras Velagapudi, talks about data collection, and specifically about the different kinds we collect from our real-world robot deployments and generally what that data is used for.

[ Agility Robotics ]

Robots that try really hard but are bad at things are utterly charming.

[ University of Tokyo JSK Lab ]

The DARPA Triage Challenge unsurprisingly has a bunch of robots in it.

[ DARPA ]

The Cobalt security robot has been around for a while, but I have to say, the design really holds up—it’s a good looking robot.

[ Cobalt AI ]

All robots that enter elevators should be programmed to gently sway back and forth to the elevator music. Even if there’s no elevator music.

[ Somatic ]

ABB Robotics and the Texas Children’s Hospital have developed a groundbreaking lab automation solution using ABB’s YuMi® cobot to transfer fruit flies (Drosophila melanogaster) used in the study for developing new drugs for neurological conditions such as Alzheimer’s, Huntington’s and Parkinson’s.

[ ABB ]

Extend Robotics is building embodied AI that enables highly flexible automation for real-world physical tasks. The system features an intuitive immersive interface for teleoperation, supervision, and training AI models.

[ Extend Robotics ]

The recorded livestream of RSS 2024 is now online, in case you missed anything.

[ RSS 2024 ]




At ICRA 2024, in Yokohama last May, we sat down with the director of Shadow Robot, Rich Walker, to talk about the journey toward developing its newest model. Designed for reinforcement learning, the hand is extremely rugged, has three fingers that act like thumbs, and has fingertips that are highly sensitive to touch.

[ IEEE Spectrum ]

Food Angel is a food delivery robot to help with the problems of food insecurity and homelessness. Utilizing autonomous wheeled robots for this application may seem to be a good approach, especially with a number of successful commercial robotic delivery services. However, besides technical considerations such as range, payload, operation time, autonomy, etc., there are a number of important aspects that still need to be investigated, such as how the general public and the receiving end may feel about using robots for such applications, or human-robot interaction issues such as how to communicate the intent of the robot to the homeless.

[ RoMeLa ]

The UKRI FLF RoboHike team, from the Robot Perception and Learning lab at UCL Computer Science, working with Forestry England, demonstrates the ANYmal robot helping to preserve the cultural heritage of a historic mine in the Forest of Dean, Gloucestershire, UK.

This clip is from a reboot of the British TV show “Time Team.” If you’re not already a fan of “Time Team,” let me just say that it is one of the greatest retro reality TV shows ever made, where actual archaeologists wander around the United Kingdom and dig stuff up. If they can find anything. Which they often can’t. And also it has Tony Robinson (from “Blackadder”), who runs everywhere for some reason. Go to Time Team Classics on YouTube for 70+ archived episodes.

[ UCL RPL ]

The UBTECH humanoid robot Walker S Lite has been working in Zeekr’s intelligent factory for 21 consecutive days, completing handling tasks at the loading workstation and assisting employees with logistics work.

[ UBTECH ]

Current visual navigation systems often treat the environment as static, lacking the ability to adaptively interact with obstacles. This limitation leads to navigation failure when encountering unavoidable obstructions. In response, we introduce IN-Sight, a novel approach to self-supervised path planning, enabling more effective navigation strategies through interaction with obstacles.

[ ETH Zurich paper / IROS 2024 ]

When working on autonomous cars, sometimes it’s best to start small.

[ University of Pennsylvania ]

MIT MechE researchers introduce an approach called SimPLE (Simulation to Pick Localize and placE), a method of precise kitting, or pick and place, in which a robot learns to pick, regrasp, and place objects using the object’s computer-aided design (CAD) model, and all without any prior experience or encounters with the specific objects.

[ MIT ]

Staff, students (and quadruped robots!) from UCL Computer Science wish the Great Britain athletes the best of luck this summer in the Olympic Games & Paralympics.

[ UCL Robotics Institute ]

Walking in tall grass can be hard for robots, because they can’t see the ground that they’re actually stepping on. Here’s a technique to solve that, published in Robotics and Automation Letters last year.

[ ETH Zurich Robotic Systems Lab ]

There is no such thing as excess batter on a corn dog, and there is also no such thing as a defective donut. And apparently, making Kool-Aid drink pouches is harder than it looks.

[ Oxipital AI ]

Unitree has open-sourced its software to teleoperate humanoids in VR for training-data collection.

[ Unitree / GitHub ]

Nothing more satisfying than seeing point-cloud segments wiggle themselves into place, and CSIRO’s Wildcat SLAM does this better than anyone.

[ IEEE Transactions on Robotics ]

A lecture by Mentee Robotics CEO Lior Wolf, on Mentee’s AI approach.

[ Mentee Robotics ]



Today, Figure is introducing the newest, slimmest, shiniest, and least creatively named next generation of its humanoid robot: Figure 02. According to the press release, Figure 02 is the result of “a ground-up hardware and software redesign” and is “the highest performing humanoid robot,” which may even be true for some arbitrary value of “performing.” Also notable is that Figure has been actively testing robots with BMW at a manufacturing plant in Spartanburg, S.C., where the new humanoid has been performing “data collection and use case training.”

The rest of the press release is pretty much, “Hey, check out our new robot!” And you’ll get all of the content in the release by watching the videos. What you won’t get from the videos is any additional info about the robot. But we sent along some questions to Figure about these videos, and have a few answers from Michael Rose, director of controls, and Vadim Chernyak, director of hardware.

First, the trailer:

How many parts does Figure 02 have, and is this all of them?

Figure: A couple hundred unique parts and a couple thousand parts total. No, this is not all of them.

Does Figure 02 make little Figure logos with every step?

Figure: If the surface is soft enough, yes.

Swappable legs! Was that hard to do, or easier to do because you only have to make one leg?

Figure: We chose to make swappable legs to help with manufacturing.

Is the battery pack swappable too?

Figure: Our battery is swappable, but it is not a quick swap procedure.

What’s that squishy-looking stuff on the back of Figure 02’s knees and in its elbow joints?

Figure: These are soft stops, which limit the range of motion in a controlled way and prevent robot pinch points.

Where’d you hide that thumb motor?

Figure: The thumb is now fully contained in the hand.

Tell me about the “skin” on the neck!

Figure: The skin is a soft fabric which is able to keep a clean seamless look even as the robot moves its head.

And here’s the reveal video:

When Figure 02’s head turns, its body turns too, and its arms move. Is that necessary, or aesthetic?

Figure: Aesthetic.

The upper torso and shoulders seem very narrow compared to other humanoids. Why is that?

Figure: We find it essential to package the robot to be of similar proportions to a human. This allows us to complete our target use cases and fit into our environment more easily.

What can you tell me about Figure 02’s walking gait?

Figure: The robot is using a model predictive controller to determine footstep locations and forces required to maintain balance and follow the desired robot trajectory.

How much runtime do you get from 2.25 kilowatt-hours doing the kinds of tasks that we see in the video?

Figure: We are targeting a 5-hour run time for our product.
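Figure hasn’t published the internals of that controller, but a common building block in humanoid footstep planning of this kind is the linear-inverted-pendulum “capture point”: the spot on the ground where placing the next footstep would bring the robot to rest. A minimal sketch (the function and the numbers are illustrative, not Figure’s code):

```python
import math

def capture_point(x, v, z_com, g=9.81):
    """Instantaneous capture point of a linear inverted pendulum:
    the ground location where planting the foot brings the robot to rest.
    x: center-of-mass position (m), v: its velocity (m/s),
    z_com: assumed-constant center-of-mass height (m)."""
    omega = math.sqrt(g / z_com)  # natural frequency of the pendulum
    return x + v / omega

# A robot leaning forward at 0.5 m/s with a 0.9 m center-of-mass height
# should step about 0.15 m ahead of its center of mass.
step = capture_point(x=0.0, v=0.5, z_com=0.9)
```

A full model predictive controller optimizes quantities like this over a horizon of several future footsteps, subject to force and kinematic constraints, rather than one step at a time.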


Slick, but also a little sinister? Figure

This thing looks slick. I’d say that it’s maybe a little too far on the sinister side for a robot intended to work around humans, but the industrial design is badass and the packaging is excellent, with the vast majority of the wiring now integrated within the robot’s skins and flexible materials covering joints that are typically left bare. Figure, if you remember, raised a US $675 million Series B that valued the company at $2.6 billion, and somehow the look of this robot seems appropriate to that.

I do still have some questions about Figure 02, such as where the interesting foot design came from and whether a 16-degree-of-freedom hand is really worth it in the near term. It’s also worth mentioning that Figure seems to have a fair number of Figure 02 robots running around—at least five units at its California headquarters, plus potentially a couple more at the BMW Spartanburg manufacturing facility.

I also want to highlight this boilerplate at the end of the release: “our humanoid is designed to perform human-like tasks within the workforce and in the home.” We are very, very far away from a humanoid robot in the home, but I appreciate that it’s still an explicit goal that Figure is trying to achieve. Because I want one.



Rodney Brooks is the Panasonic Professor of Robotics (emeritus) at MIT, where he was director of the AI Lab and then CSAIL. He has been cofounder of iRobot, Rethink Robotics, and Robust AI, where he is currently CTO. This article is shared with permission from his blog.

Here are some of the things I’ve learned about robotics after working in the field for almost five decades. In honor of Isaac Asimov and Arthur C. Clarke, my two boyhood go-to science fiction writers, I’m calling them my three laws of robotics.

  1. The visual appearance of a robot makes a promise about what it can do and how smart it is. It needs to deliver or slightly overdeliver on that promise or it will not be accepted.
  2. When robots and people coexist in the same spaces, the robots must not take away from people’s agency, particularly when the robots are failing, as inevitably they will at times.
  3. Technologies for robots need 10+ years of steady improvement beyond lab demos of the target tasks to mature to low cost and to have their limitations characterized well enough that they can deliver 99.9 percent of the time. Every 10 more years gets another 9 in reliability.
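Read literally, the third law implies a simple schedule (this arithmetic is my gloss on the rule, not Brooks’s own formulation): a decade of hardening past the lab demo buys three nines of reliability, and each further decade appends one more.

```python
def expected_reliability(years_of_maturation):
    """Brooks's third law, read literally: 10 years of hardening past the
    lab demo yields 99.9% (three nines), and every further decade adds
    one more nine of reliability."""
    if years_of_maturation < 10:
        raise ValueError("rule only applies after ~10 years of maturation")
    nines = 3 + (years_of_maturation - 10) // 10
    return 1 - 10 ** -nines

expected_reliability(10)  # → 0.999
expected_reliability(30)  # → 0.99999
```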

Below I explain each of these laws in more detail. (In a related post, I lay out my three laws of artificial intelligence.)

Note that these laws are written from the point of view of making robots work in the real world, where people pay for them, and where people want return on their investment. This is very different from demonstrating robots or robot technologies in the laboratory.

In the lab there is a phalanx of graduate students eager to demonstrate the latest idea on which they have worked very hard. Their interest is in showing that a technique or technology that they have developed is plausible and promising. They will do everything in their power to nurse the robot through the demonstration to make that point, and they will eagerly explain everything about what they have developed and what could come next.

In the real world there is just the customer, or the employee or relative of the customer. The robot has to work with no external intervention from the people who designed and built it. It needs to be a good experience for the people around it or there will not be more sales to those, and perhaps other, customers.

So these laws are not about what might, or could, be done. They are about real robots deployed in the real world. The laws are not about research demonstrations. They are about robots in everyday life.

The Promise Given By Appearance

My various companies have produced all sorts of robots and sold them at scale. A lot of thought goes into the visual appearance of the robot when it is designed, as that tells the buyer or user what to expect from it.

The iRobot Roomba was carefully designed to meld looks with function. iStock

The Roomba, from iRobot, looks like a flat disk. It cleans floors. The disk shape was so that it could turn in place without hitting anything it wasn’t already hitting. The low profile of the disk was so that it could get under the toe kicks in kitchens and clean the floor that is overhung just a little by kitchen cabinets. It does not look like it can go up and down stairs or even a single step up or step down in a house and it cannot. It has a handle, which makes it look like it can be picked up by a person, and it can be. Unlike fictional Rosey the Robot it does not look like it could clean windows, and it cannot. It cleans floors, and that is it.

The PackBot, the remotely operable military robot, also from iRobot, looks very different indeed. It has tracked wheels, like a miniature tank, and that appearance promises anyone who looks at it that it can go over rough terrain and is not going to be stopped by steps or rocks or drops in terrain. When the Fukushima disaster happened, in 2011, PackBots were able to operate in the reactor buildings that had been smashed and wrecked by the tsunami, open door handles under remote control, drive up rubble-covered staircases, and get their cameras pointed at analog pressure and temperature gauges so that workers trying to safely secure the nuclear plant had some data about what was happening in highly radioactive areas of the plant.

An iRobot PackBot picks up a demonstration object at the Joint Robotics Repair Detachment at Victory Base Complex in Baghdad. Alamy

The point of this first law of robotics is to warn against making a robot appear more than it actually is. Perhaps that will get funding for your company, leading investors to believe that in time the robot will be able to do all the things its physical appearance suggests it might be able to do. But it is going to disappoint customers when it cannot do the sorts of things that something with that physical appearance looks like it can do. Glamming up a robot risks overpromising what the robot as a product can actually do. That risks disappointing customers. And disappointed customers are not going to be advocates for your product/robot, nor be repeat buyers.

Preserving People’s Agency

The worst thing a robot can do for its acceptance in the workplace is to make people’s jobs or lives harder by not letting them do what they need to do.

Robots that work in hospitals taking dirty sheets or dishes from a patient floor to where they are to be cleaned are meant to make the lives of the nurses easier. But often they do exactly the opposite. If the robots are not aware of what is happening and do not get out of the way when there is an emergency, they will probably end up blocking some lifesaving work by the nurses—e.g., pushing a gurney with a critically ill patient on it to where they need to be for immediate treatment. That does not endear such a robot to the hospital staff. It has interfered with their main job function, a function of which the staff is proud, and which motivates them to do such work.

A lesser, but still unacceptable, behavior of hospital robots is waiting directly in front of elevator doors, centrally positioned and blocking people. That makes it harder for people to do something they need to do all the time in that environment: enter and exit elevators.

Those of us who live in San Francisco or Austin, Texas, have had firsthand views of robots annoying people daily for the last few years. The robots in question have been autonomous vehicles, driving around the city with no human occupant. I see these robots every single time I leave my house, whether on foot or by car.

Some of the vehicles were notorious for blocking intersections, and there was absolutely nothing that other drivers, pedestrians, or police could do. We just had to wait until some remote operator hidden deep inside the company that deployed them decided to pay attention to the stuck vehicle and get it out of people’s way. Worse, they would wander into the scene of a fire where there were fire trucks, firefighters, and actual buildings on fire, get confused, and just stop, sometimes on top of the fire hoses.

There was no way for the firefighters to move the vehicles, nor communicate with them. This is in contrast to an automobile driven by a human driver. Firefighters can use their normal social interactions to communicate with a driver, and use their privileged position in society as frontline responders to apply social pressure on a human driver to cooperate with them. Not so with the autonomous vehicles.

The autonomous vehicles took agency from people going about their regular business on the streets and, worse, took away agency from firefighters whose role is to protect other humans. Deployed robots that do not respect people and what they need to do will not get respect from people, and the robots will end up undeployed.

Robust Robots That Work Every Time

Making robots that work reliably in the real world is hard. In fact, making anything that works physically in the real world, and is reliable, is very hard.

For a customer to be happy with a robot, it must appear to work every time it tries a task; otherwise it will frustrate the user to the point that they question whether it makes their life better or not.

But what does appear mean here? It means that the user can assume that it is going to work, as their default understanding of what will happen in the world.

The tricky part is that robots interact with the real physical world.

Software programs interact with a well-understood abstracted machine, so they tend not to fail in a way where the instructions in them are not executed consistently by the hardware on which they run. Those same programs may also interact with the physical world, be it a human being, a network connection, or an input device like a mouse. It is then that the programs might fail, as the instructions in them are based on assumptions about the real world that are not met.

Robots are subject to forces in the real world, subject to the exact position of objects relative to them, and subject to interacting with humans who are very variable in their behavior. There are no teams of graduate students or junior engineers eager to make the robot succeed on the 8,354th attempt to do the same thing that has worked so many times before. Getting software that adequately adapts to the uncertain changes in the world in that particular instance and that particular instant of time is where the real challenge arises in robotics.

Great-looking videos are just not the same thing as working for a customer every time. Most of what we see in the news about robots is lab demonstrations. There is no data on how general the solution is, nor how many takes it took to get the video that is shown. Even worse, sometimes the videos are teleoperated or sped up many times over.

I have rarely seen a new technology that is less than ten years out from a lab demo make it into a deployed robot. It takes time to see how well the method works, and to characterize it well enough that it is unlikely to fail in a deployed robot that is working by itself in the real world. Even then there will be failures, and it takes many more years of shaking out the problem areas and building it into the robot product in a defensive way so that the failure does not happen again.

Most robots require kill buttons, or e-stops, so that a human can shut them down. If a customer ever feels the need to hit that button, then the people who built and sold the robot have failed: they have not made it operate well enough to keep it from ever getting into a state where things go that wrong.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems.

[ Berkeley Humanoid ]

This article presents Ray, a new type of audio-animatronic robot head. All the mechanical structure of the robot is built in one step by 3-D printing... This simple, lightweight structure and the separate tendon-based actuation system underneath allow for smooth, fast motions of the robot. We also develop an audio-driven motion generation module that automatically synthesizes natural and rhythmic motions of the head and mouth based on the given audio.

[ Paper ]

CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.

[ MIT News ]

Okay, sign me up for this.

[ Deep Robotics ]

NEURA Robotics is among the first to join the early-access NVIDIA Humanoid Robot Developer Program.

This could be great, but there’s an awful lot of jump cuts in that video.

[ Neura ] via [ NVIDIA ]

I like that Unitree’s tagline in the video description here is “let’s have fun together.”

Is that “please don’t do dumb stuff with our robots” at the end of the video new...?

[ Unitree ]

NVIDIA CEO Jensen Huang presented a major breakthrough on Project GR00T with WIRED’s Lauren Goode at SIGGRAPH 2024. In a two-minute demonstration video, NVIDIA explained a systematic approach they discovered to scale up robot data, addressing one of the most challenging issues in robotics.

[ NVIDIA ]

In this research, we investigated the innovative use of a manipulator as a tail in quadruped robots to augment their physical capabilities. Previous studies have primarily focused on enhancing various abilities by attaching robotic tails that function solely as tails on quadruped robots. While these tails improve the performance of the robots, they come with several disadvantages, such as increased overall weight and higher costs. To mitigate these limitations, we propose the use of a 6-DoF manipulator as a tail, allowing it to serve both as a tail and as a manipulator.

[ Paper ]

In this end-to-end demo, we showcase how MenteeBot transforms the shopping experience for individuals, particularly those using wheelchairs. Through discussions with a global retailer, MenteeBot has been designed to act as the ultimate shopping companion, offering a seamless, natural experience.

[ Menteebot ]

Nature Fresh Farms, based in Leamington, Ontario, is one of North America’s largest greenhouse farms, growing high-quality organics, berries, peppers, tomatoes, and cucumbers. In 2022, Nature Fresh partnered with Four Growers, a FANUC Authorized System Integrator, to develop a robotic system equipped with AI to harvest tomatoes in the greenhouse environment.

[ FANUC ]

Contrary to what you may have been led to believe by several previous Video Fridays, WVUIRL’s open source rover is quite functional, most of the time.

[ WVUIRL ]

Honeybee Robotics, a Blue Origin company, is developing Lunar Utility Navigation with Advanced Remote Sensing and Autonomous Beaming for Energy Redistribution, also known as LUNARSABER. In July 2024, Honeybee Robotics captured LUNARSABER’s capabilities during a demonstration of a scaled prototype.

[ Honeybee Robotics ]

Bunker Mini is a compact tracked mobile robot specifically designed to tackle demanding off-road terrains.

[ AgileX ]

In this video we present results of our lab from the latest field deployments conducted in the scope of the Digiforest EU project, in Stein am Rhein, Switzerland. Digiforest brings together various partners working on aerial and legged robots, autonomous harvesters, and forestry decision-makers. The goal of the project is to enable autonomous robot navigation, exploration, and mapping, both below and above the canopy, to create a data pipeline that can support and enhance foresters’ decision-making systems.

[ ARL ]



Ten years. Two countries. Multiple redesigns. Some US $80 million invested. And, finally, Zero Zero Robotics has a product it says is ready for consumers, not just robotics hobbyists—the HoverAir X1. The company has sold several hundred thousand flying cameras since the HoverAir X1 started shipping last year. It hasn’t gotten the millions of units into consumer hands—or flying above them—that its founders would like to see, but it’s a start.

“It’s been like a 10-year-long Ph.D. project,” says Zero Zero founder and CEO Meng Qiu Wang. “The thesis topic hasn’t changed. In 2014 I looked at my cell phone and thought that if I could throw away the parts I don’t need—like the screen—and add some sensors, I could build a tiny robot.”

I first spoke to Wang in early 2016, when Zero Zero came out of stealth with its version of a flying camera—at $600. Wang had been working on the project for two years. He started the project in Silicon Valley, where he and cofounder Tony Zhang were finishing up Ph.D.s in computer science at Stanford University. Then the two decamped for China, where development costs are far less.

Flying cameras were a hot topic at the time; startup Lily Robotics demonstrated a $500 flying camera in mid-2015 (and was later charged with fraud for faking its demo video), and in March of 2016 drone-maker DJI introduced a drone with autonomous flying and tracking capabilities that turned it into much the same type of flying camera that Wang envisioned, albeit at the high price of $1400.

Wang aimed to make his flying camera cheaper and easier to use than these competitors by relying on image processing for navigation—no altimeter, no GPS. In this approach, which has changed little since the first design, one camera looks at the ground and algorithms follow the camera’s motion to navigate. Another camera looks out ahead, using facial and body recognition to track a single subject.
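The idea of navigating from a downward-looking camera alone can be sketched in code. This is a hypothetical illustration, not Zero Zero’s actual algorithm: it estimates how far the ground texture has shifted between two frames by exhaustive block matching, the brute-force cousin of the optical-flow methods such systems typically use. The frame data and search range are invented for the example.

```python
# Hypothetical sketch of vision-only ground tracking: estimate the camera's
# horizontal shift between two downward-looking frames by finding the
# (dx, dy) offset that best aligns the new frame with the previous one.
# Real systems track many features with optical flow; this is the same
# idea reduced to a single exhaustive search.

def estimate_shift(prev, curr, max_shift=3):
    """Return the (dx, dy) minimizing mean squared difference between
    `curr` sampled at (x+dx, y+dy) and `prev` at (x, y). Frames are 2D lists."""
    h, w = len(prev), len(prev[0])
    best, best_err = None, float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err, n = 0, 0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        d = curr[sy][sx] - prev[y][x]
                        err += d * d
                        n += 1
            if n and err / n < best_err:
                best_err, best = err / n, (dx, dy)
    return best

# A tiny synthetic "ground texture" frame, and the same frame shifted right by 2:
prev = [[(3 * x + 5 * y) % 17 for x in range(10)] for y in range(10)]
curr = [[prev[y][x - 2] if x >= 2 else 0 for x in range(10)] for y in range(10)]
print(estimate_shift(prev, curr))  # prints (2, 0): the camera appears to have moved
```

Integrating these per-frame shifts over time yields a position estimate, which is why no GPS or altimeter is strictly required; it is also why a surface with no stable texture, like moving water, breaks the approach.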

The current version, at $349, does what Wang had envisioned, which is, he told me, “to turn the camera into a cameraman.” But, he points out, the hardware and software, and particularly the user interface, changed a lot. The size and weight have been cut in half; it’s just 125 grams. This version uses a different and more powerful chipset, and the controls are on board; while you can select modes from a smart phone app, you don’t have to.

I can verify that it is cute (about the size of a paperback book), lightweight, and extremely easy to use. I’ve never flown a standard drone without help or crashing but had no problem sending the HoverAir up to follow me down the street and then land on my hand.

It isn’t perfect. It can’t fly over water—the movement of the water confuses the algorithms that judge speed through video images of the ground. And it only tracks people; though many would like it to track their pets, Wang says animals behave erratically, diving into bushes or other places the camera can’t follow. Since the autonomous navigation algorithms rely on the person being filmed to avoid obstacles, and simply follow that person’s path, such dives tend to cause the drone to crash.

Since we last spoke eight years ago, Wang has been through the highs and lows of the startup rollercoaster, turning to contract engineering for a while to keep his company alive. He’s become philosophical about much of the experience.

Here’s what he had to say.

We last spoke in 2016. Tell me how you’ve changed.

Meng Qiu Wang: When I got out of Stanford in 2014 and started the company with Tony [Zhang], I was eager and hungry and hasty and I thought I was ready. But retrospectively, I wasn’t ready to start a company. I was chasing fame and money, and excitement.

Now I’m 42, I have a daughter—everything seems more meaningful now. I’m not a Buddhist, but I have a lot of Zen in my philosophy now.

I was trying so hard to flip the page to see the next chapter of my life, but now I realize, there is no next chapter, flipping the page itself is life.

You were moving really fast in 2016 and 2017. What happened during that time?

Wang: After coming out of stealth, we ramped up from 60 to 140 people, planning to take this product into mass production. We got a crazy amount of media attention—covered by 2,200 media outlets. We went to CES, and it seemed like we collected every trophy there was.

And then Apple came to us, inviting us to retail at all the Apple stores. This was a big deal; I think we were the first third-party robotic product to do live demos in Apple stores. We produced about 50,000 units, bringing in about $15 million in revenue in six months.

Then a giant company made us a generous offer and we took it. But it didn’t work out. It was certainly a lesson learned for us. I can’t say more about that, but at this point if I walk down the street and I see a box of pizza, I would not try to open it; there really is no free lunch.

This early version of the Hover flying camera generated a lot of initial excitement, but never fully took off.Zero Zero Robotics

How did you survive after that deal fell apart?

Wang: We went from 150 to about 50 people and turned to contract engineering. We worked with toy drone companies, with some industrial product companies. We built computer vision systems for larger drones. We did almost four years of contract work.

But you kept working on flying cameras and launched a Kickstarter campaign in 2018. What happened to that product?

Wang: It didn’t go well. The technology wasn’t really there. We filled some orders and refunded ones that we couldn’t fill because we couldn’t get the remote controller to work.

We really didn’t have enough resources to create a new product for a new product category, a flying camera, to educate the market.

So we decided to build a more conventional drone—our V-Coptr, a V-shaped bi-copter with only two propellers—to compete against DJI. We didn’t know how hard it would be. We worked on it for four years. Key engineers left out of total dismay; they lost faith, they lost hope.

We came so close to going bankrupt so many times—at least six times in 10 years I thought I wasn’t going to be able to make payroll for the next month, but each time I got super lucky with something random happening. I never missed paying one dime—not because of my abilities, just because of luck.

We still have a relatively healthy chunk of the team, though. And this summer my first ever software engineer is coming back. The people are the biggest wealth that we’ve collected over the years. The people who are still with us are not here for money or for success. We just realized along the way that we enjoy working with each other on impossible problems.

When we talked in 2016, you envisioned the flying camera as the first in a long line of personal robotics products. Is that still your goal?

Wang: In terms of short-term strategy, we are focusing 100 percent on the flying camera. I think about other things, but I’m not going to say I have an AI hardware company, though we do use AI. After 10 years I’ve given up on talking about that.

Do you still think there’s a big market for a flying camera?

Wang: I think flying cameras have the potential to become the second home robot [the first being the robotic vacuum] that can enter tens of millions of homes.



I’ll be honest: when I first got this pitch for an autonomous robot dentist, I was like: “Okay, I’m going to talk to these folks and then write an article, because there’s no possible way for this thing to be anything but horrific.” Then they sent me some video that was, in fact, horrific, in the way that only watching a high-speed drill remove most of a tooth can be.

But fundamentally this has very little to do with robotics, because getting your teeth drilled just sucks no matter what. So the real question we should be asking is this: How can we make a dental procedure as quick and safe as possible, to minimize that inherent horrific-ness? And the answer, surprisingly, may be this robot from a startup called Perceptive.

Perceptive is today announcing two new technologies that I very much hope will make future dental experiences better for everyone. While it’s easy to focus on the robot here (because, well, it’s a robot), the reason the robot can do what it does (which we’ll get to in a minute) is because of a new imaging system. The handheld imager, which is designed to operate inside of your mouth, uses optical coherence tomography (OCT) to generate a 3D image of the inside of your teeth, and even all the way down below the gum line and into the bone. This is vastly better than the 2D or 3D x-rays that dentists typically use, both in resolution and positional accuracy.

Perceptive’s handheld optical coherence tomography imager scans for tooth decay.Perceptive

X-rays, it turns out, are actually really bad at detecting cavities; Perceptive CEO Chris Ciriello tells us that their accuracy at pinpointing the location and extent of tooth decay is on the order of 30 percent. In practice, this isn’t as much of a problem as it seems like it should be, because the dentist will just start drilling into your tooth and keep going until they find everything. But obviously this won’t work for a robot, where you need all of the data beforehand. That’s where the OCT comes in. You can think of OCT as similar to an ultrasound, in that it uses reflected energy to build up an image, but OCT uses light instead of sound for much higher resolution.

Perceptive’s imager can create detailed 3D maps of the insides of teeth.Perceptive

The reason OCT has not been used for teeth before is because with conventional OCT, the exposure time required to get a detailed image is several seconds, and if you move during the exposure, the image will blur. Perceptive is instead using a structure from motion approach (which will be familiar to many robotics folks), where they’re relying on a much shorter exposure time resulting in far fewer data points, but then moving the scanner and collecting more data to gradually build up a complete 3D image. According to Ciriello, this approach can localize pathology within about 20 micrometers with over 90 percent accuracy, and it’s easy for a dentist to do since they just have to move the tool around your tooth in different orientations until the scan completes.
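The trade-off behind that design can be illustrated with a toy simulation. This is a hedged sketch of the general principle, not Perceptive’s algorithm, and every number in it is invented: each short exposure yields a noisy depth sample, and once samples are registered to a common frame, averaging many of them drives the noise down roughly as one over the square root of the sample count.

```python
# Hypothetical illustration: many short, noisy exposures can substitute for
# one long exposure. Each fast sample is the true depth plus Gaussian noise;
# fusing N registered samples shrinks the error roughly by sqrt(N).
import random

random.seed(0)  # deterministic for the example

TRUE_DEPTH_UM = 1500.0   # invented true depth of a feature, in micrometers
NOISE_UM = 100.0         # invented per-exposure noise from the short exposure

def short_exposure_sample():
    """One fast, noisy measurement of the feature's depth."""
    return TRUE_DEPTH_UM + random.gauss(0, NOISE_UM)

def fused_estimate(n_exposures):
    """Average n registered short exposures into one depth estimate."""
    samples = [short_exposure_sample() for _ in range(n_exposures)]
    return sum(samples) / len(samples)

single_error = abs(short_exposure_sample() - TRUE_DEPTH_UM)
fused_error = abs(fused_estimate(400) - TRUE_DEPTH_UM)
print(f"single exposure error: {single_error:.1f} um")
print(f"fused (400 exposures) error: {fused_error:.1f} um")
```

The registration step, which real structure-from-motion handles by estimating the scanner’s pose for each exposure, is assumed away here; in practice that is the hard part.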

Again, this is not just about collecting data so that a robot can get to work on your tooth. It’s about better imaging technology that helps your dentist identify and treat issues you might be having. “We think this is a fundamental step change,” Ciriello says. “We’re giving dentists the tools to find problems better.”

The robot is mechanically coupled to your mouth for movement compensation.Perceptive

Ciriello was a practicing dentist in a small mountain town in British Columbia, Canada. People in such communities can have a difficult time getting access to care. “There aren’t too many dentists who want to work in rural communities,” he says. “Sometimes it can take months to get treatment, and if you’re in pain, that’s really not good. I realized that what I had to do was build a piece of technology that could increase the productivity of dentists.”

Perceptive’s robot is designed to take a dental procedure that typically requires several hours and multiple visits, and complete it in minutes in a single visit. The entry point for the robot is crown installation, where the top part of a tooth is replaced with an artificial cap (the crown). This is an incredibly common procedure, and it usually happens in two phases. First, the dentist will remove the top of the tooth with a drill. Next, they take a mold of the tooth so that a crown can be custom fit to it. Then they put a temporary crown on and send you home while they mail the mold off to get your crown made. A couple weeks later, the permanent crown arrives, you go back to the dentist, and they remove the temporary one and cement the permanent one on.

With Perceptive’s system, it instead goes like this: on a previous visit, where the dentist has identified that you need a crown in the first place, you’d have gotten a scan of your tooth with the OCT imager. Based on that data, the robot will have planned a drilling path, and the crown can be made before you even arrive for the drilling to start, which is only possible because the precise geometry is known in advance. You arrive for the procedure, the robot does the actual drilling in maybe five minutes or so, and the perfectly fitting permanent crown is cemented into place and you’re done.

The robot is still in the prototype phase but could be available within a few years.Perceptive

Obviously, safety is a huge concern here, because you’ve got a robot arm with a high-speed drill literally working inside of your skull. Perceptive is well aware of this.

The most important thing to understand about the Perceptive robot is that it’s physically attached to you as it works. You put something called a bite block in your mouth and bite down on it, which both keeps your mouth open and keeps your jaw from getting tired. The robot’s end effector is physically attached to that block through a series of actuated linkages, such that any motions of your head are instantaneously replicated by the end of the drill, even if the drill is moving. Essentially, your skull is serving as the robot’s base, and your tooth and the drill are in the same reference frame. Purely mechanical coupling means there’s no vision system or encoders or software required: it’s a direct physical connection so that motion compensation is instantaneous. As a patient, you’re free to relax and move your head somewhat during the procedure, because it makes no difference to the robot.
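The geometry of that argument can be written down directly. The sketch below, with invented coordinates, expresses the drill tip and a tooth point in the skull’s reference frame and then applies an arbitrary rigid head motion to both: because the two move identically, their relative separation is unchanged, which is exactly what the mechanical coupling guarantees without any computation at all.

```python
# Hedged sketch of why skull-mounted coupling cancels head motion: if drill
# and tooth are both expressed in the skull frame, any rigid motion of the
# head transforms them identically, leaving their relationship unchanged.
# Shown with 2D homogeneous transforms and invented coordinates.
import math

def make_pose(theta, tx, ty):
    """2D rigid transform (rotation theta, translation tx, ty) as a 3x3 matrix."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0, 0, 1]]

def apply(t, p):
    """Apply a homogeneous transform to a 2D point."""
    x, y = p
    return (t[0][0] * x + t[0][1] * y + t[0][2],
            t[1][0] * x + t[1][1] * y + t[1][2])

# Poses expressed in the skull frame (hypothetical numbers):
tooth_in_skull = (4.0, 1.0)   # a point on the tooth
drill_in_skull = (4.0, 1.5)   # drill tip, 0.5 units above that point

# The patient moves: some arbitrary rigid motion of the whole head.
head_motion = make_pose(math.radians(20), 3.0, -2.0)

tooth_world = apply(head_motion, tooth_in_skull)
drill_world = apply(head_motion, drill_in_skull)

before = math.dist(drill_in_skull, tooth_in_skull)
after = math.dist(drill_world, tooth_world)
print(round(before, 6), round(after, 6))  # prints 0.5 0.5: the motion cancels
```

A vision- or encoder-based compensator would have to estimate `head_motion` and correct for it with some latency; the rigid linkage makes the correction implicit and instantaneous.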

Human dentists do have some strategies for not stabbing you with a drill if you move during a procedure, like putting their fingers on your teeth and then supporting the drill on them. But this robot should be safer and more accurate than that method, because of the rigid connection leading to only a few tens of micrometers of error, even on a moving patient. It’ll move a little bit slower than a dentist would, but because it’s only drilling exactly where it needs to, it can complete the procedure faster overall, says Ciriello.

There’s also a physical counterbalance system within the arm, a nice touch that makes the arm effectively weightless. (It’s somewhat similar to the PR2 arm, for you OG robotics folks.) And the final safety measure is the dentist-in-the-loop via a foot pedal that must remain pressed or the robot will stop moving and turn off the drill.

Ciriello claims that not only is the robot able to work faster, it also will produce better results. Most restorations like fillings or crowns last about five years, because the dentist either removed too much material from the tooth and weakened it, or removed too little material and didn’t completely solve the underlying problem. Perceptive’s robot is able to be far more exact. Ciriello says that the robot can cut geometry that’s “not humanly possible,” fitting restorations on to teeth with the precision of custom-machined parts, which is pretty much exactly what they are.

Perceptive has successfully used its robot on real human patients, as shown in this sped-up footage. In reality the robot moves slightly slower than a human dentist.Perceptive

While it’s easy to focus on the technical advantages of Perceptive’s system, dentist Ed Zuckerberg (who’s an investor in Perceptive) points out that it’s not just about speed or accuracy, it’s also about making patients feel better. “Patients think about the precision of the robot, versus the human nature of their dentist,” Zuckerberg says. It gives them confidence to see that their dentist is using technology in their work, especially in ways that can address common phobias. “If it can enhance the patient experience or make the experience more comfortable for phobic patients, that automatically checks the box for me.”

There is currently one other dental robot on the market. Called Yomi, it assists with one very specific procedure for dental implants. Yomi is not autonomous; instead, it provides guidance for a dentist to make sure that they drill to the correct depth and angle.

While Perceptive has successfully tested its first-generation system on humans, it’s not yet ready for commercialization. The next step will likely be what’s called a pivotal clinical trial with the FDA, and if that goes well, Ciriello estimates that the robot could be available to the public in “several years.” Perceptive has raised US $30 million in funding so far, and here’s hoping that’s enough to get it across the finish line.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

If IIT’s iRonCub3 looks this cool while learning to fly, just imagine how cool it will look when it actually takes off!

Hovering is in the works, but this is a really hard problem, which you can read more about in Daniele Pucci’s post on LinkedIn.

[ LinkedIn ]

Stanford Engineering and Toyota Research Institute Achieve World’s First Autonomous Tandem Drift. Leveraging the latest AI technology, Stanford Engineering and Toyota Research Institute are working to make driving safer for all. By automating a driving style used in motorsports called ‘drifting’—where a driver deliberately spins the rear wheels to break traction—the teams have unlocked new possibilities for future safety systems.

[ TRI ]

Researchers at the Istituto Italiano di Tecnologia (IIT-Italian Institute of Technology) have demonstrated that under specific conditions, humans can treat robots as co-authors of the results of their actions. The condition that enables this phenomenon is that a robot behaves in a human-like, social manner. Engaging in gaze contact and participating in a common emotional experience, such as watching a movie, are the key.

[ Science Robotics ]

If Aibo is not quite cat-like enough for you, here you go.

[ Maicat ] via [ RobotStart ]

I’ve never been more excited for a sim to real gap to be bridged.

[ USC Viterbi ]

I’m sorry but this looks exactly like a quadrotor sitting on a test stand.

The 12-lb Quad-Biplane combines four rotors and two wings without any control surfaces. The aircraft takes off like a conventional quadcopter and transitions to a more efficient horizontal cruise flight, similar to a biplane. This combines the simplicity of a quad-rotor design, providing vertical flight capability, with the cruise efficiency of a fixed-wing aircraft. The rotors are responsible for aircraft control both in vertical and forward cruise flight regimes.

[ AVFL ]

Tensegrity robots are so weird, and I so want them to be useful.

[ Suzumori Endo Lab ]

Top performing robots need all the help they can get.

[ Team B-Human ]

And now: a beetle nearly hit by an autonomous robot.

[ WVUIRL ]

Humans possess a remarkable ability to react to unpredictable perturbations through immediate mechanical responses, which harness the visco-elastic properties of muscles to maintain balance. Inspired by this behaviour, we propose a novel design of a robotic leg utilising fibre jammed structures as passive compliant mechanisms to achieve variable joint stiffness and damping.

[ Paper ]

I don’t know what this piece of furniture is but your cats will love it.

[ ABB ]

This video shows a Dexterous Avatar humanoid robot using VR teleoperation, hand tracking, and speech recognition to achieve highly dexterous mobile manipulation. Extend Robotics is developing a dexterous remote-operation interface to enable data collection for embodied AI and humanoid robots.

[ Extend Robotics ]

I never really thought about this, but wind turbine blades are hollow inside and need to be inspected sometimes, which is really one of those jobs where you’d much rather have a robot.

[ Flyability ]

Here’s an uncut, full drone delivery mission, including a package pickup from our AutoLoader—a simple, non-powered mechanical device that allows retail partners to utilize drone delivery with existing curbside pickup workflows.

[ Wing ]

Daniel Simu and his acrobatic robot competed in America’s Got Talent, and even though his robot did a very robot thing by breaking itself immediately beforehand, the performance went really well.

[ Acrobot ]

A tour of the Creative Robotics Mini Exhibition at the Creative Computing Institute, University of the Arts London.

[ UAL ]

Thanks, Hooman!

Zoox CEO, Aicha Evans, and Co-Founder and CTO, Jesse Levinson, hosted a LinkedIn Live last week to reflect on the past decade of building Zoox and their predictions for the next 10 years of the AV industry.

[ Zoox ]



This is a sponsored article brought to you by Elephant Robotics.

Elephant Robotics has gone through years of research and development to accelerate its mission of bringing robots to millions of homes and its vision of “Enjoy Robots World.” From the collaborative industrial robot P-series and C-series, on the drawing board since the company’s founding in 2016, to the lightweight desktop 6-DOF collaborative robot myCobot 280 in 2020, to the dual-armed, semi-humanoid robot myBuddy launched in 2022, Elephant Robotics has been launching three to five robots per year. This year’s full-body humanoid robot, the Mercury series, promises to reshape the landscape of nonhuman workers, introducing intelligent robots like Mercury into research and education and even everyday home environments.

A Commitment to Practical Robotics

Elephant Robotics proudly introduces the Mercury Series, a suite of humanoid robots that not only push the boundaries of innovation but also embody a deep commitment to practical applications. Designed with the future of robotics in mind, the Mercury Series is poised to become the go-to choice for researchers and industry professionals seeking reliable, scalable, and robust solutions.


Elephant Robotics

The Genesis of Mercury Series: Bridging Vision With Practicality

From the outset, the Mercury Series has been envisioned as more than just a collection of advanced prototypes. It is a testament to Elephant Robotics’ dedication to creating humanoid robots that are not only groundbreaking in their capabilities but also practical for mass production and consistent, reliable use in real-world applications.

Mercury X1: Wheeled Humanoid Robot

The Mercury X1 is a versatile wheeled humanoid robot that combines advanced functionalities with mobility. Equipped with dual NVIDIA Jetson controllers, lidar, ultrasonic sensors, and an 8-hour battery life, the X1 is perfect for a wide range of applications, from exploratory studies to commercial tasks requiring mobility and adaptability.

Mercury B1: Dual-Arm Semi-Humanoid Robot

The Mercury B1 is a semi-humanoid robot tailored for sophisticated research. It features 17 degrees of freedom, dual robotic arms, a 9-inch touchscreen, an NVIDIA Xavier control chip, and an integrated 3D camera. The B1 excels in machine vision and VR-assisted teleoperation, and its AI voice interaction and LLM integration mark significant advancements in human-robot communication.

These two advanced models exemplify Elephant Robotics’ commitment to practical robotics. The wheeled humanoid robot Mercury X1 integrates advanced technology with a state-of-the-art mobile platform, ensuring not only versatility but also the feasibility of large-scale production and deployment.

Embracing the Power of Reliable Embodied AI

The Mercury Series is engineered as the ideal hardware platform for embodied AI research, providing robust support for sophisticated AI algorithms and real-world applications. Elephant Robotics demonstrates its commitment to innovation through the Mercury series’ compatibility with NVIDIA’s Isaac Sim, a state-of-the-art simulation platform that facilitates sim2real learning, bridging the gap between virtual environments and physical robot interaction.

The Mercury Series is perfectly suited for the study and experimentation of mainstream large language models in embodied AI. Its advanced capabilities allow seamless integration with the latest AI research. This provides a reliable and scalable platform for exploring the frontiers of machine learning and robotics.

Furthermore, the Mercury Series is complemented by the myArm C650, a teleoperation robotic arm that enables rapid acquisition of physical data. This supports secondary learning and adaptation, allowing for immediate feedback and iterative improvements in real time. These features, combined with the Mercury Series’ reliability and practicality, make it the preferred hardware platform for researchers and institutions looking to advance the field of embodied AI.

The Mercury Series is supported by a rich software ecosystem, compatible with major programming languages, and integrates seamlessly with industry-standard simulation software. This comprehensive development environment is enhanced by a range of auxiliary hardware, all designed with mass production practicality in mind.

Elephant Robotics

Drive to Innovate: Mass Production and Global Benchmarks

The “Power Spring” harmonic drive modules, a hallmark of Elephant Robotics’ commitment to innovation for mass production, have been meticulously engineered to offer an unparalleled torque-to-weight ratio. These components are a testament to the company’s foresight in addressing the practicalities of large-scale manufacturing. The incorporation of carbon fiber in the design of these modules not only optimizes agility and power but also ensures that the robots are well-prepared for the rigors of the production line and real-world applications. The Mercury Series, with its spirit of innovation, is making a significant global impact, setting a new benchmark for what practical robotics can achieve.

Elephant Robotics is consistently delivering mass-produced robots to a range of renowned institutions and industry leaders, thereby redefining the industry standards for reliability and scalability. The company’s dedication to providing more than mere prototypes is evident in the active role its robots play in various sectors, transforming industries that are in search of dependable and efficient robotic solutions.

Conclusion: The Mercury Series—A Beacon for the Future of Practical Robotics

The Mercury Series represents more than a product; it is a beacon for the future of practical robotics. Elephant Robotics’ dedication to affordability, accessibility, and technological advancement ensures that the Mercury Series is not just a research tool but a platform for real-world impact.

Mercury Use Cases | Explore the Capabilities of the Wheeled Humanoid Robot and Discover Its Precision [ youtu.be ]

Elephant Robotics: https://www.elephantrobotics.com/en/

Mercury Robot Series: https://www.elephantrobotics.com/en/mercury-humanoid-robot/



The dream of robotic floor care has always been for it to be hands-off and mind-off. That is, for a robot to live in your house that will keep your floors clean without you having to really do anything or even think about it. When it comes to robot vacuuming, that’s been more or less solved thanks to self-emptying robots that transfer debris into docking stations, which iRobot pioneered with the Roomba i7+ in 2018. By 2022, iRobot’s Combo j7+ added an intelligent mopping pad to the mix, which definitely made for cleaner floors but was also a step backwards in the sense that you had to remember to toss the pad into your washing machine and fill the robot’s clean water reservoir every time. The Combo j9+ stuffed a clean water reservoir into the dock itself, which could top off the robot with water by itself for a month.

With the new Roomba Combo 10 Max, announced today, iRobot has cut out (some of) that annoying process thanks to a massive new docking station that self empties vacuum debris, empties dirty mop water, refills clean mop water, and then washes and dries the mopping pad, completely autonomously.

iRobot

The Roomba part of this is a mildly upgraded j7+, and most of what’s new on the hardware side here is in the “multifunction AutoWash Dock.” This new dock is a beast: It empties the robot of all of the dirt and debris picked up by the vacuum, refills the Roomba’s clean water tank from a reservoir, and then starts up a wet scrubby system down under the bottom of the dock. The Roomba deploys its dirty mopping pad onto that system, and then drives back and forth while the scrubby system cleans the pad. All the dirty water from this process gets sucked back up into a dedicated reservoir inside the dock, and the pad gets blow dried while the scrubby system runs a self-cleaning cycle.

The dock removes debris from the vacuum, refills it with clean water, and then uses water to wash the mopping pad. iRobot

This means that as a user, you’ve only got to worry about three things: dumping out the dirty water tank every week (if you use the robot for mopping most days), filling the clean water tank every week, and changing out the debris bag every two months. That is not a lot of hands-on time for having consistently clean floors.

The other thing to keep in mind about all of these robots is that they do need relatively frequent human care if you want them to be happy and successful. That means flipping them over and getting into their guts to clean out the bearings and all that stuff. iRobot makes this very easy to do, and it’s a necessary part of robot ownership, so the dream of having a robot that you can actually forget completely is probably not achievable.

The price of this convenience is a real chonker of a dock. The dock is basically furniture, and to its credit iRobot designed it so that the top surface is usable as a shelf; access to the guts of the dock is from the front, not the top. This is fine, but it’s also kind of crazy just how much these docks have expanded, especially once you factor in the front ramp that the robot drives up, which sticks out even farther.

The Roomba will detect carpet and lift its mopping pad up to prevent drips. iRobot

We asked iRobot Director of Project Management Warren Fernandez about whether docks are just going to keep on getting bigger forever until we’re all just living in giant robot docks, to which he said: “Are you going to continue to see some large capable multi-function docks out there in the market? Yeah, I absolutely think you will—but when does big become too big?” Fernandez says that there are likely opportunities to reduce dock size going forward through packaging efficiencies or dual-purpose components, but that there’s another option, too: Distributed docks. “If a robot has dry capabilities and wet capabilities, do those have to coexist inside the same chassis? What if they were separate?” says Fernandez.

We should mention that iRobot is not the first in the robotic floor care space to have a self-cleaning mop, and it’s also not the first to think about distributed docks, although as Fernandez explains, this is a more common approach in Asia, where you can also take advantage of home plumbing integration. “It’s a major trend in China, and starting to pop up a little bit in Europe, but not really in North America yet. How amazing could it be if you had a dock that, in a very easy manner, was able to tap right into plumbing lines for water supply and sewage disposal?”

According to Fernandez, this tends to be much easier to do in China, both because the labor cost for plumbing work is far lower than in the U.S. and Europe, and also because it’s fairly common for apartments in China to have accessible floor drains. “We don’t really yet see it in a major way at a global level,” Fernandez tells us. “But that doesn’t mean it’s not coming.”

The robot autonomously switches mopping mode on and off for different floor surfaces. iRobot

The Roomba Combo 10 Max also includes some software updates:

  • The front-facing camera and specialized bin sensors can identify dirtier areas eight times as effectively as before.
  • The Roomba can identify specific rooms and prioritize the order they’re cleaned in, depending on how dirty they get.
  • A new cleaning behavior called “Smart Scrub” adds a back-and-forth scrubbing motion for floors that need extra oomph.

And here’s what I feel like the new software should do, but doesn’t:

  • Use the front-facing camera and bin sensors to identify dirtier areas and then autonomously develop a schedule to more frequently clean those areas.
  • Activate Smart Scrub when the camera and bin sensors recognize an especially dirty floor.

I say “should do” because the robot appears to be collecting the data that it needs to do these things, but it doesn’t do them yet. New features (especially new features that involve autonomy) take time to develop and deploy, but imagine a robot that makes much more nuanced decisions about where and when to clean, based on the very detailed real-time data and environmental understanding that iRobot has already implemented.
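At the decision level, the kind of autonomy I’m describing isn’t complicated. Here’s a minimal, entirely hypothetical sketch of what a dirt-driven scheduler could look like; none of these names, thresholds, or score scales come from iRobot:

```python
def plan_cleaning(dirt_scores, scrub_threshold=0.7, extra_visit_threshold=0.4):
    """Hypothetical scheduler: map per-room dirt scores (0 to 1) to actions.

    Rooms above scrub_threshold would get the back-and-forth Smart Scrub
    pass; rooms above extra_visit_threshold would get added to a more
    frequent cleaning schedule; everything else stays on the normal plan.
    """
    plan = {}
    # Visit the dirtiest rooms first, mirroring the prioritization feature.
    for room, score in sorted(dirt_scores.items(), key=lambda kv: -kv[1]):
        if score >= scrub_threshold:
            plan[room] = "smart_scrub"
        elif score >= extra_visit_threshold:
            plan[room] = "extra_visit"
        else:
            plan[room] = "normal"
    return plan

print(plan_cleaning({"kitchen": 0.9, "hallway": 0.5, "bedroom": 0.1}))
```

The hard part, of course, isn’t this logic; it’s producing trustworthy dirt scores from the camera and bin sensors in the first place.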

I also appreciate that even as iRobot is emphasizing autonomy and leveraging data to start making more decisions for the user, the company is also making sure that the user has as much control as possible through the app. For example, you can set the robot to mop your floor without vacuuming first, even though if you do that, all you’re going to end up with is a much dirtier mop. It doesn’t make a heck of a lot of sense, but if that’s what you want, iRobot has empowered you to do it.

The dock opens from the front for access to the clean and dirty water storage and the dirt bag. iRobot

The Roomba Combo 10 Max will launch in August for US $1,400. That’s expensive, but it’s also how iRobot does things: A new Roomba with new tech always gets flagship status and a premium price. Sooner or later the technology will trickle down to robots that the rest of us can afford.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

Perching with winged unmanned aerial vehicles has often been solved by means of complex control or intricate appendages. Here, we present a method that relies on passive wing morphing for crash-landing on trees and other types of vertical poles. Inspired by the adaptability with which animals, and bats in particular, grip and hold onto trees, we design dual-purpose wings that enable both aerial gliding and perching on poles.

[ Nature Communications Engineering ]

Pretty impressive to have low enough latency in controlling your robot’s hardware that it can play ping pong, although it makes it impossible to tell whether the robot or the human is the one that’s actually bad at the game.

[ IHMC ]

How to be a good robot when boarding an elevator.

[ NAVER ]

Have you ever wondered how insects are able to go so far beyond their home and still find their way? The answer to this question is not only relevant to biology but also to making the AI for tiny, autonomous robots. We felt inspired by biological findings on how ants visually recognize their environment and combine it with counting their steps in order to get safely back home.

[ Science Robotics ]

Team RoMeLa Practice with ARTEMIS humanoid robots, featuring Tsinghua Hephaestus (Booster Alpha). Fully autonomous humanoid robot soccer match with the official goal of beating the human WorldCup Champions by the year 2050.

[ RoMeLa ]

Triangle is the most stable shape, right?

[ WVU IRL ]

We propose RialTo, a new system for robustifying real-world imitation learning policies via reinforcement learning in “digital twin” simulation environments constructed on the fly from small amounts of real-world data.

[ MIT CSAIL ]

There is absolutely no reason to watch this entire video, but Moley Robotics is still working on that robotic kitchen of theirs.

I will once again point out that the hardest part of cooking (for me, anyway) is the prep and the cleanup, and this robot still needs you to do all that.

[ Moley ]

B-Human has so far won 10 titles at the RoboCup SPL tournament. Can we make it 11 this year? Our RoboCup starts off with a banger game against HTWK Robots from Leipzig!

[ Team B-Human ]

AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on an innovative cable structure, it is designed to be both strong and stable.

[ NAVER ]

As NASA’s Perseverance rover prepares to ascend to the rim of Jezero Crater, its team is investigating a rock unlike any that they’ve seen so far on Mars. Deputy project scientist Katie Stack Morgan explains why this rock, found in an ancient channel that funneled water into the crater, could be among the oldest that Perseverance has investigated—or the youngest.

[ NASA ]

We present a novel approach for enhancing human-robot collaboration using physical interactions for real-time error correction of large language model (LLM) parameterized commands.

[ Figueroa Robotics Lab ]

Husky Observer was recently used to autonomously inspect solar panels at a large solar panel farm. As part of its mission, the robot navigated rows of solar panels, stopping to inspect areas with its integrated thermal camera. Images were taken by the robot and enhanced to detect potential “hot spots” in the panels.

[ Clearpath Robotics ]

Most of the time, robotic workcells contain just one robot, so it’s cool to see a pair of them collaborating on tasks.

[ Leverage Robotics ]

Thanks, Roman!

Meet Hydrus, the autonomous underwater drone revolutionising underwater data collection by eliminating the barriers to its entry. Hydrus ensures that even users with limited resources can execute precise and regular subsea missions to meet their data requirements.

[ Advanced Navigation ]

Those adorable Disney robots have finally made their way into a paper.

[ RSS 2024 ]



Cigarette butts are the second most common type of undisposed-of litter on Earth. Of the roughly six trillion cigarettes smoked every year, it’s estimated that over four trillion of the butts are simply tossed onto the ground, each one leaching over 700 different toxic chemicals into the environment. Let’s not focus on the fact that all those toxic chemicals are also going into people’s lungs, and instead talk about the ecosystem damage that they can do, and also just the general grossness of having bits of sucked-on trash everywhere. Ew.

Preventing those cigarette butts from winding up on the ground in the first place would be the best option, but would require a pretty big shift in human behavior. Operating under the assumption that humans changing their behavior is a non-starter, roboticists from the Dynamic Legged Systems unit at the Italian Institute of Technology (IIT) in Genoa have instead designed a novel platform for cigarette butt cleanup in the form of a quadrupedal robot with vacuums attached to its feet.

IIT

There are, of course, far more efficient ways of at least partially automating the cleanup of litter with machines. The challenge is that most of that automation relies on mobility systems with wheels, which won’t work on the many beautiful beaches (and many beautiful flights of stairs) of Genoa. In places like these, it still falls to humans to do the hard work, which is less than ideal.

This robot, developed in Claudio Semini’s lab at IIT, is called VERO (Vacuum-cleaner Equipped RObot). It’s based around an AlienGo from Unitree, with a commercial vacuum mounted on its back. Hoses go from the vacuum down the leg to each foot, with a custom 3D printed nozzle that puts as much suction near the ground as possible without tripping the robot up. While the vacuum is novel, the real contribution here is how the robot autonomously locates things on the ground and then plans out how to interact with those things using its feet.

First, an operator designates an area for VERO to clean, after which the robot operates by itself. After calculating a path that covers the entire area, the robot uses its onboard cameras and a neural network to detect cigarette butts. This is trickier than it sounds, because there may be a lot of cigarette butts on the ground, and they all probably look pretty much the same, so the system has to filter out potential duplicates. Next, the robot plans its footsteps: VERO has to place the vacuum side of one of its feet right next to each cigarette butt while calculating a safe, stable pose for the rest of its body. Since this whole process can take place on sand or stairs or other uneven surfaces, VERO has to prioritize not falling over before it decides how to do the collection. The final collecting maneuver is fine-tuned using an extra Intel RealSense depth camera mounted on the robot’s chin.
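That duplicate-filtering step is easy to picture in code. Here’s a minimal sketch of distance-based deduplication; the function name, threshold, and (x, y) world-frame representation are illustrative assumptions, not details from the paper:

```python
def filter_duplicates(detections, min_separation=0.05):
    """Greedily merge detections closer together than min_separation (meters).

    Each detection is a world-frame (x, y) tuple. Repeated sightings of the
    same cigarette butt from different viewpoints land within a few
    centimeters of each other, so they collapse into a single target.
    """
    targets = []
    for x, y in detections:
        # Keep this detection only if it isn't near an existing target.
        if all((x - tx) ** 2 + (y - ty) ** 2 >= min_separation ** 2
               for tx, ty in targets):
            targets.append((x, y))
    return targets

# Three sightings of one butt plus one distinct butt yield two targets.
sightings = [(1.00, 2.00), (1.02, 2.01), (0.99, 1.98), (3.50, 0.40)]
print(filter_duplicates(sightings))
```

A real system would also have to handle localization drift between sightings, which is what makes this harder than it looks.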

VERO has been tested successfully in six different scenarios that challenge both its locomotion and detection capabilities. IIT

Initial testing with the robot in a variety of different environments showed that it could successfully collect just under 90 percent of cigarette butts, which I bet is better than I could do, and I’m also much more likely to get fed up with the whole process. The robot is not very quick at the task, but unlike me it will never get fed up as long as it’s got energy in its battery, so speed is somewhat less important.

As far as the authors of this paper are aware (and I assume they’ve done their research), this is “the first time that the legs of a legged robot are concurrently utilized for locomotion and for a different task.” This is distinct from other robots that can (for example) open doors with their feet, because those robots stop using the feet as feet for a while and instead use them as manipulators.

So, this is about a lot more than cigarette butts, and the researchers suggest a variety of other potential use cases, including spraying weeds in crop fields, inspecting cracks in infrastructure, and placing nails and rivets during construction.

Some use cases include potentially doing multiple things at the same time, like planting different kinds of seeds, using different surface sensors, or driving both nails and rivets. And since quadrupeds have four feet, they could potentially host four completely different tools, and the software that the researchers developed for VERO can be slightly modified to put whatever foot you want on whatever spot you need.

VERO: A vacuum‐cleaner‐equipped quadruped robot for efficient litter removal, by Lorenzo Amatucci, Giulio Turrisi, Angelo Bratta, Victor Barasuol, and Claudio Semini from IIT, was published in the Journal of Field Robotics.


Scientists in China have built what they claim to be the smallest and lightest solar-powered aerial vehicle. It’s small enough to sit in the palm of a person’s hand, weighs less than a U.S. nickel, and can fly indefinitely while the sun shines on it.

Micro aerial vehicles (MAVs) are insect- and bird-size aircraft that might prove useful for reconnaissance and other possible applications. However, a major problem that MAVs currently face is their limited flight times, usually about 30 minutes. Ultralight MAVs—those weighing less than 10 grams—can often only stay aloft for less than 10 minutes.

One potential way to keep MAVs flying longer is to power them with a consistent source of energy such as sunlight. Now, in a new study, researchers have developed what they say is the first solar-powered MAV capable of sustained flight.

The new ultralight MAV, CoulombFly, weighs just 4.21 grams and has a wingspan of 20 centimeters. That makes it about one-tenth the size and roughly one six-hundredth the weight of the previous smallest sunlight-powered aircraft, a quadcopter that’s 2 meters wide and weighs 2.6 kilograms.

Sunlight-powered flight test. Nature

“My ultimate goal is to make a super tiny flying vehicle, about the size and weight of a mosquito, with a wingspan under 1 centimeter,” says Mingjing Qi, a professor of energy and power engineering at Beihang University in Beijing. Qi and the scientists who built CoulombFly developed a prototype of such an aircraft, 8 millimeters wide with a mass of just 9 milligrams, “but it can’t fly on its own power yet. I believe that with the ongoing development of microcircuit technology, we can make this happen.”

Previous sunlight-powered aerial vehicles typically rely on electromagnetic motors, which use electromagnets to generate motion. However, the smaller a solar-powered aircraft gets, the less surface area it has with which to collect sunlight, reducing the amount of energy it can generate. In addition, the efficiency of electromagnetic motors decreases sharply as vehicles shrink in size. Smaller electromagnetic motors experience comparatively greater friction than larger ones, as well as greater energy losses due to electrical resistance in their components. This results in low lift-to-power efficiencies, Qi and his colleagues explain.

CoulombFly instead employs an electrostatic motor, which produces motion using electrostatic fields. Electrostatic motors are generally used as sensors in microelectromechanical systems (MEMS), not for aerial propulsion. Nevertheless, with a mass of only 1.52 grams, the electrostatic motor the scientists used has a lift-to-power efficiency two to three times that of other MAV motors.

The electrostatic motor has two nested rings. The inner ring is a spinning rotor that possesses 64 slats, each made of a carbon fiber sheet covered with aluminum foil. It resembles a wooden fence curved into a circle, with gaps between the fence’s posts. The outer ring is equipped with eight alternating pairs of positive and negative electrode plates, which are each also made of a carbon fiber sheet bonded to aluminum foil. Each plate’s edge also possesses a brush made of aluminum that touches the inner ring’s slats.

Above CoulombFly’s electrostatic motor is a propeller 20 cm wide and connected to the rotor. Below the motor are two high-power-density thin-film gallium arsenide solar cells, each 4 by 6 cm in size, with a mass of 0.48 g and an energy conversion efficiency of more than 30 percent.
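Those cell specs set a rough upper bound on the electrical power available. A back-of-the-envelope sketch, using the roughly 920 watts per square meter of sunlight reported for the flight test and treating “more than 30 percent” as exactly 0.30 (so the real figure is a bit higher):

```python
# Back-of-the-envelope solar power budget for CoulombFly (approximate).
irradiance = 920.0       # W/m^2, natural sunlight during the flight test
cell_area = 0.04 * 0.06  # m^2, one 4 x 6 cm solar cell
num_cells = 2
efficiency = 0.30        # "more than 30 percent", taken here as 0.30

power_in = irradiance * cell_area * num_cells * efficiency
print(f"available electrical power: {power_in:.2f} W")  # about 1.3 W
```

Just over a watt to lift a 4.21-gram aircraft is what makes the motor’s lift-to-power efficiency the critical number.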

Sunlight electrically charges CoulombFly’s outer ring, and its 16 plates generate electric fields. The brushes on the outer ring’s plates touch the inner ring, electrically charging the rotor slats. The electric fields of the outer ring’s plates exert force on the charged rotor slats, making the inner ring and the propeller spin.

In tests under natural sunlight conditions—about 920 watts of light per square meter—CoulombFly successfully took off within one second and sustained flight for an hour without any deterioration in performance. Potential applications for sunlight-powered MAVs may include long-distance and long-duration aerial reconnaissance, the researchers say.

Long-term test of hovering operation. Nature

CoulombFly’s propulsion system can generate up to 5.8 g of lift. This means it could support an extra payload of roughly 1.59 g, which is “sufficient to accommodate the smallest available sensors, controllers, cameras and so on” to support future autonomous operations, Qi says. “Right now, there’s still a lot of room to improve things like motors, propellers, and circuits, so we think we can get the extra payload up to 4 grams in the future. If we need even more payload, we could switch to quadcopters or fixed-wing designs, which can carry up to 30 grams.”
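The 1.59-gram figure follows directly from the reported lift and takeoff mass; a quick sanity check:

```python
lift_grams = 5.8      # maximum lift from the propulsion system
vehicle_grams = 4.21  # CoulombFly's takeoff mass
margin = lift_grams - vehicle_grams

print(f"payload margin: {margin:.2f} g")  # prints: payload margin: 1.59 g
```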

Qi adds that “it should be possible for the vehicle to carry a tiny lithium-ion battery.” That means it could store energy from its solar panels and fly even when the sun is not out, potentially enabling 24-hour operations.

In the future, “we plan to use this propulsion system in different types of flying vehicles, like fixed-wing and rotorcraft,” Qi says.

The scientists detailed their findings online 17 July in the journal Nature.



Among the many things that humans cannot do (without some fairly substantial modification) is shifting our body morphology around on demand. It sounds a little extreme to be talking about things like self-amputation, and it is a little extreme, but it’s also not at all uncommon for other animals to do—lizards can disconnect their tails to escape a predator, for example. And it works in the other direction, too, with animals like ants adding to their morphology by connecting to each other to traverse gaps that a single ant couldn’t cross alone.

In a new paper, roboticists from The Faboratory at Yale University have given a soft robot the ability to detach and reattach pieces of itself, editing its body morphology when necessary. It’s a little freaky to watch, but it kind of makes me wish I could do the same thing.

Faboratory at Yale

These are fairly standard soft-bodied silicone robots that use asymmetrically stiff air chambers that inflate and deflate (using a tethered pump and valves) to generate a walking or crawling motion. What’s new here are the joints, which rely on a new material called a bicontinuous thermoplastic foam (BTF) to form a supportive structure for a sticky polymer that’s solid at room temperature but can be easily melted.

The BTF acts like a sponge to prevent the polymer from running out all over the place when it melts, which means that you can pull two BTF surfaces apart by melting the joint, and stick them together again by reversing the procedure. The process takes about 10 minutes, and the resulting joint is quite strong. It’s also good for a couple hundred detach/reattach cycles before degrading, and it even stands up to dirt and water reasonably well.

Faboratory at Yale

This kind of thing has been done before with mechanical connections and magnets and other things like that—getting robots to attach to and detach from other robots is a foundational technique for modular robotics, after all. But these systems are inherently rigid, which is bad for soft robots, whose whole thing is about not being rigid. It’s all very preliminary, of course, because there are plenty of rigid things attached to these robots with tubes and wires and stuff. And there’s no autonomy or payloads here either. That’s not the point, though—the point is the joint, which (as the researchers point out) is “the first instantiation of a fully soft reversible joint” resulting in the “potential for soft artificial systems [that can] shape change via mass addition and subtraction.”

Self-Amputating and Interfusing Machines, by Bilige Yang, Amir Mohammadi Nasab, Stephanie J. Woodman, Eugene Thomas, Liana G. Tilton, Michael Levin, and Rebecca Kramer-Bottiglio from Yale, was published in May in Advanced Materials.




Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS
ICRA@40: 23–26 September 2024, ROTTERDAM, NETHERLANDS
IROS 2024: 14–18 October 2024, ABU DHABI, UAE
ICSR 2024: 23–26 October 2024, ODENSE, DENMARK
Cybathlon 2024: 25–27 October 2024, ZURICH

Enjoy today’s videos!

At ICRA 2024, Spectrum editor Evan Ackerman sat down with Unitree Founder and CEO Xingxing Wang and Tony Yang, VP of Business Development, to talk about the company’s newest humanoid, the G1 model.

[ Unitree ]

SACRIFICE YOUR BODY FOR THE ROBOT

[ WVUIRL ]

From navigating uneven terrain outside the lab to pure vision perception, GR-1 continues to push the boundaries of what’s possible.

[ Fourier ]

Aerial manipulation has gained interest for completing high-altitude tasks that are challenging for human workers, such as contact inspection and defect detection. This letter addresses a more general and dynamic task: simultaneously tracking time-varying contact force and motion trajectories on tangential surfaces. We demonstrate the approach on an aerial calligraphy task using a novel sponge pen design as the end-effector.

[ CMU ]

LimX Dynamics Biped Robot P1 was kicked and hit: Faced with random impacts in a crowd, P1 with its new design once again showcased exceptional stability as a mobility platform.

[ LimX Dynamics ]

Thanks, Ou Yan!

This is from ICRA 2018, but it holds up pretty well in the novelty department.

[ SNU INRoL ]

I think someone needs to crank the humor setting up on this one.

[ Deep Robotics ]

The paper summarizes the work at the Micro Air Vehicle Laboratory on end-to-end neural control of quadcopters. A major challenge in bringing these controllers to life is the “reality gap” between the real platform and the training environment. To address this, we combine online identification of the reality gap with pre-trained corrections through a deep neural controller, which is orders of magnitude more efficient than traditional computation of the optimal solution.

[ MAVLab ]

This is a dedicated Track Actuator from HEBI Robotics. Why they didn’t just call it a “tracktuator” is beyond me.

[ HEBI Robotics ]

Menteebot can navigate complex environments by combining a 3D model of the world with a dynamic obstacle map. On the first day in a new location, Menteebot generates the 3D model by following a person who shows the robot around.

[ Mentee Robotics ]

Here’s that drone with a 68 kg payload and 70 km range you’ve always wanted.

[ Malloy ]

AMBIDEX is a dual-armed robot with an innovative mechanism developed for safe coexistence with humans. Based on an innovative cable structure, it is designed to be both strong and stable.

[ NAVER Labs ]

As quadrotors take on an increasingly diverse range of roles, researchers often need to develop new hardware platforms tailored for specific tasks, introducing significant engineering overhead. In this article, we introduce the UniQuad series, a unified and versatile quadrotor hardware platform series that offers high flexibility to adapt to a wide range of common tasks, excellent customizability for advanced demands, and easy maintenance in case of crashes.

[ HKUST ]

The video demonstrates the field testing of a 43 kg (95 lb) amphibious cycloidal propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The vehicle utilizes a combination of cycloidal propellers (or cyclo-propellers), screw propellers, and tank treads for operations on land and underwater.

[ TAMU ]

The “pill” (the package hook) on Wing’s delivery drones is a crucial component of our aircraft! Did you know our package hook is designed to be aerodynamic and to fly stably, even at 65 mph?

[ Wing ]

Happy 50th to robotics at ABB!

[ ABB ]

This JHU Center for Functional Anatomy & Evolution Seminar is by Chen Li, on Terradynamics of Animals & Robots in Complex Terrain.

[ JHU ]



Food prep is one of those problems that seems like it should be solvable by robots. It’s a predictable, repetitive, basic manipulation task in a semi-structured environment—seems ideal, right? And obviously there’s a huge need, because human labor is expensive and getting harder and harder to find in these contexts. There are currently over a million unfilled jobs in the food industry in the United States, and even with jobs that are filled, the annual turnover rate is 150 percent (meaning a lot of workers don’t even last a year).

Food prep seems like a great opportunity for robots, which is why Chef Robotics and a handful of other robotics companies tackled it a couple years ago by bringing robots to fast casual restaurants like Chipotle or Sweetgreen, where you get served a custom-ish meal from a selection of ingredients at a counter.

But this didn’t really work out, for a couple of reasons. First, things that are mostly effortless for humans are inevitably extremely difficult for robots. And second, humans actually do a lot of useful things in a restaurant context besides just putting food onto plates, and the robots weren’t up for all of those things.

Still, Chef Robotics founder and CEO Rajat Bhageria wasn’t ready to let this opportunity go. “The food market is arguably the biggest market that’s tractable for AI today,” he told IEEE Spectrum. And with a bit of a pivot away from the complicated mess of fast casual restaurants, Chef Robotics has still managed to prepare over 20 million meals thanks to autonomous robot arms deployed all over North America. Without knowing it, you may even have eaten such a meal.

“The hard thing is, can you pick fast? Can you pick consistently? Can you pick the right portion size without spilling? And can you pick without making it look like the food was picked by a machine?” —Rajat Bhageria, Chef Robotics

When we spoke with Bhageria, he explained that there are three basic tasks involved in prepared food production: prep (tasks like chopping ingredients), the actual cooking process, and then assembly (or plating). Of these tasks, prep scales pretty well with industrial automation in that you can usually order pre-chopped or mixed ingredients, and cooking also scales well since you can cook more with only a minimal increase in effort just by using a bigger pot or pan or oven. What doesn’t scale well is the assembly, especially when any kind of flexibility or variety is required. You can clearly see this in action at any fast casual restaurant, where a couple of people are in the kitchen cooking up massive amounts of food while each customer gets served one at a time.

So with that bottleneck identified, let’s throw some robots at the problem, right? And that’s exactly what Chef Robotics did, explains Bhageria: “we went to our customers, who said that their biggest pain point was labor, and the most labor is in assembly, so we said, we can help you solve this.”

Chef Robotics started with fast casual restaurants. They weren’t the first to try this—many other robotics companies had attempted this before, with decidedly mixed results. “We actually had some good success in the early days selling to fast casual chains,” Bhageria says, “but then we had some technical obstacles. Essentially, if we want to have a human-equivalent system so that we can charge a human-equivalent service fee for our robot, we need to be able to do every ingredient. You’re either a full human equivalent, or our customers told us it wouldn’t be useful.”

Part of the challenge is that training robots to perform all of the different manipulations required for different assembly tasks requires different kinds of real-world data. That data simply doesn’t exist—or, if it does, any company that has it knows what it’s worth and isn’t sharing. You can’t easily simulate this kind of data, because food can be gross and difficult to handle, whether it’s gloopy or gloppy or squishy or slimy or unpredictably deformable in some other way, and you really need physical experience to train a useful manipulation model.

Setting fast casual restaurants aside for a moment, what about food prep situations where things are as predictable as possible, like mass-produced meals? We’re talking about food like frozen dinners, which have a handful of discrete ingredients packed into trays at factory scale. Frozen meal production relies on automation rather than robotics because the scale is such that the cost of dedicated equipment can be justified.

There’s a middle ground, though, where robots have found (some) opportunity: when you need to produce a high volume of the same meal, but that meal changes regularly. For example, think of any kind of pre-packaged meal that’s made in bulk, just not at frozen-food scale. It’s an opportunity for automation in a structured environment—but with enough variety that dedicated automation isn’t cost-effective. Suddenly, robots and their tiny bit of flexible automation have a chance to be a practical solution.

“We saw these long assembly lines, where humans were scooping food out of big tubs and onto individual trays,” Bhageria says. “They do a lot of different meals on these lines; it’s going to change over and they’re going to do different meals throughout the week. But at any given moment, each person is doing one ingredient, and maybe on a weekly basis, that person would do six ingredients. This was really compelling for us because six ingredients is something we can bootstrap in a lab. We can get something good enough and if we can get something good enough, then we can ship a robot, and if we can ship a robot to production, then we will get real world training data.”

Chef Robotics has been deploying robot modules that they can slot into existing food assembly lines in place of humans without any retrofitting necessary. The modules consist of six-degree-of-freedom arms wearing swanky IP67 washable suits. To handle different kinds of food, the robots can be equipped with a variety of utensils (and their accompanying manipulation software strategies). Sensing includes a few depth cameras, as well as a weight-sensing platform for the food tray to ensure consistent amounts of food are picked. And while arms with six degrees of freedom may be overkill for now, eventually the hope is that they’ll be able to handle more complex food like asparagus, where you need to do a little bit more than just scoop.
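The weight-sensing platform suggests a simple closed loop: scoop, weigh the tray, and keep going until the portion lands inside a tolerance band. Here is a minimal sketch of that idea; the target, tolerance, and function names are all illustrative assumptions, not Chef Robotics’ actual software.

```python
# Hypothetical weight-feedback portion control: the robot scoops, the
# tray scale reports the cumulative weight, and the robot stops once
# the portion is within tolerance of the target. Numbers are made up.

TARGET_G = 120.0     # desired portion, in grams (assumed)
TOLERANCE_G = 8.0    # acceptable shortfall, in grams (assumed)

def portion_reached(scale_readings):
    """Return how many scoops it took to reach the tolerance band.

    scale_readings: cumulative tray weight (grams) after each scoop.
    Returns None if the readings run out before the target is reached.
    """
    for scoops, weight in enumerate(scale_readings, start=1):
        if weight >= TARGET_G - TOLERANCE_G:
            return scoops
    return None

# Example: each scoop deposits roughly 50 g of food
print(portion_reached([48.0, 97.5, 146.0]))
```

In a real system the per-scoop deposit would vary with how gloopy or squishy the ingredient is, which is exactly why the feedback from the scale, rather than a fixed scoop count, drives the loop.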

While Chef Robotics seems to have a viable business here, Bhageria tells us that he keeps coming back to that vision of robots being useful in fast casual restaurants, and eventually, robots making us food in our homes. Making that happen will require time, experience, technical expertise, and an astonishing amount of real-world training data, which is the real value behind those 20 million robot-prepared meals (and counting). The more robots the company deploys, the more data they collect, which will allow them to train their food manipulation models to handle a wider variety of ingredients to open up even more deployments. Their robots, Chef’s website says, “essentially act as data ingestion engines to improve our AI models.”

The next step is likely ghost kitchens where the environment is still somewhat controlled and human interaction isn’t necessary, followed by deployments in commercial kitchens more broadly. But even that won’t be enough for Bhageria, who wants robots that can take over from all of the drudgery in food service: “I’m really excited about this vision,” he says. “How do we deploy hundreds of millions of robots all over the world that allow humans to do what humans do best?”



Against all odds, Ukraine is still standing almost two and a half years after Russia’s massive 2022 invasion. Of course, hundreds of billions of dollars in Western support as well as Russian errors have helped immensely, but it would be a mistake to overlook Ukraine’s creative use of new technologies, particularly drones. While uncrewed aerial vehicles have grabbed most of the attention, it is naval drones that could be the key to bringing Russian president Vladimir Putin to the negotiating table.

These naval-drone operations in the Black Sea against Russian warships and other targets have been so successful that they are prompting, in London, Paris, Washington, and elsewhere, fundamental reevaluations of how drones will affect future naval operations. In August 2023, for example, the Pentagon launched the billion-dollar Replicator initiative to field air and naval drones (also called sea drones) on a massive scale. It’s widely believed that such drones could be used to help counter a Chinese invasion of Taiwan.

And yet Ukraine’s naval drones initiative grew out of necessity, not grand strategy. Early in the war, Russia’s Black Sea fleet launched cruise missiles into Ukraine and blockaded Odesa, effectively shutting down Ukraine’s exports of grain, metals, and manufactured goods. The missile strikes terrorized Ukrainian citizens and shut down the power grid, but Russia’s blockade was arguably more consequential, devastating Ukraine’s economy and creating food shortages from North Africa to the Middle East.

With its navy seized or sunk during the war’s opening days, Ukraine had few options to regain access to the sea. So Kyiv’s troops got creative. Lukashevich Ivan Volodymyrovych, a brigadier general in the Security Service of Ukraine, the country’s counterintelligence agency, proposed building a series of fast, uncrewed attack boats. In the summer of 2022, the service, which is known by the acronym SBU, began with a few prototype drones. These quickly led to a pair of naval drones that, when used with commercial satellite imagery, off-the-shelf uncrewed aircraft, and Starlink terminals, gave Ukrainian operators the means to sink or disable a third of Russia’s Black Sea Fleet, including the flagship Moskva and most of the fleet’s cruise-missile-equipped warships.

To protect their remaining vessels, Russian commanders relocated the Black Sea Fleet to Novorossiysk, 300 kilometers east of Crimea. This move sheltered the ships from Ukrainian drones and missiles, but it also put them too far away to threaten Ukrainian shipping or defend the Crimean Peninsula. Kyiv has exploited the opening by restoring trade routes and mounting sustained airborne and naval drone strikes against Russian bases on Crimea and the Kerch Strait Bridge connecting the peninsula with Russia.

How Maguras and Sea Babies Hunt and Attack

The first Ukrainian drone boats were cobbled together with parts from jet skis, motorboats, and off-the-shelf electronics. But within months, manufacturers working for the Ukraine defense ministry and SBU fielded several designs that proved their worth in combat, most notably the Magura V5 and the Sea Baby.

Carrying a 300-kilogram warhead, on par with that of a heavyweight torpedo, the Magura V5 is a hunter-killer antiship drone designed to work in swarms that confuse and overwhelm a ship’s defenses. Equipped with Starlink terminals, which connect to SpaceX’s Starlink satellites, and GPS, a group of about three to five Maguras likely moves autonomously to a location near the potential target. From there, operators can wait until conditions are right and then attack the target from multiple angles using remote control and video feeds from the vehicles.

A Ukrainian Magura V5 hunter-killer sea drone was demonstrated at an undisclosed location in Ukraine on 13 April 2024. The domed pod toward the bow, which can rotate from side to side, contains a thermal camera used for guidance and targeting.Valentyn Origrenko/Reuters/Redux

Larger than a Magura, the Sea Baby is a multipurpose vehicle that can carry about 800 kg of explosives, which is close to twice the payload of a Tomahawk cruise missile. A Sea Baby was used in 2023 to inflict substantial damage on the Kerch Strait Bridge. A more recent version carries a rocket launcher that Ukrainian troops plan to use against Russian forces along the Dnipro River, which flows through eastern Ukraine and has often formed the frontline in that part of the country. Like a Magura, a Sea Baby is likely remotely controlled using Starlink and GPS. In addition to attack, it’s also equipped for surveillance and logistics.

Russia reduced the threat to its ships by moving them out of the region, but fixed targets like the Kerch Strait Bridge remain vulnerable to Ukrainian sea drones. To try to protect these structures from drone onslaughts, Russian commanders are taking a “kitchen sink” approach, submerging hulks around bridge supports, fielding more guns to shoot at incoming uncrewed vessels, and jamming GPS and Starlink around the Kerch Strait.

Ukrainian service members demonstrated the portable, ruggedized consoles used to remotely guide the Magura V5 naval drones in April 2024.Valentyn Origrenko/Reuters/Redux

While the war remains largely stalemated in the country’s north, Ukraine’s naval drones could yet force Russia into negotiations. The Crimean Peninsula was Moscow’s biggest prize from its decade-long assault on Ukraine. If the Kerch Bridge is severed and the Black Sea Fleet pushed back into Russian ports, Putin may need to end the fighting to regain control over Crimea.

Why the U.S. Navy Embraced the Swarm

Ukraine’s small, low-cost sea drones are offering a compelling view of future tactics and capabilities. But recent experiences elsewhere in the world are highlighting the limitations of drones for some crucial tasks: for protecting shipping from piracy, for example, or for stopping trafficking and illegal fishing, drones are less useful.

Before the Ukraine war, efforts by the U.S. Department of Defense to field surface sea drones focused mostly on large vehicles. In 2015, the Defense Advanced Research Projects Agency started, and the U.S. Navy later continued, a project that built two uncrewed surface vessels, called Sea Hunter and Sea Hawk. These were 130-tonne sea drones capable of roaming the oceans for up to 70 days while carrying payloads of thousands of pounds each. The point was to demonstrate the ability to detect, follow, and destroy submarines. The Navy and the Pentagon’s secretive Strategic Capabilities Office followed with the Ghost Fleet Overlord uncrewed vessel programs, which produced four larger prototypes designed to carry shipping-container-size payloads of missiles, sensors, or electronic countermeasures.

The U.S. Navy’s newly created Uncrewed Surface Vessel Division 1 (USVDIV-1) completed a deployment across the Pacific Ocean last year with four medium and large sea drones: Sea Hunter and Sea Hawk and two Overlord vessels, Ranger and Mariner. The five-month deployment from Port Hueneme, Calif., took the vessels to Hawaii, Japan, and Australia, where they joined in annual exercises conducted by U.S. and allied navies. The U.S. Navy continues to assess its drone fleet through sea trials lasting from several days to a few months.

The Sea Hawk is a U.S. Navy trimaran drone vessel designed to find, pursue, and attack submarines. The 130-tonne ship, photographed here in October 2023 in Sydney Harbor, was built to operate autonomously on missions of up to 70 days, but it can also accommodate human observers on board. Ensign Pierson Hawkins/U.S. Navy

In contrast with Ukraine’s small sea drones, which are usually remotely controlled and operate outside shipping lanes, the U.S. Navy’s much larger uncrewed vessels have to follow the nautical rules of the road. To navigate autonomously, these big ships rely on robust onboard sensors, processing for computer vision and target-motion analysis, and automation based on predictable forms of artificial intelligence, such as expert- or agent-based algorithms rather than deep learning.

But thanks to the success of the Ukrainian drones, the focus and energy in sea drones are rapidly moving to the smaller end of the scale. The U.S. Navy initially envisioned platforms like Sea Hunter conducting missions in submarine tracking, electronic deception, or clandestine surveillance far out at sea. And large drones will still be needed for such missions. However, with the right tactics and support, a group of small sea drones can conduct similar missions as well as other vital tasks.

For example, though they are constrained in speed, maneuverability, and power generation, solar- or sail-powered drones can stay out for months with little human intervention. The earliest of these are wave gliders like the Liquid Robotics (a Boeing company) SHARC, which has been conducting undersea and surface surveillance for the U.S. Navy for more than a decade. Newer designs like the Saildrone Voyager and Ocius Blue Bottle incorporate motors and additional solar or diesel power to haul payloads such as radars, jammers, decoys, or active sonars. The Ocean Aero Triton takes this model one step further: It can submerge, to conduct clandestine surveillance or a surprise attack, or to avoid detection.

The Triton, from Ocean Aero in Gulfport, Miss., is billed as the world’s only autonomous sea drone capable of both cruising underwater and sailing on the surface. Ocean Aero

Ukraine’s success in the Black Sea has also unleashed a flurry of new small antiship attack drones. USVDIV-1 will use the GARC from Maritime Applied Physics Corp. to develop tactics. The Pentagon’s Defense Innovation Unit has also begun purchasing drones for the China-focused Replicator initiative. Among the likely craft being evaluated are fast-attack sea drones from Austin, Texas–based Saronic.

Behind the soaring interest in small and inexpensive sea drones is the changing value proposition for naval drones. As recently as four years ago, military planners were focused on using them to replace crewed ships in “dull, dirty, and dangerous” jobs. But now, the thinking goes, sea drones can provide scale, adaptability, and resilience across each link in the “kill chain” that extends from detecting a target to hitting it with a weapon.

Today, to attack a ship, most navies generally have one preferred sensor (such as a radar system), one launcher, and one missile. But what these planners are now coming to appreciate is that a fleet of crewed surface ships with a collection of a dozen or two naval drones would offer multiple paths to both find that ship and attack it. These craft would also be less vulnerable, because of their dispersion.

Defending Taiwan by Surrounding It With a “Hellscape”

U.S. efforts to protect Taiwan may soon reflect this new value proposition. Many classified and unclassified war games suggest Taiwan and its allies could successfully defend the island—but at costs high enough to potentially dissuade a U.S. president from intervening on Taiwan’s behalf. With U.S. defense budgets capped by law and procurement constrained by rising personnel and maintenance costs, substantially growing or improving today’s U.S. military for this specific purpose is unrealistic. Instead, commanders are looking for creative solutions to slow or stop a Chinese invasion without losing most U.S. forces in the process.

Naval drones look like a good—and maybe the best— solution. The Taiwan Strait is only 160 kilometers (100 miles) wide, and Taiwan’s coastline offers only a few areas where large numbers of troops could come ashore. U.S. naval attack drones positioned on the likely routes could disrupt or possibly even halt a Chinese invasion, much as Ukrainian sea drones have denied Russia access to the western Black Sea and, for that matter, Houthi-controlled drones have sporadically closed off large parts of the Red Sea in the Middle East.


The new U.S. Indo-Pacific Command leader, Admiral Sam Paparo, wants to apply this approach to defending Taiwan in a scenario he calls “Hellscape.” In it, U.S. surface and undersea drones would likely be based near Taiwan, perhaps in the Philippines or Japan. When the potential for an invasion rises, the drones would move themselves or be carried by larger uncrewed or crewed ships to the western coast of Taiwan to wait.

Sea drones are well-suited to this role, thanks in part to the evolution of naval technologies and tactics over the past half century. Until World War II, submarines were the most lethal threat to ships. But since the Cold War, long-range subsonic, supersonic, and now hypersonic antiship missiles have commanded navy leaders’ attention. They’ve spent decades devising ways to protect their ships against such antiship missiles.

Much less effort has gone into defending against torpedoes, mines—or sea drones. A dozen or more missiles might be needed to ensure that just one reaches a targeted ship, and even then, the damage may not be catastrophic. But a single surface or undersea drone could easily evade detection and explode at a ship’s waterline to sink it, because in this case, water pressure does most of the work.

The level of autonomy available in most sea drones today is more than enough to attack ships in the Taiwan Strait. Details of U.S. military plans are classified, but a recent Hudson Institute report that I wrote with Dan Patt proposes a possible approach. In it, a drone flotilla, consisting of about three dozen hunter-killer surface drones, two dozen uncrewed surface vessels carrying aerial drones, and three dozen autonomous undersea drones, would take up designated positions in a “kill box” adjacent to one of Taiwan’s western beaches if a Chinese invasion fleet had begun massing on the opposite side of the strait. Even if they were based in Japan or the Philippines, the drones could reach Taiwan within a day. Upon receiving a signal from operators remotely using Starlink or locally using a line-of-sight radio, the drones would act as a mobile minefield, attacking troop transports and their escorts inside Taiwan’s territorial waters. Widely available electro-optical and infrared sensors, coupled to recognition algorithms, would direct the drones to targets.

Although communications with operators onshore would likely be jammed, the drones could coordinate their actions locally using line-of-sight Internet Protocol–based networks like Silvus or TTNT. For example, surface vessels could launch aerial drones that would attack the pilot houses and radars of ships, while surface and undersea drones strike ships at the waterline. The drones could also coordinate to ensure they do not all strike the same target and to prioritize the largest targets first. These kinds of simple collaborations are routine in today’s drones.
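The deconfliction described here—don’t all strike the same target, hit the largest first—amounts to a greedy assignment that each drone could compute locally from a shared target list. The sketch below is purely illustrative (drone IDs, target names, and tonnages are invented), not any actual military system.

```python
# Illustrative sketch of the simple coordination the article describes:
# rank targets by size, then deal them out so no target is struck twice
# until every target has at least one attacker. All data here is made up.

def assign_targets(drones, targets):
    """Assign each drone a distinct target, largest first.

    drones:  list of drone IDs, in an agreed-upon order
    targets: dict mapping target name -> size (e.g., tonnes displacement)
    Returns a dict of drone -> target. Once every target has an attacker,
    extra drones double up, again starting from the largest target.
    """
    ranked = sorted(targets, key=targets.get, reverse=True)
    return {drone: ranked[i % len(ranked)] for i, drone in enumerate(drones)}

# Example: four drones, three targets of decreasing size
fleet = {"transport-A": 25000, "escort-B": 7500, "escort-C": 4000}
print(assign_targets(["usv-1", "usv-2", "usv-3", "usv-4"], fleet))
```

Because every drone sorts the same list the same way, the assignment is consistent across the swarm without any central coordinator—only the shared target picture has to travel over the local line-of-sight network.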

Treating drones like mines reduces the complexity needed in their control systems and helps them comply with Pentagon rules for autonomous weapons. Rather than killer robots seeking out and destroying targets, the drones defending Taiwan would be passively waiting for Chinese forces to illegally enter a protected zone, within which they could be attacked.

Like Russia’s Black Sea Fleet, the Chinese navy will develop countermeasures to sea drones, such as employing decoy ships, attacking drones from the air, or using minesweepers to move them away from the invasion fleet. To stay ahead, operators will need to continue innovating tactics and behaviors through frequent exercises and experiments, like those underway at U.S. Navy Unmanned Surface Vessel Squadron Three. (Like the USVDIV-1, it is a unit under the U.S. Navy’s Surface Development Squadron One.) Lessons from such exercises would be incorporated into the defending drones as part of their programming before a mission.

The emergence of sea drones heralds a new era in naval warfare. After decades of focusing on increasingly lethal antiship missiles, navies now have to defend against capable and widely proliferating threats on, above, and below the water. And while sea drone swarms may be mainly a concern for coastal areas, these choke points are critical to the global economy and most nations’ security. For U.S. and allied fleets, especially, naval drones are a classic combination of threat and opportunity. As the Hellscape concept suggests, uncrewed vessels may be a solution to some of the most challenging and sweeping of modern naval scenarios for the Pentagon and its allies—and their adversaries.

This article was updated on 10 July 2024. An earlier version stated that sea drones from Saronic Technologies are being purchased by the U.S. Department of Defense’s Defense Innovation Unit. This could not be publicly confirmed.
