IEEE Spectrum Automation



The Ingenuity Mars Helicopter made its 72nd and final flight on 18 January. “While the helicopter remains upright and in communication with ground controllers,” NASA’s Jet Propulsion Lab said in a press release this afternoon, “imagery of its Jan. 18 flight sent to Earth this week indicates one or more of its rotor blades sustained damage during landing, and it is no longer capable of flight.” That’s what you’re seeing in the picture above: the shadow of a broken tip of one of the helicopter’s four two-foot-long carbon-fiber rotor blades. NASA is assuming that at least one blade struck the Martian surface during a “rough landing,” and this is not the kind of damage that will allow the helicopter to get back into the air. Ingenuity’s mission is over.


The Perseverance rover took this picture of Ingenuity on Aug. 2, 2023, just before flight 54. NASA/JPL-Caltech/ASU/MSSS

NASA held a press conference earlier this evening to give as much information as they can about exactly what happened to Ingenuity, and what comes next. First, here’s a summary from the press release:

Ingenuity’s team planned for the helicopter to make a short vertical flight on Jan. 18 to determine its location after executing an emergency landing on its previous flight. Data shows that, as planned, the helicopter achieved a maximum altitude of 40 feet (12 meters) and hovered for 4.5 seconds before starting its descent at a velocity of 3.3 feet per second (1 meter per second).

However, about 3 feet (1 meter) above the surface, Ingenuity lost contact with the rover, which serves as a communications relay for the rotorcraft. The following day, communications were reestablished and more information about the flight was relayed to ground controllers at NASA JPL. Imagery revealing damage to the rotor blade arrived several days later. The cause of the communications dropout and the helicopter’s orientation at time of touchdown are still being investigated.

While NASA doesn’t know for sure what happened, they do have some ideas based on the cause of the emergency landing during the previous flight, Flight 71. “[This location] is some of the hardest terrain we’ve ever had to navigate over,” said Teddy Tzanetos, Ingenuity Project Manager at NASA JPL, during the NASA press conference. “It’s very featureless—bland, sandy terrain. And that’s why we believe that during Flight 71, we had an emergency landing. She was flying over the surface and was realizing that there weren’t too many rocks to look at or features to navigate from, and that’s why Ingenuity called an emergency landing on her own.”

Ingenuity uses a downward-pointing VGA camera running at 30 Hz for monocular feature tracking, and compares the apparent motion of distinct features between frames to determine its motion over the ground. This optical flow technique is used for drones (and other robots) on Earth too, and it’s very reliable, as long as you have enough features to track. Where it starts to go wrong is when your camera is looking at things that are featureless, which is why consumer drones will sometimes warn you about unexpected behavior when flying over water, and why robotics labs often have bizarre carpets and wallpaper: the more features, the better. On Mars, Ingenuity has been reliably navigating by looking for distinctive features like rocks, but flying over a featureless expanse of sand caused serious problems, as Ingenuity’s Chief Pilot Emeritus Håvard Grip explained to us during today’s press conference:

The way a system like this works is by looking at the consensus of [the features] it sees, and then throwing out the things that don’t really agree with the consensus. The danger is when you run out of features, when you don’t have very many features to navigate on, and you’re not really able to establish what that consensus is and you end up tracking the wrong kinds of features, and that’s when things can get off track.
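For readers who want a concrete picture of how this kind of navigation works, here is a minimal Python sketch, assuming OpenCV: it tracks corner features between two downward-looking frames and uses a RANSAC consensus step to keep only the motion that most features agree on. The parameters, thresholds, and the flat-terrain altitude-to-scale conversion are illustrative assumptions, not Ingenuity’s actual flight software.

```python
# Minimal sketch of feature-based visual odometry with a consensus step,
# in the spirit of (but not identical to) Ingenuity's navigation filter.
# Frame names, altitude, and camera parameters are illustrative.
import cv2
import numpy as np

def estimate_ground_motion(prev_gray, curr_gray, altitude_m, focal_px, dt):
    # Detect distinctive features (corners); featureless sand yields few of these.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=10)
    if pts_prev is None or len(pts_prev) < 20:
        return None  # too few features to navigate; a real system might trigger a landing

    # Track the features into the current frame (Lucas-Kanade optical flow).
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # Consensus step: RANSAC keeps the motion most features agree on,
    # rejecting trackers that latched onto noise or drifting shadows.
    M, inliers = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    if M is None or inliers.sum() < 15:
        return None

    dx_px, dy_px = M[0, 2], M[1, 2]        # image translation in pixels
    meters_per_px = altitude_m / focal_px  # ground sample distance (flat-terrain assumption)
    return (dx_px * meters_per_px / dt, dy_px * meters_per_px / dt)  # velocity, m/s
```

When the consensus step has too few inliers, as over bland sand, the sketch simply returns nothing; a flight system has to decide what to do with that silence, which is why Ingenuity’s response was to call an emergency landing.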

This view from Ingenuity’s navigation camera during flight 70 (on December 22) shows areas of nearly featureless terrain that would cause problems during flights 71 and 72. NASA/JPL-Caltech

After the Flight 71 emergency landing, the team decided to try a “pop-up” flight next: it was supposed to be about 30 seconds in the air, just straight up to 12 meters and then straight down as a check-out of the helicopter’s systems. As Ingenuity was descending, just before landing, there was a loss of communications with the helicopter. “We have reason to believe that it was facing the same featureless sandy terrain challenges [as in the previous flight],” said Tzanetos. “And because of the navigation challenges, we had a rotor strike with the surface that would have resulted in a power brownout which caused the communications loss.” Grip describes what he thinks happened in more detail:

Some of this is speculation because of the sparse telemetry that we have, but what we see in the telemetry is that coming down towards the last part of the flight, on the sand, when we’re closing in on the ground, the helicopter relatively quickly starts to think that it’s moving horizontally away from the landing target. It’s likely that it made an aggressive maneuver to try to correct that right upon landing. And that would have accounted for a sideways motion and tilt of the helicopter that could have led to either striking the blade to the ground and then losing power, or making a maneuver that was aggressive enough to lose power before touching down and striking the blade, we don’t know those details yet. We may never know. But we’re trying as hard as we can with the data that we have to figure out those details.

When the Ingenuity team tried reestablishing contact with the helicopter the next sol, “she was right there where we expected her to be,” Tzanetos said. “Solar panel currents were looking good, which indicated that she was upright.” In fact, everything was “green across the board.” That is, until the team started looking through the images from Ingenuity’s navigation camera, and spotted the shadow of the damaged lower blade. Even if that’s the only damage to Ingenuity, the whole rotor system is now both unbalanced and producing substantially less lift, and further flights will be impossible.

A closeup of the shadow of the damaged blade tip. NASA/JPL-Caltech

There’s always that piece in the back of your head that’s getting ready every downlink—today could be the last day, today could be the last day. So there was an initial moment, obviously, of sadness, seeing that photo come down and pop on screen, which gives us certainty of what occurred. But that’s very quickly replaced with happiness and pride and a feeling of celebration for what we’ve pulled off. Um, it’s really remarkable the journey that she’s been on and worth celebrating every single one of those sols. Around 9pm tonight Pacific time will mark 1000 sols that Ingenuity has been on the surface since her deployment from the Perseverance rover. So she picked a very fitting time to come to the end of her mission. —Teddy Tzanetos

The Ingenuity team is guessing that there’s damage to more than one of the helicopter’s blades; the blades spin fast enough that if one hit the surface, others likely did too. The plan is to attempt to slowly spin the blades to bring others into view and collect more information. It sounds unlikely that NASA will divert the Perseverance rover to give Ingenuity a closer look; while continuing on its science mission, the rover will pass within 200 to 300 meters of Ingenuity and will try to take some pictures, but that’s likely too far away for a good-quality image.

Perseverance watches Ingenuity take off on flight 47 on March 14, 2023. NASA/JPL-Caltech/ASU/MSSS

As a tech demo, Ingenuity’s entire reason for existence was to push the boundaries of what’s possible. And as Grip explains, even in its last flight, the little helicopter was doing exactly that, going above and beyond and trying newer and riskier things until it got as far as it possibly could:

Overall, the way that Ingenuity has navigated using features of terrain has been incredibly successful. We didn’t design this system to handle this kind of terrain, but nonetheless it’s sort of been invincible until this moment where we flew in this completely bland terrain where you just have nothing to really hold on to. So there are some lessons in that for us: we now know that that particular kind of terrain can be a trap for a system like this. Backing up when encountering this featureless terrain is a functionality that a future helicopter could be equipped with. And then there are solutions like having a higher resolution camera, which would have likely helped mitigate this situation. But it’s all part of this tech demo, where we equipped this helicopter to do at most five flights in a pre-scouted area and it’s gone on to do so much more than that. And we just worked it all the way up to the line, and then just tipped it right over the line to where it couldn’t handle it anymore.

Arguably, Ingenuity’s most important contribution has been showing that it’s not just possible, but practical and valuable to have rotorcraft on Mars. “I don’t think we’d be talking about sample recovery helicopters if Ingenuity didn’t fly, period, and if it hadn’t survived for as long as it has,” Teddy Tzanetos told us after Ingenuity’s 50th flight. And it’s not just the sample return mission: JPL is also developing a much larger Mars Science Helicopter, which will owe its existence to Ingenuity’s success.

Nearly three years on Mars. 128 minutes and 11 miles of flight in the Martian skies. “I look forward to the day that one of our astronauts brings home Ingenuity and we can all visit it in the Smithsonian,” said Director of JPL Laurie Leshin at the end of today’s press conference.

I’ll be first in line.

We’ve written extensively about Ingenuity, including in-depth interviews with both helicopter and rover team members, and they’re well worth re-reading today. Thanks, Ingenuity. You did well.


What Flight 50 Means for the Ingenuity Mars Helicopter

Team lead Teddy Tzanetos on the helicopter’s milestone aerial mission


Mars Helicopter Is Much More Than a Tech Demo

A Mars rover driver explains just how much of a difference the little helicopter scout is making to Mars exploration


Ingenuity’s Chief Pilot Explains How to Fly a Helicopter on Mars

Simulation is the secret to flying a helicopter on Mars


How NASA Designed a Helicopter That Could Fly Autonomously on Mars

The Perseverance rover’s Mars Helicopter (Ingenuity) will take off, navigate, and land on Mars without human intervention



Over the past few weeks, we’ve seen a couple of high-profile videos of robotic systems doing really impressive things. And I mean, that’s what we’re all here for, right? Being impressed by the awesomeness of robots! But sometimes the awesomeness of robots is more complicated than what you see in a video making the rounds on social media—any robot has a lot of things going on behind the scenes to make it successful, but if you can’t tell what those things are, what you see at first glance might be deceiving you.

Earlier this month, a group of researchers from Stanford’s IRIS Lab introduced Mobile ALOHA, which (if you read the YouTube video description) is described as “a low-cost and whole-body teleoperation system for data collection”:

And just last week, Elon Musk posted a video of Tesla’s Optimus robot folding a shirt:


Most people who watch these videos without poking around in the descriptions or comments will likely not assume that these robots were being entirely controlled by experienced humans, because why would they? Even for roboticists, it can be tricky to know for sure whether the robot they’re watching has a human in the loop somewhere. This is a problem that’s not unique to the folks behind either of the videos above; it’s a communication issue that the entire robotics community struggles with. But as robots (and robot videos) become more mainstream, it’s important that we get better at it.

Why use teleoperation?

Humans are way, way, way, way, way better than robots at almost everything. We’re fragile and expensive, which is why so many people are trying to get robots to do stuff instead, but with a very few exceptions involving speed and precision, humans are the gold standard and are likely to remain so for the foreseeable future. So, if you need a robot to do something complicated or something finicky or something that might require some innovation or creativity, the best solution is to put a human in control.

What about autonomy, though?

Having one-to-one human teleoperation of a robot is a great way of getting things done, but it’s not scalable, and aside from some very specific circumstances, the whole point of robots is to do stuff autonomously at scale so that humans don’t have to. One approach to autonomy is to learn as much as you can from human teleoperation: Many robotics companies are betting that they’ll be able to use humans to gradually train their robotic systems, transitioning from full teleoperation to partial teleoperation to supervisory control to full autonomy. Sanctuary AI is a great example of this: They’ve been teleoperating their humanoid robots through all kinds of tasks, collecting training data as a foundation for later autonomy.

What’s wrong with teleoperation, then?

Nothing! Teleoperation is great. But when people see a robot doing something and it looks autonomous but it’s actually teleoperated, that’s a problem, because it’s a misrepresentation of the state of the technology. Not only do people end up with the wrong idea of how your robot functions and what it’s really capable of, it also means that whenever those people see other robots doing similar tasks autonomously, their frame of reference will be completely wrong, minimizing what otherwise may be a significant contribution to the field by other robotics folks. To be clear, I don’t (usually) think that the roboticists making these videos have any intention of misleading people, but that is unfortunately what often ends up happening.

What can we do about this problem?

Last year, I wrote an article for the IEEE Robotics & Automation Society (RAS) with some tips for making a good robot video, which includes arguably the most important thing: context. This covers teleoperation, along with other common things that can cause robot videos to mislead an unfamiliar audience. Here’s an excerpt from the RAS article:

It’s critical to provide accurate context for videos of robots. It’s not always clear (especially to nonroboticists) what a robot may be doing or not doing on its own, and your video should be as explicit as possible about any assistance that your system is getting. For example, your video should identify:

  • If the video has been sped up or slowed down
  • If the video makes multiple experiments look like one continuous experiment
  • If external power, compute, or localization is being used
  • How the robot is being controlled (e.g., human in the loop, human supervised, scripted actions, partial autonomy, full autonomy)

These things should be made explicit on the video itself, not in the video description or in captions. Clearly communicating the limitations of your work is the responsible thing to do, and not doing this is detrimental to the robotics community.

I want to emphasize that context should be made explicit on the video itself. That is, when you edit the video together, add captions or callouts or something that describes the context on top of the actual footage. Don’t put it in the description or in the subtitles or in a link, because when videos get popular online, they may be viewed and shared and remixed without any of that stuff being readily available.
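As one concrete way to do that, here is a minimal sketch (assuming a standard ffmpeg build with the drawtext filter, called from Python) that burns a context caption directly into the footage. The file names and caption text are placeholders; some ffmpeg builds also require an explicit fontfile for drawtext.

```python
# Burn a context caption into the video itself so it survives reposting and remixing.
# Assumes ffmpeg with the drawtext filter is on the PATH; file names are placeholders.
import subprocess

caption = "Robot is teleoperated by a human operator, video at 1x speed"
subprocess.run([
    "ffmpeg", "-i", "raw_demo.mp4",
    "-vf", f"drawtext=text='{caption}':fontsize=28:fontcolor=white:"
           "box=1:boxcolor=black@0.5:x=20:y=20",
    "-codec:a", "copy",          # keep the audio untouched
    "captioned_demo.mp4",
], check=True)
```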

So how can I tell if a robot is being teleoperated?

If you run across a video of a robot doing some kind of amazing manipulation task and aren’t sure whether it’s autonomous or not, here are some questions to ask that might help you figure it out.

  • Can you identify an operator? In both of the videos we mentioned above, if you look very closely, you can tell that there’s a human operator, whether it’s a pair of legs or a wayward hand in a force-sensing glove. This may be the first thing to look for, because sometimes an operator is very obvious, but at the same time, not seeing an operator isn’t particularly meaningful because it’s easy for them to be out of frame.
  • Is there any more information? The second thing to check is whether the video says anywhere what’s actually going on. Does the video have a description? Is there a link to a project page or paper? Are there credits at the end of the video? What account is publishing the video? Even if you can narrow down the institution or company or lab, you might be able to get a sense of whether they’re working on autonomy or teleoperation.
  • What kind of task is it? You’re most likely to see teleoperation in tasks that would be especially difficult for a robot to do autonomously. At the moment, that’s predominantly manipulation tasks that aren’t well structured—for example, getting multiple objects to interact with each other, handling things that are difficult to model (like fabrics), or extended multistep tasks. If you see a robot doing this stuff quickly and well, it’s worth questioning whether it’s autonomous.
  • Is the robot just too good? I always start asking more questions when a robot demo strikes me as just too impressive. But when does impressive become too impressive? Personally, I think a robot demonstrating human-level performance at just about any complex task is too impressive. Some autonomous robots definitely have reached that benchmark, but not many, and the circumstances of them doing so are usually atypical. Furthermore, it takes a lot of work to reach humanlike performance with an autonomous system, so there’s usually some warning in the form of previous work. If you see an impressive demo that comes out of nowhere, showcasing an autonomous capability without any recent precedents, that’s probably too impressive. Remember that it can be tricky with a video because you have no idea whether you’re watching the first take or the 500th, and that itself is a good thing to be aware of—even if it turns out that a demo is fully autonomous, there are many other ways of obfuscating how successful the system actually is.
  • Is it too fast? Autonomous robots are well known for being very fast and precise, but only in the context of structured tasks. For complex manipulation tasks, robots need to sense their environment, decide what to do next, and then plan how to move. This takes time. If you see an extended task that consists of multiple parts but the system never stops moving, that suggests it’s not fully autonomous.
  • Does it move like a human? Robots like to move optimally. Humans might also like to move optimally, but we’re bad at it. Autonomous robots tend to move smoothly and fluidly, while teleoperated robots often display small movements that don’t make sense in the context of the task, but are very humanlike in nature. For example, finger motions that are unrelated to gripping, or returning an arm to a natural rest position for no particular reason, or being just a little bit sloppy in general. If the motions seem humanlike, that’s usually a sign of a human in the loop rather than a robot that’s just so good at doing a task that it looks human.

None of these points make it impossible for an autonomous robot demo to come out of nowhere and blow everyone away. Improbable, perhaps, but not impossible. And the rare moments when that actually happens are part of what makes robotics so exciting. That’s why it’s so important to understand what’s going on when you see a robot doing something amazing, though—knowing how it’s done, and all of the work that went into it, can only make it more impressive.

This article was inspired by Peter Corke’s LinkedIn post, What’s with all these deceptive teleoperation demos? And extra thanks to Peter for his feedback on an early draft of this article.



While organic thin-film transistors built on flexible plastic have been around long enough for people to start discussing a Moore’s Law for bendable ICs, memory devices for these flexible electronics have been a bit more elusive. Now researchers from Tsinghua University in China have developed a fully flexible resistive random access memory device, dubbed FlexRAM, that offers another approach: a liquid one.

In research described in the journal Advanced Materials, the researchers used a gallium-based liquid metal to carry out FlexRAM’s data writing and reading. In an example of biomimicry, the gallium-based liquid metal (GLM) droplets undergo oxidation and reduction in a solution environment, mimicking the hyperpolarization and depolarization of neurons.

“This breakthrough fundamentally changes traditional notions of flexible memory, offering a theoretical foundation and technical path for future soft intelligent robots, brain-machine interface systems, and wearable/implantable electronic devices.”
—Jing Liu, Tsinghua University

Positive and negative bias voltages define the writing of a “1” and a “0,” respectively. When a low voltage is applied, the liquid metal is oxidized, corresponding to the high-resistance state of “1.” Reversing the voltage polarity returns the metal to its initial low-resistance state of “0.” This reversible switching process allows data to be stored and erased.

To showcase the reading and writing capabilities of FlexRAM, the researchers integrated it into a software and hardware setup. Through computer commands, they encoded a string of letters and numbers, represented in the form of 0s and 1s, onto an array of eight FlexRAM storage units, equivalent to one byte of data information. The digital signal from the computer underwent conversion into an analog signal using pulse-width modulation to precisely control the oxidation and reduction of the liquid metal.
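To make that encoding scheme concrete, here is an illustrative Python sketch of the mapping described above, with one byte spread across eight cells and each bit’s value choosing the polarity of the write pulse. The voltage levels and resistance threshold are stand-ins, not the parameters reported in the paper.

```python
# Illustrative sketch of the byte-to-cell encoding described above: one byte maps onto
# eight FlexRAM cells, and the bit value sets the polarity of the write bias.
# Voltage levels and the read threshold are placeholders, not the paper's values.

WRITE_1 = +0.5   # positive bias oxidizes the gallium droplet -> high-resistance "1"
WRITE_0 = -0.5   # negative bias reduces it back              -> low-resistance  "0"

def byte_to_write_pulses(char: str):
    bits = format(ord(char), "08b")              # e.g. 'A' -> "01000001"
    return [(cell, WRITE_1 if b == "1" else WRITE_0) for cell, b in enumerate(bits)]

def read_byte(cell_resistances, threshold_ohms=1e3):
    # High resistance reads back as "1", low resistance as "0" (threshold is illustrative).
    bits = "".join("1" if r > threshold_ohms else "0" for r in cell_resistances)
    return chr(int(bits, 2))

print(byte_to_write_pulses("A"))   # per-cell (index, bias voltage) write schedule
```

In the actual experiment, a pulse-width-modulated analog signal plays the role of the per-cell bias schedule sketched here.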

Photographs of the oxidation and reduction state of the gallium-based liquid metal at the heart of FlexRAM. Jing Liu/Tsinghua University

The present prototype is a volatile memory, according to Jing Liu, a professor at the Department of Biomedical Engineering at Tsinghua University. But Liu contends that the memory principle allows for the development of the device into different forms of memory.

This contention is supported by the unusual phenomenon that the data stored in FlexRAM persists even when the power is switched off. In a low or no-oxygen environment, FlexRAM can retain its data for up to 43,200 seconds (12 hours). It also exhibits repeatable use, maintaining stable performance for over 3,500 cycles of operation.

“This breakthrough fundamentally changes traditional notions of flexible memory, offering a theoretical foundation and technical path for future soft intelligent robots, brain-machine interface systems, and wearable/implantable electronic devices,” said Liu.

The GLM droplets are encapsulated in Ecoflex, a stretchable biopolymer. Using a 3D printer, the researchers printed Ecoflex molds and injected gallium-based liquid metal droplets and a solution of polyvinyl acetate hydrogel separately into the cavities in the mold. The hydrogel not only prevents solution leakage but also enhances the mechanical properties of the device, increasing its resistance ratio.

“FlexRAM could be incorporated into entire liquid-based computing systems, functioning as a logic device.”
—Jing Liu, Tsinghua University

In the present prototype, an array of 8 FlexRAM units can store one byte of information.

At this conceptual stage, millimeter-scale molding is sufficient to demonstrate the device’s working principle, Liu notes.

“The conceivable size scale for these FlexRAM devices can range widely,” said Liu. “For example, the size for each of the droplet memory elements can be from millimeter to nano-scale droplets. Interestingly, as revealed by the present study, the smaller the droplet size, the more sensitive the memory response.”

This groundbreaking work paves the way for the realization of brain-like circuits, aligning with concepts proposed by researchers such as Stuart Parkin at IBM over a decade ago. “FlexRAM could be incorporated into entire liquid-based computing systems, functioning as a logic device,” Liu envisions.

As researchers and engineers continue to address challenges and refine the technology, the potential applications of FlexRAM in soft robotics, brain-machine interface systems, and wearable/implantable electronics could be significant.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

  • Cybathlon Challenges: 2 February 2024, ZURICH
  • Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
  • ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
  • RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

You may not be familiar with Swiss-Mile, but you’d almost certainly recognize its robot: it’s the ANYmal with wheels on its feet that can do all kinds of amazing things. Swiss-Mile has just announced a seed round to commercialize these capabilities across quadrupedal platforms, including Unitree’s, which means it’s even affordable-ish!

It’s always so cool to see impressive robotics research move toward commercialization, and I’ve already started saving up for one of these of my own.

[ Swiss-Mile ]

Thanks Marko!

This video presents the capabilities of PAL Robotics’ TALOS robot as it demonstrates agile and robust walking using Model Predictive Control (MPC) references sent to a Whole-Body Inverse Dynamics (WBID) controller developed in collaboration with Dynamograde. The footage shows TALOS navigating various challenging terrains, including stairs and slopes, while handling unexpected disturbances and additional weight.

[ PAL Robotics ]

Thanks Lorna!

Do you want to create a spectacular bimanual manipulation demo? All it takes is this teleoperation system and a carefully cropped camera shot! This is based on the Mobile ALOHA system from Stanford that we featured in Video Friday last week.

[ AgileX ]

Wing is still trying to make the drone-delivery thing work, and it’s got a new, bigger drone to deliver even more stuff at once.

[ Wing ]

A lot of robotics research claims to be about search and rescue and disaster relief, but it really looks like RSL’s ANYmal can actually pull it off.

And here’s even more impressive video, along with some detail about how the system works.

[ Paper ]

This might be the most appropriate soundtrack for a robot video that I’ve ever heard.

Snakes have long captivated robotics researchers due to their effective locomotion, flexible body structure, and ability to adapt their skin friction to different terrains. While extensive research has delved into serpentine locomotion, there remains a gap in exploring rectilinear locomotion as a robotic solution for navigating through narrow spaces. In this study, we describe the fundamental principles of rectilinear locomotion and apply them to design a soft crawling robot using origami modules constructed from laminated fabrics.

[ SDU ]

We wrote about Fotokite’s innovative tethered drone seven or eight years ago, and it’s good to see the company is still doing solid work.

I do miss the consumer version, though.

[ Fotokite ]

[ JDP ] via [ Petapixel ]

This is SHIVAA the strawberry picking robot of DFKI Robotics Innovation Center. The system is being developed in the RoLand (Robotic Systems in Agriculture) project, coordinated by the #RoboticsInnovationCenter (RIC) of the DFKI Bremen. Within the project we design and develop a semi-autonomous, mobile system that is capable of harvesting strawberries independent of human interaction.

[ DFKI ]

On December 6, 2023, Demarcus Edwards talked to Robotics students as a speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question: “What can I do with a robotics degree?”

[ Michigan Robotics ]

This movie, Loss of Sensation, was released in Russia in 1935. It seems to be the movie that really, really irritated Karel Čapek, because they made his “robots” into mechanical beings instead of biological ones.

[ IMDB ]



You’re familiar with Karel Čapek, right? If not, you should be—he’s the guy who (along with his brother Josef) invented the word “robot.” Čapek introduced robots to the world in 1921, when his play “R.U.R.” (subtitled “Rossum’s Universal Robots”) was first performed in Prague. It was performed in New York City the next year, and by the year after that, it had been translated into 30 languages. Translated, that is, except for the word “robot” itself, which originally described artificial humans but within a decade of its introduction came to mean things that were mechanical and electronic in nature.

Čapek, it turns out, was a little miffed that his “robots” had been so hijacked, and in 1935, he wrote a column in the Lidové noviny “defending” his vision of what robots should be, while also resigning himself to what they had become. A new translation of this column is included as an afterword in a new English translation of R.U.R. that is accompanied by 20 essays exploring robotics, philosophy, politics, and AI in the context of the play, and it makes for fascinating reading.

R.U.R. and the Vision of Artificial Life is edited by Jitka Čejková, a professor at the Chemical Robotics Laboratory at the University of Chemistry and Technology Prague, whose research interests arguably make her one of the most qualified people to write about Čapek’s perspective on robots. “The chemical robots in the form of microparticles that we designed and investigated, and that had properties similar to living cells, were much closer to Čapek’s original ideas than any other robots today,” Čejková explains in the book’s introduction. These microparticles can exhibit surprisingly complex autonomous behaviors under specific situations, like solving simple mazes:

“I started to call these droplets liquid robots,” says Čejková. “Just as Rossum’s robots were artificial human beings that only looked like humans and could imitate only certain characteristics and behaviors of humans, so liquid robots, as artificial cells, only partially imitate the behavior of their living counterparts.”

What is or is not called a robot is an ongoing debate that most roboticists seem to try to avoid, but personally, I appreciate the idea that very broadly, a robot is something that seems alive but isn’t—something with independent embodied intelligence. Perhaps the requirement that a robot is mechanical and electronic is too strict, although as Čapek himself realized a hundred years ago, what defines a robot has escaped from the control of anyone, even its creator. Here then is his column from 1935, excerpted from R.U.R. and the Vision of Artificial Life, released just today:

“THE AUTHOR OF THE ROBOTS DEFENDS HIMSELF”
By Karel Čapek
Published in Lidové noviny, June 9, 1935

I know it is a sign of ingratitude on the part of the author, if he raises both hands against a certain popularity that has befallen something which is called his spiritual brainchild; for that matter, he is aware that by doing so he can no longer change a thing. The author was silent a goodly time and kept his own counsel, while the notion that robots have limbs of metal and innards of wire and cogwheels (or the like) has become current; he has learned, without any great pleasure, that genuine steel robots have started to appear, robots that move in various directions, tell the time, and even fly airplanes; but when he recently read that, in Moscow, they have shot a major film, in which the world is trampled underfoot by mechanical robots, driven by electromagnetic waves, he developed a strong urge to protest, at least in the name of his own robots. For his robots were not mechanisms. They were not made of sheet metal and cogwheels. They were not a celebration of mechanical engineering. If the author was thinking of any of the marvels of the human spirit during their creation, it was not of technology, but of science. With outright horror, he refuses any responsibility for the thought that machines could take the place of people, or that anything like life, love, or rebellion could ever awaken in their cogwheels. He would regard this somber vision as an unforgivable overvaluation of mechanics or as a severe insult to life.

The author of the robots appeals to the fact that he must know the most about it: and therefore he pronounces that his robots were created quite differently—that is, by a chemical path. The author was thinking about modern chemistry, which in various emulsions (or whatever they are called) has located substances and forms that in some ways behave like living matter. He was thinking about biological chemistry, which is constantly discovering new chemical agents that have a direct regulatory influence on living matter; about chemistry, which is finding—and to some extent already building—those various enzymes, hormones, and vitamins that give living matter its ability to grow and multiply and arrange all the other necessities of life. Perhaps, as a scientific layman, he might develop an urge to attribute this patient ingenious scholarly tinkering with the ability to one day produce, by artificial means, a living cell in the test tube; but for many reasons, amongst which also belonged a respect for life, he could not resolve to deal so frivolously with this mystery. That is why he created a new kind of matter by chemical synthesis, one which simply behaves a lot like the living; it is an organic substance, different from that from which living cells are made; it is something like another alternative to life, a material substrate in which life could have evolved if it had not, from the beginning, taken a different path. We do not have to suppose that all the different possibilities of creation have been exhausted on our planet. The author of the robots would regard it as an act of scientific bad taste if he had brought something to life with brass cogwheels or created life in the test tube; the way he imagined it, he created only a new foundation for life, which began to behave like living matter, and which could therefore have become a vehicle of life—but a life which remains an unimaginable and incomprehensible mystery. This life will reach its fulfillment only when (with the aid of considerable inaccuracy and mysticism) the robots acquire souls. From which it is evident that the author did not invent his robots with the technological hubris of a mechanical engineer, but with the metaphysical humility of a spiritualist.

Well then, the author cannot be blamed for what might be called the worldwide humbug over the robots. The author did not intend to furnish the world with plate metal dummies stuffed with cogwheels, photocells, and other mechanical gizmos. It appears, however, that the modern world is not interested in his scientific robots and has replaced them with technological ones; and these are, as is apparent, the true flesh-of-our-flesh of our age. The world needed mechanical robots, for it believes in machines more than it believes in life; it is fascinated more by the marvels of technology than by the miracle of life. For which reason, the author who wanted—through his insurgent robots, striving for a soul—to protest against the mechanical superstition of our times, must in the end claim something which nobody can deny him: the honor that he was defeated.

Excerpted from R.U.R. and the Vision of Artificial Life, by Karel Čapek, edited by Jitka Čejková. Published by The MIT Press. Copyright © 2024 MIT. All rights reserved.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

  • Cybathlon Challenges: 2 February 2024, ZURICH
  • Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
  • ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
  • RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Figure’s robot is watching videos of humans making coffee, and then making coffee on its own.

While this is certainly impressive, just be aware that it’s not at all clear from the video exactly how impressive it is.

[ Figure ]

It’s really the shoes that get me with Westwood’s THEMIS robot.

THEMIS can also deliver a package just as well as a human can, if not better!

And I appreciate the inclusion of all of these outtakes, too:

[ Westwood Robotics ]

Kepler Exploration Robot recently unveiled its latest innovation, the Kepler Forerunner series of general-purpose humanoid robots. This advanced humanoid stands at a height of 178cm (5’10”), weighs 85kg (187 lbs.), and boasts an intelligent and dexterous hand with 12 degrees of freedom. The entire body has up to 40 degrees of freedom, enabling functionalities such as navigating complex terrains, intelligent obstacle avoidance, flexible manipulation of hands, powerful lifting and carrying of heavy loads, hand-eye coordination, and intelligent interactive communication.

[ Kepler Exploration ]

Introducing the new Ballie, your true AI companion. With more advanced intelligence, Ballie can come right to you and project visuals on your walls. It can also help you interact with other connected devices or take care of hassles.

[ Samsung ]

There is a thing called Drone Soccer that got some exposure at CES this week, but apparently it’s been around for several years, having originated in South Korea. It’s inspired by Quidditch and targeted at STEM students.

[ Drone Soccer ]

Every so often, JPL dumps a bunch of raw footage onto YouTube. This time, there’s Perseverance’s view of Ingenuity taking off, a test of the EELS robot, and an unusual sample tube drop test.

[ JPL ]

Our first months delivering to Walmart customers have made one thing clear: Demand for drone delivery is real. On the heels of our Dallas-wide FAA approvals, today we announced that millions of new DFW-area customers will have access to drone delivery in 2024!

[ Wing ]

Dave Burke works with Biomechatronics researcher Michael Fernandez to test a prosthesis with neural control, by cutting a sheet of paper with scissors. This is the first time in 30 years that Dave has performed this task with his missing hand.

[ MIT ]

Meet DJI’s first delivery drone—FlyCart 30. Overcome traditional transport challenges and start a new era of dynamic aerial delivery with large payload capacity, long operation range, high reliability, and intelligent features.

[ DJI ]

The Waymo Driver autonomously operating both a passenger vehicle and class 8 truck safely in various freeway scenarios, including on-ramps and off-ramps, lane merges, and sharing the road with others.

[ Waymo ]

In this paper, we present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks. DiffuseBot bridges the gap between virtually generated content and physical utility by (i) augmenting the diffusion process with a physical dynamical simulation which provides a certificate of performance, and (ii) introducing a co-design procedure that jointly optimizes physical design and control by leveraging information about physical sensitivities from differentiable simulation.

[ Paper ]



The generative AI revolution embodied in tools like ChatGPT, Midjourney, and many others is at its core based on a simple formula: Take a very large neural network, train it on a huge dataset scraped from the Web, and then use it to fulfill a broad range of user requests. Large language models (LLMs) can answer questions, write code, and spout poetry, while image-generating systems can create convincing cave paintings or contemporary art.

So why haven’t these amazing AI capabilities translated into the kinds of helpful and broadly useful robots we’ve seen in science fiction? Where are the robots that can clean off the table, fold your laundry, and make you breakfast?

Unfortunately, the highly successful generative AI formula—big models trained on lots of Internet-sourced data—doesn’t easily carry over into robotics, because the Internet is not full of robotic-interaction data in the same way that it’s full of text and images. Robots need robot data to learn from, and this data is typically created slowly and tediously by researchers in laboratory environments for very specific tasks. Despite tremendous progress on robot-learning algorithms, without abundant data we still can’t enable robots to perform real-world tasks (like making breakfast) outside the lab. The most impressive results typically only work in a single laboratory, on a single robot, and often involve only a handful of behaviors.

If the abilities of each robot are limited by the time and effort it takes to manually teach it to perform a new task, what if we were to pool together the experiences of many robots, so a new robot could learn from all of them at once? We decided to give it a try. In 2023, our labs at Google and the University of California, Berkeley came together with 32 other robotics laboratories in North America, Europe, and Asia to undertake the RT-X project, with the goal of assembling data, resources, and code to make general-purpose robots a reality.

Here is what we learned from the first phase of this effort.

How to create a generalist robot

Humans are far better at this kind of learning: our brains can, with a little practice, handle what are essentially changes to our body plan, as happens when we pick up a tool, ride a bicycle, or get in a car. That is, our “embodiment” changes, but our brains adapt. RT-X is aiming for something similar in robots: to enable a single deep neural network to control many different types of robots, a capability called cross-embodiment. The question is whether a deep neural network trained on data from a sufficiently large number of different robots can learn to “drive” all of them—even robots with very different appearances, physical properties, and capabilities. If so, this approach could potentially unlock the power of large datasets for robotic learning.

The scale of this project is very large because it has to be. The RT-X dataset currently contains nearly a million robotic trials for 22 types of robots, including many of the most commonly used robotic arms on the market. The robots in this dataset perform a huge range of behaviors, including picking and placing objects, assembly, and specialized tasks like cable routing. In total, there are about 500 different skills and interactions with thousands of different objects. It’s the largest open-source dataset of real robotic actions in existence.

Surprisingly, we found that our multirobot data could be used with relatively simple machine-learning methods, provided that we follow the recipe of using large neural-network models with large datasets. Leveraging the same kinds of models used in current LLMs like ChatGPT, we were able to train robot-control algorithms that do not require any special features for cross-embodiment. Much like a person can drive a car or ride a bicycle using the same brain, a model trained on the RT-X dataset can simply recognize what kind of robot it’s controlling from what it sees in the robot’s own camera observations. If the robot’s camera sees a UR10 industrial arm, the model sends commands appropriate to a UR10. If the model instead sees a low-cost WidowX hobbyist arm, the model moves it accordingly.
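A highly simplified sketch of what such a cross-embodiment control loop looks like is below. The class and method names are illustrative stand-ins rather than the RT-X codebase, but they show the key point: a single policy serves every robot, and the only cue about which embodiment it is driving comes from the robot’s own camera image.

```python
# Simplified sketch of a cross-embodiment control loop; names are illustrative,
# not the RT-X implementation. One policy network serves every robot.
import numpy as np

class CrossEmbodimentPolicy:
    """One model for all robots: (camera image, text instruction) -> normalized action."""
    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # Placeholder inference: a real policy would run a large neural network here.
        # The output is a generic action, e.g. [dx, dy, dz, droll, dpitch, dyaw, gripper].
        return np.zeros(7)

def control_loop(robot, policy: CrossEmbodimentPolicy, instruction: str, max_steps=100):
    for _ in range(max_steps):
        image = robot.get_camera_image()          # the robot's own viewpoint is the only
        action = policy.act(image, instruction)   # hint about which arm is being driven
        robot.apply_action(action)                # robot-specific driver maps the normalized
                                                  # action onto its joints or Cartesian controller
        if robot.task_done():
            break
```

The `robot` object here stands in for each lab’s own hardware interface; the important design choice is that nothing in the policy itself branches on the robot type.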

To test the capabilities of our model, five of the laboratories involved in the RT-X collaboration each tested it in a head-to-head comparison against the best control system they had developed independently for their own robot. Each lab’s test involved the tasks it was using for its own research, which included things like picking up and moving objects, opening doors, and routing cables through clips. Remarkably, the single unified model provided improved performance over each laboratory’s own best method, succeeding at the tasks about 50 percent more often on average.

While this result might seem surprising, we found that the RT-X controller could leverage the diverse experiences of other robots to improve robustness in different settings. Even within the same laboratory, every time a robot attempts a task, it finds itself in a slightly different situation, and so drawing on the experiences of other robots in other situations helped the RT-X controller cope with natural variability and edge cases.




Building robots that can reason

Encouraged by our success with combining data from many robot types, we next sought to investigate how such data can be incorporated into a system with more in-depth reasoning capabilities. Complex semantic reasoning is hard to learn from robot data alone. While the robot data can provide a range of physical capabilities, more complex tasks like “Move apple between can and orange” also require understanding the semantic relationships between objects in an image, basic common sense, and other symbolic knowledge that is not directly related to the robot’s physical capabilities.

So we decided to add another massive source of data to the mix: Internet-scale image and text data. We used an existing large vision-language model that is already proficient at many tasks that require some understanding of the connection between natural language and images. The model is similar to the ones available to the public such as ChatGPT or Bard. These models are trained to output text in response to prompts containing images, allowing them to solve problems such as visual question-answering, captioning, and other open-ended visual understanding tasks. We discovered that such models can be adapted to robotic control simply by training them to also output robot actions in response to prompts framed as robotic commands (such as “Put the banana on the plate”). We applied this approach to the robotics data from the RT-X collaboration.
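The core trick of treating actions as text can be sketched in a few lines: continuous action dimensions are discretized into bins and written out as tokens the language model can emit, then parsed back into a motor command. The bin count, action ranges, and token format below are illustrative assumptions, not the exact RT-X scheme.

```python
# Illustrative sketch of "actions as text": discretize continuous actions into bins
# that a language model can emit as tokens, then decode them back into motor commands.
# Bin count, ranges, and token format are assumptions, not the RT-X specification.
import numpy as np

NUM_BINS = 256
ACTION_LOW, ACTION_HIGH = -1.0, 1.0   # normalized action range (assumption)

def action_to_tokens(action: np.ndarray) -> str:
    bins = np.clip((action - ACTION_LOW) / (ACTION_HIGH - ACTION_LOW) * (NUM_BINS - 1),
                   0, NUM_BINS - 1).astype(int)
    return " ".join(str(b) for b in bins)          # e.g. "140 89 127 ..."

def tokens_to_action(token_str: str) -> np.ndarray:
    bins = np.array([int(t) for t in token_str.split()])
    return ACTION_LOW + bins / (NUM_BINS - 1) * (ACTION_HIGH - ACTION_LOW)

# Training pairs then look like ordinary (prompt, target-text) examples:
prompt = "Instruction: put the banana on the plate. Image: <camera frame>"
target = action_to_tokens(np.array([0.1, -0.3, 0.0, 0.0, 0.0, 0.2, 1.0]))
```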

The RT-X model uses images or text descriptions of specific robot arms doing different tasks to output a series of discrete actions that will allow any robot arm to do those tasks. By collecting data from many robots doing many tasks from robotics labs around the world, we are building an open-source dataset that can be used to teach robots to be generally useful. Chris Philpot

To evaluate the combination of Internet-acquired smarts and multirobot data, we tested our RT-X model with Google’s mobile manipulator robot. We gave it our hardest generalization benchmark tests. The robot had to recognize objects and successfully manipulate them, and it also had to respond to complex text commands by making logical inferences that required integrating information from both text and images. The latter is one of the things that make humans such good generalists. Could we give our robots at least a hint of such capabilities?

Even without specific training, this Google research robot is able to follow the instruction “move apple between can and orange.” This capability is enabled by RT-X, a large robotic manipulation dataset and the first step towards a general robotic brain.

We conducted two sets of evaluations. As a baseline, we used a model that excluded all of the generalized multirobot RT-X data that didn’t involve Google’s robot. Google’s robot-specific dataset is in fact the largest part of the RT-X dataset, with over 100,000 demonstrations, so the question of whether all the other multirobot data would actually help in this case was very much open. Then we tried again with all that multirobot data included.

In one of the most difficult evaluation scenarios, the Google robot needed to accomplish a task that involved reasoning about spatial relations (“Move apple between can and orange”); in another task it had to solve rudimentary math problems (“Place an object on top of a paper with the solution to ‘2+3’”). These challenges were meant to test the crucial capabilities of reasoning and drawing conclusions.

In this case, the reasoning capabilities (such as the meaning of “between” and “on top of”) came from the Web-scale data included in the training of the vision-language model, while the ability to ground the reasoning outputs in robotic behaviors—commands that actually moved the robot arm in the right direction—came from training on cross-embodiment robot data from RT-X. Some examples of evaluations where we asked the robots to perform tasks not included in their training data are shown below.

While these tasks are rudimentary for humans, they present a major challenge for general-purpose robots. Without robotic demonstration data that clearly illustrates concepts like “between,” “near,” and “on top of,” even a system trained on data from many different robots would not be able to figure out what these commands mean. By integrating Web-scale knowledge from the vision-language model, our complete system was able to solve such tasks, deriving the semantic concepts (in this case, spatial relations) from Internet-scale training, and the physical behaviors (picking up and moving objects) from multirobot RT-X data.

To our surprise, we found that the inclusion of the multirobot data improved the Google robot’s ability to generalize to such tasks by a factor of three. This result suggests that not only was the multirobot RT-X data useful for acquiring a variety of physical skills, it could also help to better connect such skills to the semantic and symbolic knowledge in vision-language models. These connections give the robot a degree of common sense, which could one day enable robots to understand the meaning of complex and nuanced user commands like “Bring me my breakfast” while carrying out the actions to make it happen.

The next steps for RT-X

The RT-X project shows what is possible when the robot-learning community acts together. Because of this cross-institutional effort, we were able to put together a diverse robotic dataset and carry out comprehensive multirobot evaluations that wouldn’t be possible at any single institution. Since the robotics community can’t rely on scraping the Internet for training data, we need to create that data ourselves. We hope that more researchers will contribute their data to the RT-X database and join this collaborative effort. We also hope to provide tools, models, and infrastructure to support cross-embodiment research. We plan to go beyond sharing data across labs, and we hope that RT-X will grow into a collaborative effort to develop data standards, reusable models, and new techniques and algorithms.

Our early results hint at how large cross-embodiment robotics models could transform the field. Much as large language models have mastered a wide range of language-based tasks, in the future we might use the same foundation model as the basis for many real-world robotic tasks. Perhaps new robotic skills could be enabled by fine-tuning or even prompting a pretrained foundation model. In a similar way to how you can prompt ChatGPT to tell a story without first training it on that particular story, you could ask a robot to write “Happy Birthday” on a cake without having to tell it how to use a piping bag or what handwritten text looks like. Of course, much more research is needed for these models to take on that kind of general capability, as our experiments have focused on single arms with two-finger grippers doing simple manipulation tasks.

As more labs engage in cross-embodiment research, we hope to further push the frontier on what is possible with a single neural network that can control many robots. These advances might include adding diverse simulated data from generated environments, handling robots with different numbers of arms or fingers, using different sensor suites (such as depth cameras and tactile sensing), and even combining manipulation and locomotion behaviors. RT-X has opened the door for such work, but the most exciting technical developments are still ahead.

This is just the beginning. We hope that with this first step, we can together create the future of robotics: where general robotic brains can power any robot, benefiting from data shared by all robots around the world.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

  • Cybathlon Challenges: 2 February 2024, ZURICH
  • Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
  • ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
  • RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

One approach to robot autonomy is to learn from human demonstration, which can be very effective as long as you have enough high quality data to work with. Mobile ALOHA is a low-cost and whole-body teleoperation system for data collection from Stanford’s IRIS Lab, and under the control of an experienced human, it can do pretty much everything we’ve ever fantasized about home robots doing for us.

[ Stanford ]

Researchers at Harvard SEAS and Boston University’s Sargent College of Health & Rehabilitation Sciences used a soft, wearable robot to help a person living with Parkinson’s walk without freezing. The robotic garment, worn around the hips and thighs, gives a gentle push to the hips as the leg swings, helping the patient achieve a longer stride. The research demonstrates the potential of soft robotics to treat a potentially dangerous symptom of Parkinson’s disease and could allow people living with the disease to regain their mobility and independence.

[ Harvard SEAS ]

Happy 2024 from SkyMul!

[ SkyMul ]

Thanks, Eohan!

As the holiday season approaches, we at Kawasaki Robotics (USA), Inc. wanted to take a moment to express our warmest wishes to you. May your holidays be filled with joy, love, and peace, and may the New Year bring you prosperity, success, and happiness. From our team to yours, we wish you a very happy holiday season and a wonderful New Year ahead.

[ Kawasaki Robotics ]

Aurora Flight Sciences is working on a new X-plane for the Defense Advanced Research Projects Agency’s (DARPA) Control of Revolutionary Aircraft with Novel Effectors (CRANE) program. X-65 is purpose-designed for testing and demonstrating the benefits of active flow control (AFC) at tactically relevant scale and flight conditions.

[ Aurora ]

Well, this is the craziest piece of immersive robotic teleop hardware I’ve ever seen.

[ Jinkisha ]

Looks like Moley Robotics is still working on the least practical robotic kitchen ever.

[ Moley ]



As IEEE Spectrum editors, we pride ourselves on spotting promising technologies and following them from the research phase through development and ultimately deployment. In every January issue, we focus on the technologies that are now poised to achieve significant milestones in the new year.

This issue was curated by Senior Editor Samuel K. Moore, our in-house expert on semiconductors. So it’s no surprise that he included a story on Intel’s plan to roll out two momentous chip technologies in the next few months.

For “Intel Hopes to Leapfrog Its Competitors,” Moore directed our editorial intern, Gwendolyn Rak, to report on the risk the chip giant is taking by introducing two technologies at once. We began tracking the first technology, nanosheet transistors, in 2017. By the time we gave all the details in a 2019 feature article, it was clear that this device was destined to be the successor to the FinFET. Moore first spotted the second technology, back-side power delivery, at the IEEE International Electron Devices Meeting in 2019. Less than two years later, Intel publicly committed to incorporating the tech in 2024.

Speaking of commitment, the U.S. military’s Defense Advanced Research Projects Agency has played an enormous part in bankrolling some of the fundamental advances that appear in these pages. Many of our readers will be familiar with the robots that Senior Editor Evan Ackerman covered during DARPA’s humanoid challenge almost 10 years ago. Those robots were essentially research projects, but as Ackerman reports in “Year of the Humanoid,” a few companies will start up pilot projects in 2024 to see if this generation of humanoids is ready to roll up its metaphorical sleeves and get down to business.

More recently, fully homomorphic encryption (FHE) has burst onto the scene. Moore, who has been covering the Cambrian explosion in chip architectures for AI and other alternative computing modalities since the mid-2010s, notes that, as with the robotics challenge, DARPA was the initial driver.

“You’d expect the three companies DARPA funded to come up with a chip, though there was no guarantee they’d commercialize it,” says Moore, who wrote “Chips to Compute With Encrypted Data Are Coming.” “But what you wouldn’t expect is three more startups, independently of DARPA, to come out with their own FHE chips at the same time.”

Senior Editor Tekla S. Perry’s story about phosphorescent OLEDs, “A Behind-the-Screens Change for OLED,” is actually a deep cut for us. One of the first feature articles Moore edited at Spectrum way back in 2000 was Stephen Forrest’s article on organic electronics. His lab developed the first phosphorescent OLED materials, which are hugely more efficient than the fluorescent ones. Forrest was a founder of Universal Display Corp., which has now, after more than two decades, finally commercialized the last of its trio of phosphorescent colors—blue.

Then there’s our cover story about deepfakes and their potential impact on dozens of national elections later this year. We’ve been tracking the rise of deepfakes since mid-2018, when we ran a story about AI researchers betting on whether or not a deepfake video about a political candidate would receive more than 2 million views during the U.S. midterm elections that year. As Senior Editor Eliza Strickland reports in “This Election Year, Look for Content Credentials,” several companies and industry groups are working hard to ensure that deepfakes don’t take down democracy.

Best wishes for a healthy and prosperous new year, and enjoy this year’s technology forecast. It’s been years in the making.

This article appears in the January 2024 print issue.



This story is part of our Top Tech 2024 special report.

Journey to the Center of the Earth

To unlock the terawatt potential of geothermal energy, MIT startup Quaise Energy is testing a deep-drilling rig in 2024 that will use high-power millimeter waves to melt a column of rock down as far as 10 to 20 kilometers. Its “deeper, hotter, and faster” strategy will start with old oil-and-gas drilling structures and extend them by blasting radiation from a gyrotron to vaporize the hard rock beneath. At these depths, Earth reaches 500 °C. Accessing this superhot geothermal energy could be a key part of achieving net zero emission goals by 2050, according to Quaise executives.


“Batteries Included” Induction Ovens

Now we’re cooking with gas—but soon, we may be cooking with induction. A growing number of consumers are switching to induction-based stoves and ovens to address environmental concerns and health risks associated with gas ranges. But while these new appliances are more energy efficient, most models require modified electrical outlets and cost hundreds of dollars to install. That’s why startups like Channing Street Copper and Impulse Labs are working to make induction ovens easier to install by adding built-in batteries that supplement regular wall-socket power. Channing Street Copper plans to roll out its battery-boosted Charlie appliance in early 2024.


Triage Tech to the Rescue

In the second half of 2024, the U.S. Defense Advanced Research Projects Agency will begin the first round of its Triage Challenge, a competition to develop sensors and algorithms to support triage efforts during mass-casualty incidents. According to a DARPA video presentation from last February, the agency is seeking new ways to help medics at two stages of treatment: During primary triage, those most in need of care will be identified with sensors from afar. Then, when the patients are stable, medics can decide the best treatment regimens based on data gleaned from noninvasive sensors. The three rounds will continue through 2026, with prizes totaling US $7 million.


Killer Drones Deployed From the Skies

A new class of missile-firing drones will take to the skies in 2024. Like a three-layer aerial nesting doll, the missile-stuffed drone is itself released from the belly of a bomber while in flight. The uncrewed aircraft was developed by energy and defense company General Atomics as part of the Defense Advanced Research Projects Agency’s LongShot program and will be flight-tested this year to prove its feasibility in air-based combat. Its goal is to extend the range and effectiveness of both air-to-air missiles and the current class of fighter jets while new aircraft are introduced.


Visible’s Anti-Activity Tracker

Long COVID and chronic fatigue often go unseen by others. But it’s important that people with these invisible illnesses understand how different activities affect their symptoms so they can properly pace their days. That’s why one man with long COVID, Harry Leeming, decided to create Visible, an app that helps users monitor activity and avoid overexertion. This year, according to Leeming, Visible will launch a premium version of the app that uses a specialized heart-rate monitor. While most wearables are meant for workouts, Leeming says, these armband monitors are optimized for lower heart rates to help people with both long COVID and fatigue. The app will also collect data from consenting users to help research these conditions.


Amazon Launches New Internet Service—Literally

Amazon expects to begin providing Internet service from space with Project Kuiper by the end of 2024. The US $10 billion project aims to expand reliable broadband internet access to rural areas around the globe by launching a constellation of more than 3,000 satellites into low Earth orbit. While the project will take years to complete in full, Amazon is set to start beta testing with customers later this year. If successful, Kuiper could be integrated into the suite of Amazon Web Services. SpaceX’s Starlink, meanwhile, has been active since 2019 and already has 5,000 satellites in orbit.


Solar-Powered Test Drive

The next car you buy might be powered by the sun. Long awaited by potential customers and crowdfunders, solar electric vehicles (SEVs) made by the startup Aptera Motors are set to hit the road in 2024, the company says. Like the cooler cousin of an SUV, these three-wheeled SEVs feature a sleek, aerodynamic design to cut down on drag. The latest version of the vehicle combines plug-in capability with solar panels that cover its roof, allowing for a 1,600-kilometer range on a single charge and up to 65 km a day from solar power. Aptera says it aims to begin early production in 2024, with the first 2,000 vehicles set to be delivered to investors.


Zero Trust, Two-Thirds Confidence

“Trust but verify” is now a proverb of the past in cybersecurity policy in the United States. By the end of the 2024 fiscal year, in September, all U.S. government agencies will be required to switch to a Zero Trust security architecture. All users must validate their identity and devices—even when they’re already connected to government networks and VPNs. This is achieved with methods like multifactor authentication and other access controls. About two-thirds of security professionals employed by federal agencies are confident that their department will hit the cybersecurity deadline, according to a 2023 report.
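For readers curious what that looks like in code, here is a minimal sketch of the per-request checks a zero-trust service performs. It is illustrative only, not any agency’s mandated implementation: the verify_identity and verify_device helpers are hypothetical stand-ins for a real identity provider (with multifactor authentication) and a device-attestation service.

```python
# Minimal zero-trust sketch: every request is re-verified, even if it
# arrives from inside the agency network or over a VPN.
# verify_identity() and verify_device() are hypothetical placeholders.

from dataclasses import dataclass


@dataclass
class Request:
    user_token: str   # short-lived credential issued after a multifactor login
    device_id: str    # identifier for a managed, attested device
    resource: str     # what the caller wants to access


def verify_identity(token: str) -> bool:
    # Placeholder: in practice, validate the token's signature and expiry,
    # and confirm it was issued after multifactor authentication.
    return token.startswith("mfa:")


def verify_device(device_id: str) -> bool:
    # Placeholder: in practice, check the device against an inventory and
    # its reported security posture (patch level, disk encryption, etc.).
    return device_id in {"laptop-0042", "laptop-0043"}


def authorize(req: Request) -> bool:
    # No implicit trust from network location: both checks run on every
    # request, and access is scoped to the specific resource requested.
    return (verify_identity(req.user_token)
            and verify_device(req.device_id)
            and req.resource in {"hr-portal", "mail"})


if __name__ == "__main__":
    print(authorize(Request("mfa:abc123", "laptop-0042", "mail")))      # True
    print(authorize(Request("password-only", "laptop-0042", "mail")))   # False
```

The point of the sketch is the shape of the logic, not the specific checks: nothing is granted on the basis of being “inside” the network.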


First Light for Vera Rubin

Vera C. Rubin Observatory, home to the largest digital camera ever constructed, is expected to open its eye to the sky for the first time in late 2024. The observatory features an 8.4-meter wide-field telescope that will scan the Southern Hemisphere’s skies over the course of a decade-long project. Equipped with a 3,200-megapixel camera, the telescope will photograph an area the size of 40 full moons in each exposure from its perch atop a Chilean mountain. That means it can capture the entire visible sky every three to four nights. When operational, the Rubin Observatory will help astronomers inventory the solar system, map the Milky Way, and shed light on dark matter and dark energy.
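For a rough sense of how those numbers fit together, here is a back-of-the-envelope sketch in Python. The inputs are commonly cited approximations rather than official observatory specifications: a field of view of roughly 9.6 square degrees, a full moon covering about 0.2 square degrees, and on the order of 18,000 square degrees of sky to survey.

```python
# Back-of-the-envelope check on Rubin Observatory's survey cadence.
# All inputs are rough, commonly cited figures, not official specs.

FIELD_OF_VIEW_SQ_DEG = 9.6    # sky area covered by one exposure
FULL_MOON_SQ_DEG = 0.2        # apparent area of the full moon (~0.5 degree across)
VISIBLE_SKY_SQ_DEG = 18_000   # order-of-magnitude area of the surveyed sky

moons_per_exposure = FIELD_OF_VIEW_SQ_DEG / FULL_MOON_SQ_DEG
fields_to_cover_sky = VISIBLE_SKY_SQ_DEG / FIELD_OF_VIEW_SQ_DEG
pointings_per_night = fields_to_cover_sky / 3.5   # covering the sky every 3-4 nights

print(f"~{moons_per_exposure:.0f} full moons per exposure")       # ~48
print(f"~{fields_to_cover_sky:.0f} fields to tile the sky")       # ~1875
print(f"~{pointings_per_night:.0f} pointings per night needed")   # ~536
```

A few hundred pointings a night is plausible for a telescope that takes an exposure roughly every half minute, which is what makes the every-three-to-four-nights cadence achievable.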


Hailing Air Taxis at the Olympics

At this year’s summer Olympic Games in Paris, attendees may be able to take an electric vertical-take-off-and-landing vehicle, or eVTOL, to get around the city. Volocopter, in Bruchsal, Germany, hopes to make an air taxi service available to sports enthusiasts and tourists during the competition. Though the company is still awaiting certification from the European Union Aviation Safety Agency, Volocopter plans to offer three routes between various parts of the city, as well as two round-trip routes for tourists. Volocopter’s air taxis could make Paris the first European city to offer eVTOL services.


Faster Than a Speeding Bullet

Boom Technology is developing an airliner, called Overture, that will fly faster than the speed of sound. The U.S. company says it’s set to finish construction of its North Carolina “superfactory” in 2024. Each year Boom plans to manufacture as many as 33 of the aircraft, which the company claims will be the world’s fastest airliner. Overture is designed to be capable of flying twice as fast as today’s commercial planes, and Boom says it expects the plane to be powered by sustainable aviation fuel, made without petroleum. The company says it already has orders in place from commercial airlines and is aiming for first flight by 2027.



Ten years ago, at the DARPA Robotics Challenge (DRC) Trial event near Miami, I watched the most advanced humanoid robots ever built struggle their way through a scenario inspired by the Fukushima nuclear disaster. A team of experienced engineers controlled each robot, and overhead safety tethers kept them from falling over. The robots had to demonstrate mobility, sensing, and manipulation—which, with painful slowness, they did.

These robots were clearly research projects, but DARPA has a history of catalyzing technology with a long-term view. The DARPA Grand and Urban Challenges for autonomous vehicles, in 2005 and 2007, formed the foundation for today’s autonomous taxis. So, after DRC ended in 2015 with several of the robots successfully completing the entire final scenario, the obvious question was: When would humanoid robots make the transition from research project to a commercial product?

This article is part of our special report Top Tech 2024.

The answer seems to be 2024, when a handful of well-funded companies will be deploying their robots in commercial pilot projects to figure out whether humanoids are really ready to get to work.

One of the robots that made an appearance at the DRC Finals in 2015 was called ATRIAS, developed by Jonathan Hurst at the Oregon State University Dynamic Robotics Laboratory. In 2015, Hurst cofounded Agility Robotics to turn ATRIAS into a human-centric, multipurpose, and practical robot called Digit. Approximately the same size as a human, Digit stands 1.75 meters tall (about 5 feet, 8 inches), weighs 65 kilograms (about 140 pounds), and can lift 16 kg (about 35 pounds). Agility is now preparing to produce a commercial version of Digit at massive scale, and the company sees its first opportunity in the logistics industry, where it will start doing some of the jobs where humans are essentially acting like robots already.

Are humanoid robots useful?

“We spent a long time working with potential customers to find a use case where our technology can provide real value, while also being scalable and profitable,” Hurst says. “For us, right now, that use case is moving e-commerce totes.” Totes are standardized containers that warehouses use to store and transport items. As items enter or leave the warehouse, empty totes need to be continuously moved from place to place. It’s a vital job, and even in highly automated warehouses, much of that job is done by humans.

Agility says that in the United States, there are currently several million people working at tote-handling tasks, and logistics companies are having trouble keeping positions filled, because in some markets there are simply not enough workers available. Furthermore, the work tends to be dull, repetitive, and stressful on the body. “The people doing these jobs are basically doing robotic jobs,” says Hurst, and Agility argues that these people would be much better off doing work that’s more suited to their strengths. “What we’re going to have is a shifting of the human workforce into a more supervisory role,” explains Damion Shelton, Agility Robotics’ CEO. “We’re trying to build something that works with people,” Hurst adds. “We want humans for their judgment, creativity, and decision-making, using our robots as tools to do their jobs faster and more efficiently.”

For Digit to be an effective warehouse tool, it has to be capable, reliable, safe, and financially sustainable for both Agility and its customers. Agility is confident that all of this is possible, citing Digit’s potential relative to the cost and performance of human workers. “What we’re encouraging people to think about,” says Shelton, “is how much they could be saving per hour by being able to allocate their human capital elsewhere in the building.” Shelton estimates that a typical large logistics company spends at least US $30 per employee-hour for labor, including benefits and overhead. The employee, of course, receives much less than that.

Agility is not yet ready to provide pricing information for Digit, but we’re told that it will cost less than $250,000 per unit. Even at that price, if Digit is able to achieve Agility’s goal of at least 20,000 working hours (five years of two shifts of work per day), that brings the hourly rate of the robot to $12.50. A service contract would likely add a few dollars per hour to that. “You compare that against human labor doing the same task,” Shelton says, “and as long as it’s apples to apples in terms of the rate that the robot is working versus the rate that the human is working, you can decide whether it makes more sense to have the person or the robot.”
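That arithmetic is easy to check. Here it is spelled out in Python, with a hypothetical service-contract figure of a few dollars per hour standing in for the number Agility hasn’t published:

```python
# Rough amortization of Digit's cost per working hour, using the figures
# in the article; the service-contract cost is an illustrative guess.

unit_price = 250_000     # upper bound on price per robot, in dollars
working_hours = 20_000   # Agility's target: 5 years x 2 shifts per day
service_per_hour = 3     # hypothetical service-contract cost, dollars/hour

robot_per_hour = unit_price / working_hours + service_per_hour
human_per_hour = 30      # Shelton's estimate of fully loaded labor cost

print(f"Robot: ~${robot_per_hour:.2f}/hour vs. human: ~${human_per_hour}/hour")
# Robot: ~$15.50/hour vs. human: ~$30/hour
```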

Agility’s robot won’t be able to match the general capability of a human, but that’s not the company’s goal. “Digit won’t be doing everything that a person can do,” says Hurst. “It’ll just be doing that one process-automated task,” like moving empty totes. In these tasks, Digit is able to keep up with (and in fact slightly exceed) the speed of the average human worker, when you consider that the robot doesn’t have to accommodate the needs of a frail human body.

Amazon’s experiments with warehouse robots

The first company to put Digit to the test is Amazon. In 2022, Amazon invested in Agility as part of its Industrial Innovation Fund, and late last year Amazon started testing Digit at its robotics research and development site near Seattle, Wash. Digit will not be lonely at Amazon—the company currently has more than 750,000 robots deployed across its warehouses, including legacy systems that operate in closed-off areas as well as more modern robots that have the necessary autonomy to work more collaboratively with people. These newer robots include autonomous mobile robotic bases like Proteus, which can move carts around warehouses, as well as stationary robot arms like Sparrow and Cardinal, which can handle inventory or customer orders in structured environments. But a robot with legs will be something new.

“What’s interesting about Digit is because of its bipedal nature, it can fit in spaces a little bit differently,” says Emily Vetterick, director of engineering at Amazon Global Robotics, who is overseeing Digit’s testing. “We’re excited to be at this point with Digit where we can start testing it, because we’re going to learn where the technology makes sense.”

Where two legs make sense has been an ongoing question in robotics for decades. Obviously, in a world designed primarily for humans, a robot with a humanoid form factor would be ideal. But balancing dynamically on two legs is still difficult for robots, especially when those robots are carrying heavy objects and are expected to work at a human pace for tens of thousands of hours. When is it worthwhile to use a bipedal robot instead of something simpler?

“The people doing these jobs are basically doing robotic jobs.”—Jonathan Hurst, Agility Robotics

“The use case for Digit that I’m really excited about is empty tote recycling,” Vetterick says. “We already automate this task in a lot of our warehouses with a conveyor, a very traditional automation solution, and we wouldn’t want a robot in a place where a conveyor works. But a conveyor has a specific footprint, and it’s conducive to certain types of spaces. When we start to get away from those spaces, that’s where robots start to have a functional need to exist.”

The need for a robot doesn’t always translate into the need for a robot with legs, however, and a company like Amazon has the resources to build its warehouses to support whatever form of robotics or automation it needs. Its newer warehouses are indeed built that way, with flat floors, wide aisles, and other environmental considerations that are particularly friendly to robots with wheels.

“The building types that we’re thinking about [for Digit] aren’t our new-generation buildings. They’re older-generation buildings, where we can’t put in traditional automation solutions because there just isn’t the space for them,” says Vetterick. She describes the organized chaos of some of these older buildings as including narrower aisles with roof supports in the middle of them, and areas where pallets, cardboard, electrical cord covers, and ergonomics mats create uneven floors. “Our buildings are easy for people to navigate,” Vetterick continues. “But even small obstructions become barriers that a wheeled robot might struggle with, and where a walking robot might not.” Fundamentally, that’s the advantage bipedal robots offer relative to other form factors: They can quickly and easily fit into spaces and workflows designed for humans. Or at least, that’s the goal.

Vetterick emphasizes that the Seattle R&D site deployment is only a very small initial test of Digit’s capabilities. Having the robot move totes from a shelf to a conveyor across a flat, empty floor is not reflective of the use case that Amazon ultimately would like to explore. Amazon is not even sure that Digit will turn out to be the best tool for this particular job, and for a company so focused on efficiency, only the best solution to a specific problem will find a permanent home as part of its workflow. “Amazon isn’t interested in a general-purpose robot,” Vetterick explains. “We are always focused on what problem we’re trying to solve. I wouldn’t want to suggest that Digit is the only way to solve this type of problem. It’s one potential way that we’re interested in experimenting with.”

The idea of a general-purpose humanoid robot that can assist people with whatever tasks they may need is certainly appealing, but as Amazon makes clear, the first step for companies like Agility is to find enough value performing a single task (or perhaps a few different tasks) to achieve sustainable growth. Agility believes that Digit will be able to scale its business by solving Amazon’s empty tote-recycling problem, and the company is confident enough that it’s preparing to open a factory in Salem, Ore. At peak production the plant will eventually be capable of manufacturing 10,000 Digit robots per year.

A menagerie of humanoids

Agility is not alone in its goal to commercially deploy bipedal robots in 2024. At least seven other companies are working toward the same goal, backed by hundreds of millions of dollars in funding; 1X, Apptronik, Figure, Sanctuary, Tesla, and Unitree are among those with commercial humanoid robot prototypes.

Despite an influx of money and talent into commercial humanoid robots over the past two years, there have been no recent fundamental technological breakthroughs to substantially aid these robots’ development. Sensors and computers are capable enough, but actuators remain complex and expensive, and batteries struggle to power bipedal robots for the length of a work shift.

There are other challenges as well, including creating a robot that’s manufacturable with a resilient supply chain and developing the service infrastructure to support a commercial deployment at scale. The biggest challenge by far is software. It’s not enough to simply build a robot that can do a job—that robot has to do the job with the kind of safety, reliability, and efficiency that will make it desirable as more than an experiment.

There’s no question that Agility Robotics and the other companies developing commercial humanoids have impressive technology, a compelling narrative, and an enormous amount of potential. Whether that potential will translate into humanoid robots in the workplace now rests with companies like Amazon, who seem cautiously optimistic. It would be a fundamental shift in how repetitive labor is done. And now, all the robots have to do is deliver.

This article appears in the January 2024 print issue as “Year of the Humanoid.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Wishing you and your loved ones merry Christmas, happy holidays, and a happy New Year from everyone at the Autonomous Systems Lab at ETH Zürich!

[ ASL ]

Merry Christmas and sustainable 2024 from VUB-imec Brubotics & Fysc!

[ BruBotics ]

Thanks, Bram!

Embark on MOMO (Mobile Object Manipulation Operator)’s thrilling quest to ignite joy and excitement! Watch as MOMO skillfully places the tree topper, ensuring that every KIMLAB member’s holiday season is filled with happiness and brightness. Happy Holidays!

[ KIMLAB ]

Merry Christmas from AgileX Robotics and our little wheeled bipedal robot, T-Rex! As we step into 2024, may the joy of the season accompany you throughout the year. Here’s to a festive holiday filled with warmth, laughter, and innovative adventures!

[ AgileX Robotics ]

To celebrate this amazing year, we’d like to share a special holiday video showcasing our most requested demo! We hope it brings you a smile as bright as the lights of the season.

[ Flexiv ]

The Robotnik team is still working to make even smarter, more autonomous and more efficient mobile robotics solutions available to you in 2024. Merry Christmas!

[ Robotnik ]

Season’s Greetings from ABB Robotics!

[ ABB ]

If you were at ICRA you got a sneak peek at this, but here’s a lovely Spot tango from the AI Institute.

[ The Institute ]

CL-1 is one of the few humanoid robots around the world that achieves dynamic stair climbing based on real-time terrain perception, mainly thanks to LimX Dynamics’ advanced motion control and AI algorithms, along with proprietary high-performance actuators and hardware systems.

[ LimX Dynamics ]

We wrote about Parallel Systems a couple years ago, and here’s a brief update.

[ Parallel Systems ]

After 1,000 Martian days of exploration, NASA’s Perseverance rover is studying rocks that show several eras in the history of a river delta billions of years old. Scientists are investigating this region of Mars, known as Jezero Crater, to see if they can find evidence of ancient life recorded in the rocks. Perseverance project scientist Ken Farley provides a guided tour of a richly detailed panorama of the rover’s location in November 2023, taken by the Mastcam-Z instrument.

[ NASA ]

It’s been many, many years since we’ve seen a new steampunk robot from I-Wei Huang, but it was worth the wait!

[ CrabFu ]

OK, apparently this is a loop of Digit standing in front of a fireplace for 10 hours, rather than a very impressive demonstration of battery life.

[ Agility ]
