IEEE Spectrum Automation

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

ReachBot is a new concept for planetary exploration, consisting of a small body and long, lightweight extending arms loaded primarily in tension. The arms are equipped with spined grippers for anchoring on rock surfaces. Experiments with rock grasping and coordinated locomotion illustrate the advantages of low inertia passive grippers, triggered by impact and using stored mechanical energy for the internal force.

[ Paper ]

DHL Supply Chain is deploying Stretch to automate trailer unloading and support warehouse associates. In the past 8-10 years there have been tremendous advancements in warehouse automation. DHL has been a leader in deploying automation technology to improve efficiency, drive cost-effectiveness, and support exceptional employee experiences. Discover how they are putting Stretch to work.

[ Boston Dynamics ]

Scientists at the University of Bristol have drawn on the design and life of a mysterious zooplankton to develop underwater robots. These robotic units called RoboSalps, after their animal namesakes, have been engineered to operate in unknown and extreme environments such as extra-terrestrial oceans.

RoboSalps are unique as each individual module can swim on its own. This is possible because of a small motor with rotor blades—typically used for drones—inserted into the soft tubular structure. When swimming on their own, RoboSalps modules are difficult to control, but after joining them together to form colonies, they become more stable and show sophisticated movements.

[ Bristol ]

AIce is an Autonomous Zamboni Convoy designed to automate ice resurfacing in any ice rink. The current goal of the project is to demonstrate a leader-follower autonomous driving task using computer vision, motion planning, control, and localization. The team aspires to build the project in a way that lets it grow, after completion, into a fully autonomous Zamboni.

[ AIce ] via [ CMU ]

We propose a new neck design for legged robots to achieve robust visual-inertial state estimation in dynamic locomotion. While visual-inertial state estimation is widely used in robotics, it has a problem of being disturbed by the impacts and vibration generated when legged robots move dynamically. To address this problem, we develop a tunable neck system that absorbs the impacts and vibration during diverse gait locomotions.

[ Paper ]

I will not make any comments about meat-handling robots.

[ Soft Robotics ]

This should be pretty cool to see once it’s running on hardware.

[ Paper ]

A largely untapped potential for aerial robots is to capture airborne targets in flight. We present an approach in which a simple dynamic model of a quadrotor/target interaction leads to the design of a gripper and associated velocity sufficiency region with a high probability of capture. We demonstrate in-flight experiments that a 550 g drone can capture an 85 g target at various relative velocities between 1 m/s and 2.7 m/s.
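
As a back-of-the-envelope check on why capture perturbs the drone (our own arithmetic, not taken from the paper), a perfectly inelastic capture at the quoted masses and relative speeds changes the drone's velocity by roughly a third of a meter per second at the high end. A short Python calculation:

m_drone, m_target = 0.550, 0.085          # kg, from the video description
for v_rel in (1.0, 2.7):                  # m/s, relative velocity at capture
    # In the drone's pre-capture frame, an inelastic capture leaves the
    # combined body moving at the momentum-weighted average velocity.
    dv_drone = m_target * v_rel / (m_drone + m_target)
    print(f"v_rel = {v_rel} m/s -> drone velocity change of about {dv_drone:.2f} m/s")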

[ Paper ]

The process of bin picking presents new challenges again and again. To cope with small, flat component geometries as well as entanglements and packaging material, Fraunhofer IPA applies machine-learning methods. In addition to increasing the robustness of the picking process, the researchers also aim to minimize cycle time and commissioning effort.

[ Fraunhofer ]

The history of lidar: After the devastating loss of Mars Observer, the Goddard team mourns and regroups to build a second MOLA instrument for the Mars Global Surveyor mission. But before their laser altimeter goes to Mars, the team seizes an opportunity to test it on the Space Shuttle.

[ NASA ] [ Leaders in Lidar, Chapter 1 ]

What are the challenges in the development of humanoid robotic systems? What are the advantages and what are the criticalities? Bruno Siciliano, coordinator of PRISMA Lab, discusses these themes with Fabio Puglia, president and co-founder of Oversonic Robotics. Moderated by science journalist Riccardo Oldani, Siciliano and Puglia also bring concrete cases of the development of two humanoid robots, Rodyman and RoBee respectively, and their applications.

[ PRISMA Lab ]

Please join us for a lively panel discussion featuring GRASP Faculty members including Dr. Nadia Figueroa, Dr. Dinesh Jayaraman, and Dr. Marc Miskin. This panel will be moderated by Penn Engineering SEAS Dean Dr. Vijay Kumar.

[ UPenn ]

An interactive webinar discussing how progress in robotic materials is impacting the field of manipulation. The second conversation in the series, hosted by Northwestern’s Center for Robotics and Biosystems. Moderator: Carmel Majidi, Carnegie Mellon University. Panelists: Elliot W. Hawkes, UC Santa Barbara; Tess Hellebrekers, Meta AI; Nancy Pollard, Carnegie Mellon University; Yon Visell, UC Santa Barbara.

[ Northwestern ]

At the 2022 Conference on Robot Learning (CoRL), Waymo’s Head of Research Drago Anguelov shared some of his team’s recent research on improving models for behavior.

[ Waymo ]

This week’s CMU RI Seminar is from Russ Tedrake, on “Motion Planning Around Obstacles with Graphs of Convex Sets.”

[ CMU ]



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

When creating robots, it can be challenging to achieve the right combination of qualities, which sometimes contradict one another. For example, it’s difficult to make a robot that is both flexible and strong—but not impossible.

In a recent study, researchers created a robot that offers a high degree of flexibility while still maintaining high tension within its “muscles,” giving it sufficient torsional motion to accomplish difficult tasks. In an experiment, the robot was able to remove a cap from a bottle, producing a torsional motion 2.5 times greater than that of the next leading robot of its type. The results were published January 13 in IEEE Robotics and Automation Letters.

Video: “Soft Tensegrity Robot Arm with Twist Manipulation,” developed by the Suzumori Endo Lab at Tokyo Tech.

Tensegrity robots are made of networks of rigid frames and soft cables, which enable them to change their shape by adjusting their internal tension.

“Tensegrity structures are intriguing due to their unique characteristics—lightweight, flexible, and durable,” explains Ryota Kobayashi, a Master’s student at the Tokyo Institute of Technology, who was involved in the study. “These robots could operate in challenging unknown environments, such as caves or space, with more sophisticated and effective behavior.”

Tensegrity robots can have a foundational structure with varying numbers of rigid structures, or “bars,” ranging from two to twelve or sometimes even more—but as a general rule of thumb, robots with more bars are typically more complex and difficult to design.

In their study, Kobayashi’s team created a tensegrity robot that relies on six-bar tensegrity modules. To ensure the robot achieves large torsion, the researchers used a virtual map of triangles, placing the robot’s artificial muscles so that they connect the vertices of the triangles. When the muscles contract, they pull the vertices of the triangles closer together.

Relying on this technique, the robot achieved a large torsional motion of 50 degrees in two directions using only a 20 percent contraction of the artificial muscles. Kobayashi says his team was surprised at the efficiency of the system—just small contractions of the artificial muscles resulted in large contractions and torsional deformations of the structure.
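
To build intuition for how a small muscle contraction can produce a large twist, consider a deliberately crude toy model (ours, not the model in the paper): two rigid triangular plates connected by a diagonal cable, with the plate separation assumed to stay fixed while the cable shortens. The short Python sketch below uses made-up dimensions and solves for the resulting twist angle.

import numpy as np

# Toy model: two parallel triangular plates of circumradius R, separated by a
# fixed height h. A "muscle" cable runs from a bottom vertex to a top vertex
# offset by an angle phi0 around the twist axis. Shortening the cable forces
# the top plate to rotate toward the bottom vertex, i.e., to twist.
R = 0.10                        # plate circumradius in meters (made up)
h = 0.08                        # plate separation in meters (made up, held constant)
phi0 = np.deg2rad(60.0)         # initial angular offset of the cable attachment

def cable_length(phi):
    # Distance between a bottom vertex at angle 0 and a top vertex at angle phi.
    return np.sqrt((2.0 * R * np.sin(phi / 2.0)) ** 2 + h ** 2)

L0 = cable_length(phi0)
for contraction in (0.05, 0.10, 0.20):
    L = (1.0 - contraction) * L0                        # shortened cable
    chord = np.sqrt(max(L ** 2 - h ** 2, 0.0))          # remaining horizontal chord
    phi = 2.0 * np.arcsin(min(chord / (2.0 * R), 1.0))  # new angular offset
    twist = np.rad2deg(phi0 - phi)                      # rotation of the top plate
    print(f"{contraction:.0%} contraction -> about {twist:.0f} degrees of twist")

Even in this crude geometry, a small fractional change in cable length removes a disproportionately large share of the horizontal chord, so the plate must rotate through a large angle to accommodate it; the real robot’s triangle map exploits similar geometric leverage.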

“Most six-bar tensegrity robots only roll with slight deformations of the structure, resulting in limited movements,” says Dr. Hiroyuki Nabae, an assistant professor at the Tokyo Institute of Technology who was also involved in the study. Notably, the authors report that their six-bar robot yields torsional motions 2.5 times larger than those of any other six-bar tensegrity robot they could find in the literature.

Next, the research team attached rubber fingers to the robot to help it grip objects and tested its ability to complete tasks. In one experiment, the robot arm lowers onto a Coca-Cola bottle, grips the cap, twists, raises the arm, and then repeats the grip-and-twist motion once more to remove the cap—all in a matter of seconds.

The researchers are considering ways to build upon this technology, for example by increasing the robot’s ability to bend in different directions and incorporating tech that allows the robot to recognize new shapes in its environment. This latter advancement could help the robot adapt more to novel environments and tasks as needed.



What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands—all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb that’s attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he’s fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man’s nemesis Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.

Such scenarios may seem like science fiction, but recent progress in robotics and neuroscience makes extra robotic limbs conceivable with today’s technology. Our research groups at Imperial College London and the University of Freiburg, in Germany, together with partners in the European project NIMA, are now working to figure out whether such augmentation can be realized in practice to extend human abilities. The main questions we’re tackling involve both neuroscience and neurotechnology: Is the human brain capable of controlling additional body parts as effectively as it controls biological parts? And if so, what neural signals can be used for this control?

We think that extra robotic limbs could be a new form of human augmentation, improving people’s abilities on tasks they can already perform as well as expanding their ability to do things they simply cannot do with their natural human bodies. If humans could easily add and control a third arm, or a third leg, or a few more fingers, they would likely use them in tasks and performances that went beyond the scenarios mentioned here, discovering new behaviors that we can’t yet even imagine.

Levels of human augmentation

Robotic limbs have come a long way in recent decades, and some are already used by people to enhance their abilities. Most are operated via a joystick or other hand controls. For example, that’s how workers on manufacturing lines wield mechanical limbs that hold and manipulate components of a product. Similarly, surgeons who perform robotic surgery sit at a console across the room from the patient. While the surgical robot may have four arms tipped with different tools, the surgeon’s hands can control only two of them at a time. Could we give these surgeons the ability to control four tools simultaneously?

Robotic limbs are also used by people who have amputations or paralysis. That includes people in powered wheelchairs controlling a robotic arm with the chair’s joystick and those who are missing limbs controlling a prosthetic by the actions of their remaining muscles. But a truly mind-controlled prosthesis is a rarity.

If humans could easily add and control a third arm, they would likely use them in new behaviors that we can’t yet even imagine.

The pioneers in brain-controlled prosthetics are people with tetraplegia, who are often paralyzed from the neck down. Some of these people have boldly volunteered for clinical trials of brain implants that enable them to control a robotic limb by thought alone, issuing mental commands that cause a robot arm to lift a drink to their lips or help with other tasks of daily life. These systems fall under the category of brain-machine interfaces (BMI). Other volunteers have used BMI technologies to control computer cursors, enabling them to type out messages, browse the Internet, and more. But most of these BMI systems require brain surgery to insert the neural implant and include hardware that protrudes from the skull, making them suitable only for use in the lab.

Augmentation of the human body can be thought of as having three levels. The first level increases an existing characteristic, in the way that, say, a powered exoskeleton can give the wearer super strength. The second level gives a person a new degree of freedom, such as the ability to move a third arm or a sixth finger, but at a cost—if the extra appendage is controlled by a foot pedal, for example, the user sacrifices normal mobility of the foot to operate the control system. The third level of augmentation, and the least mature technologically, gives a user an extra degree of freedom without taking mobility away from any other body part. Such a system would allow people to use their bodies normally by harnessing some unused neural signals to control the robotic limb. That’s the level that we’re exploring in our research.

Deciphering electrical signals from muscles

Third-level human augmentation can be achieved with invasive BMI implants, but for everyday use, we need a noninvasive way to pick up brain commands from outside the skull. For many research groups, that means relying on tried-and-true electroencephalography (EEG) technology, which uses scalp electrodes to pick up brain signals. Our groups are working on that approach, but we are also exploring another method: using electromyography (EMG) signals produced by muscles. We’ve spent more than a decade investigating how EMG electrodes on the skin’s surface can detect electrical signals from the muscles that we can then decode to reveal the commands sent by spinal neurons.

Electrical signals are the language of the nervous system. Throughout the brain and the peripheral nerves, a neuron “fires” when a certain voltage—some tens of millivolts—builds up within the cell and causes an action potential to travel down its axon, releasing neurotransmitters at junctions, or synapses, with other neurons, and potentially triggering those neurons to fire in turn. When such electrical pulses are generated by a motor neuron in the spinal cord, they travel along an axon that reaches all the way to the target muscle, where they cross special synapses to individual muscle fibers and cause them to contract. We can record these electrical signals, which encode the user’s intentions, and use them for a variety of control purposes.

How the Neural Signals Are Decoded

A training module [orange] takes an initial batch of EMG signals read by the electrode array [left], determines how to extract signals of individual neurons, and summarizes the process mathematically as a separation matrix and other parameters. With these tools, the real-time decoding module [green] can efficiently extract individual neurons’ sequences of spikes, or “spike trains” [right], from an ongoing stream of EMG signals. Chris Philpot

Deciphering the individual neural signals based on what can be read by surface EMG, however, is not a simple task. A typical muscle receives signals from hundreds of spinal neurons. Moreover, each axon branches at the muscle and may connect with a hundred or more individual muscle fibers distributed throughout the muscle. A surface EMG electrode picks up a sampling of this cacophony of pulses.

A breakthrough in noninvasive neural interfaces came with the discovery in 2010 that the signals picked up by high-density EMG, in which tens to hundreds of electrodes are fastened to the skin, can be disentangled, providing information about the commands sent by individual motor neurons in the spine. Such information had previously been obtained only with invasive electrodes in muscles or nerves. Our high-density surface electrodes provide good sampling over multiple locations, enabling us to identify and decode the activity of a relatively large proportion of the spinal motor neurons involved in a task. And we can now do it in real time, which suggests that we can develop noninvasive BMI systems based on signals from the spinal cord.

A typical muscle receives signals from hundreds of spinal neurons.

The current version of our system consists of two parts: a training module and a real-time decoding module. To begin, with the EMG electrode grid attached to their skin, the user performs gentle muscle contractions, and we feed the recorded EMG signals into the training module. This module performs the difficult task of identifying the individual motor neuron pulses (also called spikes) that make up the EMG signals. The module analyzes how the EMG signals and the inferred neural spikes are related, which it summarizes in a set of parameters that can then be used with a much simpler mathematical prescription to translate the EMG signals into sequences of spikes from individual neurons.

With these parameters in hand, the decoding module can take new EMG signals and extract the individual motor neuron activity in real time. The training module requires a lot of computation and would be too slow to perform real-time control itself, but it usually has to be run only once each time the EMG electrode grid is fixed in place on a user. By contrast, the decoding algorithm is very efficient, with latencies as low as a few milliseconds, which bodes well for possible self-contained wearable BMI systems. We validated the accuracy of our system by comparing its results with signals obtained concurrently by two invasive EMG electrodes inserted into the user’s muscle.
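
As a rough illustration of what the real-time decoding step looks like (a simplified sketch under our own assumptions, not the group’s actual pipeline, which involves careful channel extension, whitening, and iterative source estimation during training), the Python snippet below assumes a separation matrix has already been learned and applies it to a window of high-density EMG to recover candidate spike trains.

import numpy as np
from scipy.signal import find_peaks

def decode_spikes(emg_window, W, n_delays=10, threshold=0.5):
    # emg_window: (n_channels, n_samples) high-density surface EMG
    # W: (n_neurons, n_channels * n_delays) separation matrix from the training module
    # Build the "extended" observation: each channel plus delayed copies, so the
    # convolutive mixture can be treated as an instantaneous one.
    rows = [np.roll(emg_window, d, axis=1) for d in range(n_delays)]
    extended = np.vstack(rows)                  # (n_channels * n_delays, n_samples)
    sources = W @ extended                      # estimated motor-neuron source signals
    spike_trains = []
    for s in sources:
        s = s ** 2 / (np.max(s ** 2) + 1e-12)   # emphasize pulses and normalize
        peaks, _ = find_peaks(s, height=threshold, distance=20)
        spike_trains.append(peaks)              # sample indices of detected firings
    return spike_trains

# Example with placeholder data: 64 electrodes, half a second at 2 kHz.
emg = np.random.randn(64, 1000)
W = np.random.randn(12, 64 * 10)                # stand-in for a trained separation matrix
spike_trains = decode_spikes(emg, W)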

Exploiting extra bandwidth in neural signals

Developing this real-time method to extract signals from spinal motor neurons was the key to our present work on controlling extra robotic limbs. While studying these neural signals, we noticed that they have, essentially, extra bandwidth. The low-frequency part of the signal (below about 7 hertz) is converted into muscular force, but the signal also has components at higher frequencies, such as those in the beta band at 13 to 30 Hz, which are too high to control a muscle and seem to go unused. We don’t know why the spinal neurons send these higher-frequency signals; perhaps the redundancy is a buffer in case of new conditions that require adaptation. Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.
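
To make the band structure concrete, here is a minimal sketch (the cutoffs and filter order are our assumptions, chosen only to match the bands named above) that splits an estimated neural drive into its force-related low-frequency part and its beta-band part.

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 2048.0  # sampling rate in Hz (assumed)

def split_neural_drive(drive):
    # Low-pass below ~7 Hz: the component that actually modulates muscle force.
    sos_low = butter(4, 7.0, btype="lowpass", fs=fs, output="sos")
    # Band-pass 13-30 Hz: the beta-band component with no known muscular role.
    sos_beta = butter(4, [13.0, 30.0], btype="bandpass", fs=fs, output="sos")
    # sosfiltfilt is zero-phase and offline; a real-time system would use causal filters.
    return sosfiltfilt(sos_low, drive), sosfiltfilt(sos_beta, drive)

# Example: a synthetic drive with a slow force component plus some beta activity.
t = np.arange(0.0, 5.0, 1.0 / fs)
drive = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.sin(2 * np.pi * 20.0 * t)
low_band, beta_band = split_neural_drive(drive)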

That discovery set us thinking about what could be done with the spare frequencies. In particular, we wondered if we could take that extraneous neural information and use it to control a robotic limb. But we didn’t know if people would be able to voluntarily control this part of the signal separately from the part they used to control their muscles. So we designed an experiment to find out.

Neural Control Demonstrated

A volunteer exploits unused neural bandwidth to direct the motion of a cursor on the screen in front of her. Neural signals pass from her brain, through spinal neurons, to the muscle in her shin, where they are read by an electromyography (EMG) electrode array on her leg and deciphered in real time. These signals include low-frequency components [blue] that control muscle contractions, higher frequencies [beta band, yellow] with no known biological purpose, and noise [gray]. Chris Philpot; Source: M. Bräcklein et al., Journal of Neural Engineering

In our first proof-of-concept experiment, volunteers tried to use their spare neural capacity to control computer cursors. The setup was simple, though the neural mechanism and the algorithms involved were sophisticated. Each volunteer sat in front of a screen, and we placed an EMG system on their leg, with 64 electrodes in a 4-by-10-centimeter patch stuck to their shin over the tibialis anterior muscle, which flexes the foot upward when it contracts. The tibialis has been a workhorse for our experiments: It occupies a large area close to the skin, and its muscle fibers are oriented along the leg, which together make it ideal for decoding the activity of spinal motor neurons that innervate it.

These are some results from the experiment in which low- and high-frequency neural signals, respectively, controlled horizontal and vertical motion of a computer cursor. Colored ellipses (with plus signs at centers) show the target areas. The top three diagrams show the trajectories (each one starting at the lower left) achieved for each target across three trials by one user. At bottom, dots indicate the positions achieved across many trials and users. Colored crosses mark the mean positions and the range of results for each target.Source: M. Bräcklein et al., Journal of Neural Engineering

We asked our volunteers to steadily contract the tibialis, essentially holding it tense, and throughout the experiment we looked at the variations within the extracted neural signals. We separated these signals into the low frequencies that controlled the muscle contraction and spare frequencies at about 20 Hz in the beta band, and we linked these two components respectively to the horizontal and vertical control of a cursor on a computer screen. We asked the volunteers to try to move the cursor around the screen, reaching all parts of the space, but we didn’t, and indeed couldn’t, explain to them how to do that. They had to rely on the visual feedback of the cursor’s position and let their brains figure out how to make it move.
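
A minimal sketch of that mapping (our own illustration, not the study’s code; the gains, baseline, and smoothing constant are arbitrary) might update the cursor once per control cycle from the two band-limited components, for example the low_band and beta_band signals produced by the earlier filtering sketch.

import numpy as np

def cursor_update(pos, low_chunk, beta_chunk, gain_x=4.0, gain_y=40.0,
                  baseline=0.5, alpha=0.2):
    # low_chunk: recent samples of the <7 Hz component (tracks contraction level)
    # beta_chunk: recent samples of the 13-30 Hz component (the "spare" channel)
    force_level = np.mean(low_chunk)            # slow drive -> horizontal target
    beta_power = np.mean(beta_chunk ** 2)       # beta intensity -> vertical target
    x_target = gain_x * (force_level - baseline)
    y_target = gain_y * beta_power
    # Low-pass the cursor toward the target to suppress jitter in the estimates.
    x = (1 - alpha) * pos[0] + alpha * x_target
    y = (1 - alpha) * pos[1] + alpha * y_target
    return (x, y)

# One step of a hypothetical 50 Hz control loop with placeholder data:
pos = (0.0, 0.0)
pos = cursor_update(pos, np.full(40, 0.6), 0.1 * np.random.randn(40))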

Remarkably, without knowing exactly what they were doing, these volunteers mastered the task within minutes, zipping the cursor around the screen, albeit shakily. Beginning with one neural command signal—contract the tibialis anterior muscle—they were learning to develop a second signal to control the computer cursor’s vertical motion, independently from the muscle control (which directed the cursor’s horizontal motion). We were surprised and excited by how easily they achieved this big first step toward finding a neural control channel separate from natural motor tasks. But we also saw that the control was not accurate enough for practical use. Our next step will be to see if more accurate signals can be obtained and if people can use them to control a robotic limb while also performing independent natural movements.

We are also interested in understanding more about how the brain performs feats like the cursor control. In a recent study using a variation of the cursor task, we concurrently used EEG to see what was happening in the user’s brain, particularly in the area associated with the voluntary control of movements. We were excited to discover that the changes happening to the extra beta-band neural signals arriving at the muscles were tightly related to similar changes at the brain level. As mentioned, the beta neural signals remain something of a mystery since they play no known role in controlling muscles, and it isn’t even clear where they originate. Our result suggests that our volunteers were learning to modulate brain activity that was sent down to the muscles as beta signals. This important finding is helping us unravel the potential mechanisms behind these beta signals.

Meanwhile, at Imperial College London we have set up a system for testing these new technologies with extra robotic limbs, which we call the MUlti-limb Virtual Environment, or MUVE. Among other capabilities, MUVE will enable users to work with as many as four lightweight wearable robotic arms in scenarios simulated by virtual reality. We plan to make the system open for use by other researchers worldwide.

Next steps in human augmentation

Connecting our control technology to a robotic arm or other external device is a natural next step, and we’re actively pursuing that goal. The real challenge, however, will not be attaching the hardware, but rather identifying multiple sources of control that are accurate enough to perform complex and precise actions with the robotic body parts.

We are also investigating how the technology will affect the neural processes of the people who use it. For example, what will happen after someone has six months of experience using an extra robotic arm? Would the natural plasticity of the brain enable them to adapt and gain a more intuitive kind of control? A person born with six-fingered hands can have fully developed brain regions dedicated to controlling the extra digits, leading to exceptional abilities of manipulation. Could a user of our system develop comparable dexterity over time? We’re also wondering how much cognitive load will be involved in controlling an extra limb. If people can direct such a limb only when they’re focusing intently on it in a lab setting, this technology may not be useful. However, if a user can casually employ an extra hand while doing an everyday task like making a sandwich, then that would mean the technology is suited for routine use.

Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.

Other research groups are pursuing the same neuroscience questions. Some are experimenting with control mechanisms involving either scalp-based EEG or neural implants, while others are working on muscle signals. It is early days for movement augmentation, and researchers around the world have just begun to address the most fundamental questions of this emerging field.

Two practical questions stand out: Can we achieve neural control of extra robotic limbs concurrently with natural movement, and can the system work without the user’s exclusive concentration? If the answer to either of these questions is no, we won’t have a practical technology, but we’ll still have an interesting new tool for research into the neuroscience of motor control. If the answer to both questions is yes, we may be ready to enter a new era of human augmentation. For now, our (biological) fingers are crossed.



Apptronik, a Texas-based robotics company with its roots in the Human Centered Robotics Lab at University of Texas at Austin, has spent the last few years working towards a practical, general purpose humanoid robot. By designing their robot (called Apollo) completely from the ground up, including electronics and actuators, Apptronik is hoping that they’ll be able to deliver something affordable, reliable, and broadly useful. But at the moment, the most successful robots are not generalized systems—they’re uni-taskers, robots that can do one specific task very well but more or less nothing else. A general purpose robot, especially one in a human form factor, would have enormous potential. But the challenge is enormous, too.

So why does Apptronik believe that they have the answer to general purpose humanoid robots with Apollo? To find out, we spoke with Apptronik’s founders, CEO Jeff Cardenas and CTO Nick Paine.

IEEE Spectrum: Why are you developing a general purpose robot when the most successful robots in the supply chain focus on specific tasks?

Nick Paine: It’s about our level of ambition. A specialized tool is always going to beat a general tool at one task, but if you’re trying to solve ten tasks, or 100 tasks, or 1000 tasks, it’s more logical to put your effort into a single versatile hardware platform with specialized software that solves a myriad of different problems.

How do you know that you’ve reached an inflection point where building a general purpose commercial humanoid is now realistic, when it wasn’t before?

Paine: There are a number of different things. For one, Moore’s Law has slowed down, but computers are evolving in a way that has helped advance the complexity of algorithms that can be deployed on mobile systems. Also, there are new algorithms that have been developed recently that have enabled advancements in legged locomotion, machine vision, and manipulation. And along with algorithmic improvements, there have been sensing improvements. All of this has influenced the ability to design these types of legged systems for unstructured environments.

Jeff Cardenas: I think it’s taken decades for it to be the right time. After many many iterations as a company, we’ve gotten to the point where we’ve said, “Okay, we see all the pieces to where we believe we can build a robust, capable, affordable system that can really go out and do work.” It’s still the beginning, but we’re now at an inflection point where there’s demand from the market, and we can get these out into the world.

The reason that I got into robotics is that I was sick of seeing robots just dancing all the time. I really wanted to make robots that could be useful in the world.
—Nick Paine, CTO Apptronik

Why did you need to develop and test 30 different actuators for Apollo, and how did you know that the 30th actuator was the right one?

Paine: The reason for the variety was that we take a first-principles approach to designing robotic systems. The way you control the system really impacts how you design the system, and that goes all the way down to the actuators. A certain type of actuator is not always the silver bullet: every actuator has its strengths and weaknesses, and we’ve explored that space to understand the limitations of physics to guide us toward the right solutions.

With your focus on making a system that’s affordable, how much are you relying on software to help you minimize hardware costs?

Paine: Some groups have tried masking the deficiencies of cheap, low-quality hardware with software. That’s not at all the approach we’re taking. We are leaning on our experience building these kinds of systems over the years from a first principles approach. Building from the core requirements for this type of system, we’ve found a solution that hits our performance targets while also being far more mass producible compared to anything we’ve seen in this space previously. We’re really excited about the solution that we’ve found.

How much effort are you putting into software at this stage? How will you teach Apollo to do useful things?

Paine: There are some basic applications that we need to solve for Apollo to be fundamentally useful. It needs to be able to walk around, to use its upper body and its arms to interact with the environment. Those are the core capabilities that we’re working on, and once those are at a certain level of maturity, that’s where we can open up the platform for third party application developers to build on top of that.

Cardenas: If you look at Willow Garage with the PR2, they had a similar approach, which was to build a solid hardware platform, create a powerful API, and then let others build applications on it. But then you’re really putting your destiny in the hands of other developers. One of the things that we learned from that is if you want to enable that future, you have to prove that initial utility. So what we’re doing is handling the full stack development on the initial applications, which will be targeting supply chain and logistics.

NASA officials have expressed their interest in Apptronik developing “technology and talent that will sustain us through the Artemis program and looking forward to Mars.”

“In robotics, seeing is believing. You can say whatever you want, but you really have to prove what you can do, and that’s been our focus. We want to show versus tell.”
—Jeff Cardenas, CEO Apptronik

Apptronik plans for the alpha version of Apollo to be ready in March, in time for a sneak peek for a small audience at SXSW. From there, the alpha Apollos will go through pilots as Apptronik collects feedback to develop a beta version that will begin larger deployments. The company expects these programs to lead to a full gamma version and full production runs by the end of 2024.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
RSS 2023: 10–14 July 2023, DAEGU, KOREA
ICRA 2023: 29 May–2 June 2023, LONDON
Robotics Summit & Expo: 10–11 May 2023, BOSTON

Enjoy today’s videos!

Sometimes, watching a robot almost but not quite fail is way cooler than watching it succeed.

[ Boston Dynamics ]

Simulation-based reinforcement learning approaches are leading the next innovations in legged robot control. However, the resulting control policies are still not applicable on soft and deformable terrains, especially at high speed. To this end, we introduce a versatile and computationally efficient granular media model for reinforcement learning. We applied our techniques to the Raibo robot, a dynamic quadrupedal robot developed in-house. The trained networks demonstrated high-speed locomotion capabilities on deformable terrains.

[ KAIST ]

A lonely badminton player’s best friend.

[ YouTube ]

Come along for the (autonomous) ride with Yorai Shaoul, and see what a day is like for a Ph.D. student at Carnegie Mellon University Robotics Institute.

[ AirLab ]

In this video we showcase a Husky-based robot that’s preparing for its journey across the continent to live with a family of alpacas on Formant’s farm in Denver, Colorado.

[ Clearpath ]

Arm prostheses are becoming smarter, more customized and more versatile. We’re closer to replicating everyday movements than ever before, but we’re not there yet. Can you do better? Join teams to revolutionize prosthetics and build a world without barriers.

[ Cybathlon 2024 ]

RB-VOGUI is the robot developed for this success story; it is mainly responsible for navigation and for collecting high-quality data, which is transferred in real time to the relevant personnel. After the implementation of the fleet of autonomous mobile robots, only one operator is needed to monitor the fleet from a control centre.

[ Robotnik ]

Bagging groceries isn’t only a physical task: knowing how to order the items to prevent damage requires human-like intelligence. Also … bin packing.

[ Sanctuary AI ]

Seems like lidar is everywhere nowadays, but it started at NASA back in the 1980s.

[ NASA ]

This GRASP on Robotics talk is by Frank Dellaert at Georgia Tech, on “Factor Graphs for Perception and Action.”

Factor graphs have been very successful in providing a lingua franca in which to phrase robotics perception and navigation problems. In this talk I will revisit some of those successes, also discussed in depth in a recent review article. However, I will focus on our more recent work in the talk, centered on using factor graphs for action. I will discuss our efforts in motion planning, trajectory optimization, optimal control, and model-predictive control, highlighting SCATE, our recent work on collision avoidance for autonomous spacecraft.

[ UPenn ]



There’s a handful of robotics companies currently working on what could be called general-purpose humanoid robots. That is, human-size, human-shaped robots with legs for mobility and arms for manipulation that can (or, may one day be able to) perform useful tasks in environments designed primarily for humans. The value proposition is obvious—drop-in replacement of humans for dull, dirty, or dangerous tasks. This sounds a little ominous, but the fact is that people don’t want to be doing the jobs that these robots are intended to do in the short term, and there just aren’t enough people to do these jobs as it is.

We tend to look at claims of commercializable general-purpose humanoid robots with some skepticism, because humanoids are really, really hard. They’re still really hard in a research context, which is usually where things have to get easier before anyone starts thinking about commercialization. There are certainly companies out there doing some amazing work toward practical legged systems, but at this point, “practical” is more about not falling over than it is about performance or cost effectiveness. The overall approach toward solving humanoids in this way tends to be to build something complex and expensive that does what you want, with the goal of cost reduction over time to get it to a point where it’s affordable enough to be a practical solution to a real problem.

Apptronik, based in Austin, Texas, is the latest company to attempt to figure out how to make a practical general-purpose robot. Its approach is to focus on things like cost and reliability from the start, developing (for example) its own actuators from scratch in a way that it can be sure will be cost effective and supply-chain friendly. Apptronik’s goal is to develop a platform that costs well under US $100,000, of which it hopes to deliver a million units by 2030, although the plan is to demonstrate a prototype early this year. Based on what we’ve seen of commercial humanoid robots recently, this seems like a huge challenge. And in part two of this story (to be posted tomorrow), we will be talking in depth to Apptronik’s cofounders to learn more about how they’re going to make general-purpose humanoids happen.

First, though, some company history. Apptronik spun out from the Human Centered Robotics Lab at the University of Texas at Austin in 2016, but the company traces its robotics history back a little farther, to 2015’s DARPA Robotics Challenge. Apptronik’s CTO and cofounder, Nick Paine, was on the NASA-JSC Valkyrie DRC team, and Apptronik’s first contract was to work on next-gen actuation and controls for NASA. Since then, the company has been working on robotics projects for a variety of large companies. In particular, Apptronik developed Astra, a humanoid upper body for dexterous bimanual manipulation that’s currently being tested for supply-chain use.

But Apptronik has by no means abandoned its NASA roots. In 2019, NASA had plans for what was essentially going to be a Valkyrie 2, a ground-up redesign of the Valkyrie platform. As with many of the coolest NASA projects, the potential new humanoid didn’t survive budget prioritization for very long. Even at the time, though, it wasn’t clear to us why NASA wanted to build its own humanoid rather than asking someone else to build one for it, considering how much progress we’ve seen with humanoid robots over the last decade. Ultimately, NASA decided to move forward with more of a partnership model, which is where Apptronik fits in—a partnership between Apptronik and NASA will help accelerate commercialization of Apollo.

“We recognize that Apptronik is building a production robot that’s designed for terrestrial use,” says NASA’s Shaun Azimi, who leads the Dexterous Robotics Team at NASA’s Johnson Space Center. “From NASA’s perspective, what we’re aiming to do with this partnership is to encourage the development of technology and talent that will sustain us through the Artemis program and looking forward to Mars.”

Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile system. It is imagining an “iPhone of robots.”

“Apollo is the robot that we always wanted to build,” says Jeff Cardenas, Apptronik cofounder and CEO. This new humanoid is the culmination of an astonishing amount of R&D, all the way down to the actuator level. “As a company, we’ve built more than 30 unique electric actuators,” Cardenas explains. “You name it, we’ve tried it. Liquid cooling, cable driven, series elastic, parallel elastic, quasi-direct drive…. And we’ve now honed our approach and are applying it to commercial humanoids.”

Apptronik’s emphasis on commercialization gives it a much different perspective on robotics development than you get when focusing on pure research the way that NASA does. To build a commercial product rather than a handful of totally cool but extremely complex bespoke humanoids, you need to consider things like minimizing part count, maximizing maintainability and robustness, and keeping the overall cost manageable. “Our starting point was figuring out what the minimum viable humanoid robot looked like,” explains Apptronik CTO Nick Paine. “Iteration is then necessary to add complexity as needed to solve particular problems.”

This robot is called Astra. It’s Apptronik’s first product, and because it’s only an upper body, with no legs, it’s designed for manipulation rather than dynamic locomotion. Astra is force controlled, with series-elastic torque-controlled actuators, giving it the compliance necessary to work in dynamic environments (and particularly around humans). “Astra is pretty unique,” says Paine. “What we were trying to do with the system is to approach and achieve human-level capability in terms of manipulation workspace and payload. This robot taught us a lot about manipulation and actually doing useful work in the world, so that’s why it’s where we wanted to start.”

While Astra is currently out in the world doing pilot projects with clients (mostly in the logistics space), internally Apptronik has moved on to robots with legs. The following video, which Apptronik is sharing publicly for the first time, shows a robot that the company is calling its Quick Development Humanoid, or QDH:


QDH builds on Astra by adding legs, along with a few extra degrees of freedom in the upper body to help with mobility and balance while simplifying the upper body for more basic manipulation capability. It uses only three different types of actuators, and everything (from structure to actuators to electronics to software) has been designed and built by Apptronik. “With QDH, we’re approaching minimum viable product from a usefulness standpoint,” says Paine, “and this is really what’s driving our development, both in software and hardware.”

“What people have done in humanoid robotics is to basically take the same sort of architectures that have been used in industrial robotics and apply those to building what is in essence a multi-degree-of-freedom industrial robot,” adds Cardenas. “We’re thinking of new ways to build these systems, leveraging mass manufacturing techniques to allow us to develop a high-degree-of-freedom robot that’s as affordable as many industrial robots that are out there today.”

Cardenas explains that a major driver for the cost of humanoid robots is the number of different parts, the precision machining of some specific parts, and the resulting time and effort it then takes to put these robots together. As an internal-controls test bed, QDH has helped Apptronik to explore how it can switch to less complex parts and lower the total part count. The plan for Apollo is to not use any high-precision or proprietary components at all, which mitigates many supply-chain issues and will help Apptronik reach its target price point for the robot.

Apollo will be a completely new robot, based around the lessons Apptronik has learned from QDH. It’ll be average human size: about 1.75 meters tall, weighing around 75 kilograms, with the ability to lift 25 kg. It’s designed to operate untethered, either indoors or outdoors. Broadly, Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile robot that can do a bunch of different things. It is imagining an “iPhone of robots,” where apps can be created for the robot to perform specific tasks. To extend the iPhone metaphor, Apptronik itself will make sure that Apollo can do all of the basics (such as locomotion and manipulation) so that it has fundamental value, but the company sees versatility as the way to get to large-scale deployments and the cost savings that come with them.

“I see the Apollo robot as a spiritual successor to Valkyrie. It’s not Valkyrie 2—Apollo is its own platform, but we’re working with Apptronik to adapt it as much as we can to space use cases.”
—Shaun Azimi, NASA Johnson Space Center

The challenge with this app approach is that there’s a critical mass that’s required to get it to work—after all, the primary motivation to develop an iPhone app is that there are a bajillion iPhones out there already. Apptronik is hoping that there are enough basic manipulation tasks in the supply-chain space that Apollo can leverage to scale to that critical-mass point. “This is a huge opportunity where the tasks that you need a robot to do are pretty straightforward,” Cardenas tells us. “Picking single items, moving things with two hands, and other manipulation tasks where industrial automation only gets you to a certain point. These companies have a huge labor challenge—they’re missing labor across every part of their business.”

While Apptronik’s goal is for Apollo to be autonomous, in the short to medium term, its approach will be hybrid autonomy, with a human overseeing first a few and eventually a lot of Apollos with the ability to step in and provide direct guidance through teleoperation when necessary. “That’s really where there’s a lot of business opportunity,” says Paine. Cardenas agrees. “I came into this thinking that we’d need to make Rosie the robot before we could have a successful commercial product. But I think the bar is much lower than that. There are fairly simple tasks that we can enter the market with, and then as we mature our controls and software, we can graduate to more complicated tasks.”

Apptronik is still keeping details about Apollo’s design under wraps, for now. We were shown renderings of the robot, but Apptronik is understandably hesitant to make those public, since the design of the robot may change. It does have a firm date for unveiling Apollo for the first time: SXSW, which takes place in Austin in March.



With Boston Dynamics’ recent(ish) emphasis on making robots that can do things that are commercially useful, it’s always good to be gently reminded that the company is still at the cutting edge of dynamic humanoid robotics. Or in this case, forcefully reminded. In its latest video, Boston Dynamics demonstrates some spectacular new capabilities with Atlas focusing on perception and manipulation, and the Atlas team lead answers some of our questions about how they pulled it off.

One of the highlights here is Atlas’s ability to move and interact dynamically with objects, and especially with objects that have significant mass to them. The 180 while holding the plank is impressive, since Atlas has to account for all that added momentum. Same with the spinning bag toss: As soon as the robot releases the bag in midair, its momentum changes, which it has to compensate for on landing. And shoving that box over has to be done by leaning into it, but carefully, so that Atlas doesn’t topple off the platform after it.

While the physical capabilities that Atlas demonstrates here are impressive (to put it mildly), this demonstration also highlights just how much work remains to be done to teach robots to be useful like this in an autonomous, or even a semi-autonomous, way. For example, environmental modification is something that humans do all the time, but we rely heavily on our knowledge of the world to do it effectively. I’m pretty sure that Atlas doesn’t have the capability to see a nontraversable gap, consider what kind of modification would be required to render the gap traversable, locate the necessary resources (without being told where they are first), and then make the appropriate modification autonomously in the way a human would—the video shows advances in manipulation rather than decision making. This certainly isn’t a criticism of what Boston Dynamics is showing in this video; it’s just to emphasize there is still a lot of work to be done on the world understanding and reasoning side before robots will be able to leverage these impressive physical skills on their own in a productive way.

There’s a lot more going on in this video, and Boston Dynamics has helpfully put together a bit of a behind-the-scenes explainer:

And for a bit more on this, we sent a couple of questions over to Boston Dynamics, and Atlas Team Lead Scott Kuindersma was kind enough to answer them for us.

How much does Atlas know in advance about the objects that it will be manipulating, and how important is this knowledge for real-world manipulation?

Scott Kuindersma: In this video, the robot has a high-level map that includes where we want it to go, what we want it to pick up, and what stunts it should do along the way. This map is not an exact geometric match for the real environment; it is an approximate description containing obstacle templates and annotated actions that is adapted online by the robot’s perception system. The robot has object-relative grasp targets that were computed offline, and the model-predictive controller (MPC) has access to approximate mass properties.

We think that real-world robots will similarly leverage priors about their tasks and environments, but what form these priors take and how much information they provide could vary a lot based on the application. The requirements for a video like this lead naturally to one set of choices—and maybe some of those requirements will align with some early commercial applications—but we’re also building capabilities that allow Atlas to operate at other points on this spectrum.
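
As one way to picture what such priors could look like in software (a hypothetical sketch of our own, not Boston Dynamics’ actual representation), an approximate task map might bundle rough waypoints, obstacle templates, annotated actions, and object-relative grasp targets like this:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GraspTarget:
    # A grasp pose expressed relative to the object frame, so it stays valid
    # when the perception system refines the object's pose online.
    position_in_object: tuple        # (x, y, z) in meters
    approach_axis: tuple             # unit vector in the object frame

@dataclass
class ObstacleTemplate:
    name: str
    nominal_pose: tuple              # rough world pose; perception adapts it online
    footprint: tuple                 # coarse bounding box (length, width, height)

@dataclass
class AnnotatedAction:
    kind: str                        # e.g., "pick", "throw", "push", "jump"
    target_object: str
    grasp: Optional[GraspTarget] = None

@dataclass
class TaskMap:
    waypoints: list = field(default_factory=list)   # approximate route to follow
    obstacles: list = field(default_factory=list)   # ObstacleTemplate entries
    actions: list = field(default_factory=list)     # AnnotatedAction sequence

tool_bag_grasp = GraspTarget(position_in_object=(0.0, 0.0, 0.15),
                             approach_axis=(0.0, 0.0, -1.0))
plan = TaskMap(waypoints=[(0.0, 0.0), (3.0, 0.5), (4.5, 2.0)],
               obstacles=[ObstacleTemplate("scaffold_gap", (3.5, 0.0, 0.0), (1.0, 0.6, 0.0))],
               actions=[AnnotatedAction("pick", "tool_bag", grasp=tool_bag_grasp),
                        AnnotatedAction("throw", "tool_bag")])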

How often is what you want to do with Atlas constrained by its hardware capabilities? At this point, how much of a difference does improving hardware make, relative to improving software?

Kuindersma: Not frequently. When we occasionally spend time on something like the inverted 540, we are intentionally pushing boundaries and coming at it from a place of playful exploration. Aside from being really fun for us and (hopefully) inspiring to others, these activities nearly always bear enduring fruit and leave us with more capable software for approaching other problems.

The tight integration between our hardware and software groups—and our ability to design, iterate, and learn from each other—is one of the things that makes our team special. This occasionally leads to behavior-enabling hardware upgrades and, less often, major redesigns. But from a software perspective, we continuously feel like we’re just scratching the surface on what we can do with Atlas.

Can you elaborate on the troubleshooting process you used to make sure that Atlas could successfully execute that final trick?

Kuindersma: The controller works by using a model of the robot to predict and optimize its future states. The improvement made in this case was an extension to this model to include the geometric shape of the robot’s limbs and constraints to prevent them from intersecting. In other words, rather than specifically tuning this one behavior to avoid self-collisions, we added more model detail to the controller to allow it to better avoid infeasible configurations. This way, the benefits carry forward to all of Atlas’s behaviors.
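
To illustrate the flavor of such a constraint (a generic sketch under our own assumptions, not the Atlas controller), one can approximate each limb as a capsule and require that every pair of capsules stays separated by a margin at each step of the predicted trajectory.

import numpy as np

def min_segment_distance(p0, p1, q0, q1, n=32):
    # Approximate the minimum distance between two line segments by sampling
    # points along each one; exact closed-form solutions exist but are longer.
    t = np.linspace(0.0, 1.0, n)
    a = p0[None, :] + t[:, None] * (p1 - p0)
    b = q0[None, :] + t[:, None] * (q1 - q0)
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1).min()

def self_collision_margin(capsule_a, capsule_b, margin=0.02):
    # Constraint residual for one limb pair: positive when the capsules are clear.
    # An MPC would require this to stay >= 0 over the whole prediction horizon.
    (p0, p1, ra), (q0, q1, rb) = capsule_a, capsule_b
    return min_segment_distance(p0, p1, q0, q1) - (ra + rb + margin)

# Example: forearm versus thigh capsules at one predicted configuration.
forearm = (np.array([0.20, 0.10, 1.10]), np.array([0.45, 0.10, 1.00]), 0.05)
thigh = (np.array([0.15, 0.10, 0.90]), np.array([0.20, 0.10, 0.50]), 0.08)
print(self_collision_margin(forearm, thigh))    # a negative value would mean "too close"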

Is the little hop at the end of the 540 part of the planned sequence, or is Atlas able to autonomously use motions like that to recover from dynamic behaviors that don’t end up exactly as expected? How important will this kind of capability be for real-world robots?

Kuindersma: The robot has the ability to autonomously take steps, lean, and/or wave its limbs around to recover balance, which we leverage on pretty much a daily basis in our experimental work. The hop jump after the inverted 540 was part of the behavior sequence in the sense that it was told that it should jump after landing, but where it jumped to and how it landed came from the controller (and generally varied between individual robots and runs).

Our experience with deploying Spot all over the world has reinforced the importance for mobile robots to be able to adjust and recover if they get bumped, slip, fall, or encounter unexpected obstacles. We expect the same will be true for future robots doing work in the real world.

What else can you share with us about what went into making the video?

Kuindersma: A few fun facts:

The core new technologies around MPC and manipulation were developed throughout this year, but the time between our whiteboard sketch for the video and completing filming was six weeks.

The tool bag throw and spin jump with the 2- by 12-inch plank are online generalizations of the same 180 jump behavior that was created two years ago as part of our mobility work. The only differences in the controller inputs are the object model and the desired object motion.

Although the robot has a good understanding of throwing mechanics, the real-world performance was sensitive to the precise timing of the release and whether the bag cloth happened to get caught on the finger during release. These details weren’t well represented by our simulation tools, so we relied primarily on hardware experiments to refine the behavior until it worked every time.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
RSS 2023: 10–14 July 2023, DAEGU, KOREA
ICRA 2023: 29 May–2 June 2023, LONDON
Robotics Summit & Expo: 10–11 May 2023, BOSTON

Enjoy today’s videos!

With the historic Kunming-Montreal Agreement of 18 December 2022, more than 200 countries agreed to halt and reverse biodiversity loss. But becoming nature-positive is an ambitious goal, also held back by the lack of efficient and accurate tools to capture snapshots of global biodiversity. This is a task where robots, in combination with environmental DNA (eDNA) technologies, can make a difference.

Our recent findings show a new way to sample surface eDNA with a drone, which could be helpful in monitoring biodiversity in terrestrial ecosystems. The eDrone can land on branches and collect eDNA from the bark using a sticky surface. The eDrone collected surface eDNA from the bark of seven different trees, and by sequencing the collected eDNA we were able to identify 21 taxa, including insects, mammals, and birds.

[ ETH Zurich ]

Thanks, Stefano!

How can we bring limbed robots into real-world environments to complete challenging tasks? Dr. Dimitrios Kanoulas and the team at UCL Computer Science’s Robot Perception and Learning Lab are exploring how we can use autonomous and semi-autonomous robots to work in environments that humans cannot.

[ RPL UCL ]

Thanks, Dimitrios!

Bidirectional design, four-wheel steering, and a compact length give our robotaxi unique agility and freedom of movement in dense urban environments—or in games of tic-tac-toe. May the best robot win.

Okay, but how did they not end this video with one of the cars drawing a “Z” off to the left side of the middle row?

[ Zoox ]

Thanks, Whitney!

DEEP Robotics wishes y’all happy, good health in the year of the rabbit!

Binkies!

[ Deep Robotics ]

This work presents a safety-critical locomotion-control framework for quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate in cluttered environments.

[ Hybrid Robotics ]

At 360.50 kilometers per hour, this is the world speed record for a quadrotor.

[ Quad Star Drones ] via [ Gizmodo ]

When it rains, it pours—and we’re designing the Waymo Driver to handle it. See how shower tests, thermal chambers, and rugged tracks at our closed-course facilities ensure our system can navigate safely, no matter the forecast.

[ Waymo ]

You know what’s easier than picking blueberries? Picking greenberries, which are much less squishy.

[ Sanctuary AI ]

The Official Wrap-Up of ABU ROBOCON 2022 New Delhi, India.

[ ROBOCON ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

I don’t know what your robots do at night, but at Fraunhofer, this is what they get up to.

[ Fraunhofer IPA ]

This choreorobotics dance is part atavistic ceremony, part celestial conjuring, and part ecstatic romping. It features three human dancers and two Boston Dynamics Spot robots with original music, choreography, and video. It was the first robot-human dance performed at any Smithsonian building in its history and premiered on July 6th, 2022. This work was created as the culmination of Catie Cuan’s Futurist-in-Residence appointment at the Smithsonian Arts and Industries Building.

[ Catie Cuan ]

Several soft-bodied crawling animals in nature such as inchworms, caterpillars, etc., have remarkable locomotion abilities for complex navigation across a variety of substrates.... We have developed a bio-inspired soft robotic model (driven by only a single source of pressure) that unveils the fundamental aspects of frictional anisotropic locomotion in crawling animals. This breakthrough is interesting from an animal biomechanics point of view and crucial for the development of inspection and exploration robots.

A paper on this work, titled “Frictional Anisotropic Locomotion and Adaptive Neural Control for a Soft Crawling Robot,” has been published in Soft Robotics.

[ VISTEC ]

Thanks, Poramate!

Quadrotors are deployed to more and more applications nowadays. Yet quadrotors’ flight performance is subject to various uncertainties and disturbances, e.g., ground effect, slosh payload, damaged propeller, downwash, and sudden weight change, just to name a few. The researchers from the Advanced Controls Research Laboratory at UIUC bring up L1Quad: an L1 adaptive augmentation for compensating for the uncertainties and disturbances experienced by the quadrotor. The video below shows the superior performance of L1Quad in various challenging scenarios without retuning the controller parameters case by case.

[ Illinois ]

Thanks, Sheng!

These robots can handle my muffins anytime.

[ Fanuc ]

This is maybe the most specific gripper I’ve ever seen.

[ PRISMA Lab ]

A little weird that this video from MIT is titled “Behind MIT’s Robot Dog” while featuring a Unitree robot dog rather than a Mini Cheetah.

[ MIT CSAIL ]

When you spend years training a system for the full gamut of driving scenarios, unexpected situations become mere possibilities. See how we consistently put the Waymo Driver to the test in our closed-course facilities, ensuring we’ve built a Driver that’s ready for anything.

[ Waymo ]

Robots attend valves
Opening and closing with grace
Steady and precise

[ Sanctuary AI ]

REInvest Robotics in conversation with Brian Gerkey, cofounder and now former CEO of Open Robotics, on his wishlist for robotics.

[ REInvest Robotics ]

This Stanford Seminar is from Aaron Edsinger of Hello Robot, on humanizing robot design.

We are at the beginning of a transformation where robots and humans cohabitate and collaborate in everyday life. From caring for older adults to supporting workers in service industries, collaborative robots hold incredible potential to improve the quality of life for millions of people. These robots need to be safe, intuitive, and simple to use. They need to be affordable enough to allow widespread access and adoption. Ultimately, acceptance of these robots in society will require that the human experience is at the center of their design. In this presentation I will highlight some of my work to humanize robot design over the last two decades. This work includes compliant and safe actuation for humanoids, low-cost collaborative robot arms, and assistive mobile manipulators. Our recent work at Hello Robot has been to commercialize a mobile manipulator named Stretch that can assist older adults and people with disabilities. I’ll detail the human-centered research and development process behind Stretch and present recent work to allow an individual with quadriplegia to control Stretch for everyday tasks. Finally I’ll highlight some of the results by the growing community of researchers working with Stretch.

[ Hello Robot ] via [ Stanford ]



Three days before astronauts left on Apollo 8, the first-ever flight around the moon, NASA’s safety chief, Jerome Lederer, gave a speech that was at once reassuring and chilling. Yes, he said, America’s moon program was safe and well-planned—but even so, “Apollo 8 has 5,600,000 parts and one and one half million systems, subsystems, and assemblies. Even if all functioned with 99.9 percent reliability, we could expect 5,600 defects.”

The mission, in December 1968, was nearly flawless—a prelude to the Apollo 11 landing the next summer. But even today, half a century later, engineers wrestle with the sheer complexity of the machines they build to go to space. NASA’s Artemis I, its Space Launch System rocket mandated by Congress in 2010, endured a host of delays before it finally launched in November 2022. And Elon Musk’s SpaceX may be lauded for its engineering acumen, but it struggled for six years before its first successful flight into orbit.

Relativity envisions 3D printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there.

Is there a better way? An upstart company called Relativity Space is about to try one. Its Terran 1 rocket, it says, has about a tenth as many parts as comparable launch vehicles, because it is made through 3D printing. Instead of bending metal and milling and welding, engineers program a robot to deposit layers of metal alloy in place.

Relativity’s first rocket, the company says, is ready to go from Launch Complex 16 at Cape Canaveral in Florida. When it happens, possibly later this month, the company says it will stream the liftoff on YouTube.

Artist’s concept of Relativity’s planned Terran R rocket. The company says it should be able to carry a 20,000 kg payload into low Earth orbit. Image: Relativity

“Over 85 percent of the rocket by mass is 3D printed,” said Scott Van Vliet, Relativity’s head of software engineering. “And what’s really cool is not only are we reducing the amount of parts and labor that go into building one of these vehicles over time, but we’re also reducing the complexity, we’re reducing the chance of failure when you reduce the part count, and you streamline the build process.”

Relativity says it can put together a Terran rocket in two months, compared to two years for some conventionally built ones. The speed and cost of making a prototype—say, for wind-tunnel testing—are reduced because you can simply tell the printer to make a scaled-down model. There is less waste because the process is additive. And if something needs to be modified, you reprogram the 3D printer instead of going through slow, expensive retooling.

Investors have noticed. The company says financial backers have included BlackRock, Y Combinator and the entrepreneur Mark Cuban.

“If you walk into any rocket factory today other than ours,” said Josh Brost, the company’s head of business development, “you still will see hundreds of thousands of parts coming from thousands of vendors, and still being assembled using lots of touch labor and lots of big-fix tools.”

Terran 1 Nose Cone Timelapse: Check out this timelapse of our nose cone build for Terran 1. This milestone marks the first time we’ve created this unique shape ...

Terran 1, rated as capable of putting a 1,250 kg payload in low Earth orbit, is mainly intended as a test bed. Relativity has signed up a variety of future customers for satellite launches, but the first Terran 1 (“Terran” is a word for earthling) will not carry a paying customer’s satellite. The first flight has been given the playful name “Good Luck, Have Fun”—GLHF for short. Eventually, if things are going well, Relativity will build larger boosters, called Terran R, which, it hopes, will compete with the SpaceX Falcon 9 for launches of up to 20,000 kg. Relativity says the Terran R should be fully reusable, including the upper stage—something that other commercial launch companies have not accomplished. In current renderings, the rocket is, as the company puts it, “inspired by nature,” shaped to slice through the atmosphere as it ascends and comes back for recovery.

A number of Relativity’s top people came from Musk’s SpaceX or Jeff Bezos’ space company, Blue Origin, and, like Musk, they say their vision is a permanent presence on Mars. Brost calls it “the long-term North Star for us.” They say they can envision 3D printing facilities someday on the Martian surface, fabricating much of what people from Earth would need to live there. “For that to happen,” says Brost, “you need to have manufacturing capabilities that are autonomous and incredibly flexible.”

Relativity’s fourth-generation Stargate 3D printer. Image: Relativity

Just how Relativity will do all these things is a work in progress. It says its 3D technology will help it work iteratively—finding mistakes as it goes, then correcting them as it prints the next rocket, and the next, and so on.

“In traditional manufacturing, you have to do a ton of work up front and have a lot of the design features done well ahead of time,” says Van Vliet. “You have to invest in fixed tooling that can often take years to build before you’ve actually developed an article for your launch vehicle. With 3D printing, additive manufacturing, we get to building something very, very quickly.”

The next step is to get the first rocket off the pad. Will it succeed? Brost says a key test will be getting through max q—the point of maximum dynamic pressure on the rocket as it accelerates through the atmosphere before the air around it thins out.

“If you look at history, at new space companies doing large rockets, there’s not a single one that’s done their first rocket on their first try. It would be quite an achievement if we were able to achieve orbit on our inaugural launch,” says Brost.

“I’ve been to many launches in my career,” he says, “and it never gets less exciting or nerve wracking to me.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

Meet Dog-E, the One in a Million Robot Dog!

Uncrate this pup to reveal a unique combination of colorful lights, sounds, and personality traits, so no two Dog-Es are ever the same! Unique movements, personality, and sounds bring this robot dog to life, and Dog-E’s personality influences how it behaves and responds to you with over 200 sounds and reactions. Dog-E talks with its tail, using persistence of vision (POV) technology to communicate with you. Train your Dog-E to learn your name and do tricks, track its needs, or even toss it a treat! Multiple people can mint, save, and load unique profiles with the app, so Dog-E is a robot dog for the whole family!

[ WowWee ]

The average human spends 26 years sleeping and 30 years working. That leaves just 1–2 hours in a day to truly connect with family, elders, or pets—if we’re lucky. With all that time apart and no one there to supervise, there can be a lot of concern about the health and safety of our loved ones. This is why we created EBO X—not just for you but for ourselves as well.

[ Ebo X ]

Labrador Systems is at CES this week, demonstrating its Retriever robot, now with Amazon Echo integration.

[ Labrador ]

With a wrap-up of the main events that marked 2022 for us, the RSL team wishes you a happy and eventful 2023.

[ RSL ]

What if you could walk way faster without trying any harder? Moonwalkers basically put an electric moving sidewalk right under your feet. WIRED’s Brent Rose has some questions: Are they real? Are they safe? Are they actually any good? Brent goes inside Shift Robotics’ research and development lab to get some answers.

[ Wired ]

How Wing designs its delivery drones.

[ Wing ]

Breaking news: Robot passes mirror test.

[ Sanctuary AI ]

The Guardian XM intelligent manipulator offers speed, dexterity, precision, and strength in a compact, lightweight package. With six degrees of freedom, an optimized strength-to-weight ratio, embedded intelligence, and a sleek hardware design that can withstand extreme temperatures and environmental conditions (IP66), the Guardian robotic arm can be used for a variety of complex outdoor and indoor applications.

[ Sarcos ]

A custom, closed-course testing fortress and an urban, high-speed proving ground? Yeah, you could say we take our structured testing seriously. Experience how we put the Waymo Driver to the test at each of our state-of-the-art facilities.

[ Waymo ]

Skydio, the leading American drone manufacturer, believes the responsible use of drones is the core of any public safety mission and we bake responsible engagement into our DNA. We developed the Skydio Engagement and Responsible Use Principles—a groundbreaking set of policy and ethical principles to guide our work and drive the industry forward. We also partnered with DRONERESPONDERS—the leading association focused on first-responder drone programs—to develop the “Five C’s” of responsible drone use by public-safety agencies.

Of course, Skydio’s drones are a lot of fun for nonemergencies, too:

[ Skydio ]



When we hear about manipulation robots in warehouses, it’s almost always in the context of picking. That is, about grasping a single item from a bin of items and then dropping that item into a different bin, where it may go toward building a customer order. Picking a single item from a jumble of items can be tricky for robots (especially when the number of different items may be in the millions). While the problem is certainly not solved, robots working in a well-structured and optimized environment are nevertheless getting pretty good at this kind of thing.

Amazon has been on a path toward the kind of robots that can pick items since at least 2015, when the company sponsored the Amazon Picking Challenge at ICRA. And just a month ago, Amazon introduced Sparrow, which it describes as “the first robotic system in our warehouses that can detect, select, and handle individual products in our inventory.” What’s important to understand about Sparrow, however, is that like most practical and effective industrial robots, the system surrounding it is doing a lot of heavy lifting—Sparrow is being presented with very robot-friendly bins that make its job far easier than it would be otherwise. This is not unique to Amazon, and in highly automated warehouses with robotic picking systems it’s typical to see bins that either include only identical items or have just a few different items to help the picking robot be successful.

Doing the picking task in reverse is called stowing, and it’s the way that items get into Amazon’s warehouse workflow in the first place.

But robot-friendly bins are simply not the reality for the vast majority of items in an Amazon warehouse, and a big part of the reason for this is (as per usual) humans making an absolute mess of things, in this case when they stow products into bins in the first place. Sidd Srinivasa, the director of Amazon Robotics AI, described the problem of stowing items as “a nightmare.... Stow fundamentally breaks all existing industrial robotic thinking.” But over the past few years, Amazon Robotics researchers have put some serious work into solving it.

First, it’s important to understand the difference between the robot-friendly workflows that we typically see with bin-picking robots, and the way that most Amazon warehouses are actually run. That is, with humans doing most of the complex manipulation.

You may already be familiar with Amazon’s drive units—the mobile robots with shelves on top (called pods) that autonomously drive themselves past humans who pick items off of the shelves to build up orders for customers. This is (obviously) the picking task, but doing the same task in reverse is called stowing, and it’s the way that items get into Amazon’s warehouse workflow in the first place. It turns out that humans who stow things on Amazon’s mobile shelves do so in what is essentially a random way in order to maximize space most efficiently. This sounds counterintuitive, but it actually makes a lot of sense.

When an Amazon warehouse gets a new shipment of stuff, let’s say Extremely Very Awesome Nuggets (EVANs), the obvious thing to do might be to call up a pod with enough empty shelves to stow all of the EVANs in at once. That way, when someone places an order for an EVAN, the pod full of EVANs shows up, and a human can pick an EVAN off one of the shelves. The problem with this method, however, is that if the pod full of EVANs gets stuck or breaks or is otherwise inaccessible, then nobody can get their EVANs, slowing the entire system down (demand for EVANs being very, very high). Amazon’s strategy is to instead distribute EVANs across multiple pods, so that some EVANs are always available.
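
As a rough illustration of the reasoning above, here is a minimal Python sketch (with made-up pod and item names and a simple round-robin spread; this is not Amazon’s inventory code) showing that a shipment distributed across several pods remains mostly pickable even when one pod becomes inaccessible.

```python
import random

# Hypothetical illustration of distributed stow: spreading one shipment of
# identical items ("EVANs") across several pods so that no single stuck pod
# makes the item unavailable. Not Amazon's actual inventory logic.

def distribute(items, pods):
    """Assign each item to a pod, spreading the shipment across all pods."""
    assignment = {pod: [] for pod in pods}
    for i, item in enumerate(items):
        pod = pods[i % len(pods)]          # simple round-robin spread
        assignment[pod].append(item)
    return assignment

def available(assignment, unavailable_pods):
    """Items still reachable when some pods are stuck or broken."""
    return [item
            for pod, items in assignment.items()
            if pod not in unavailable_pods
            for item in items]

if __name__ == "__main__":
    evans = [f"EVAN-{i:03d}" for i in range(12)]
    pods = ["pod-A", "pod-B", "pod-C", "pod-D"]

    spread = distribute(evans, pods)
    stuck = {random.choice(pods)}          # one pod becomes inaccessible
    print(f"{stuck} is stuck; {len(available(spread, stuck))} of "
          f"{len(evans)} EVANs are still pickable")
```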

The process for this distributed stow is random in the sense that a human stower might get a couple of EVANs to put into whatever pod shows up next. Each pod has an array of shelves, some of which are empty. It’s up to the human to decide where the EVANs best fit, and Amazon doesn’t really care, as long as the human tells the inventory system where the EVANs ended up. Here’s what this process looks like:

Two things are immediately obvious from this video: First, the way that Amazon products are stowed at automated warehouses like this one is entirely incompatible with most current bin-picking robots. Second, it’s easy to see why this kind of stowing is “a nightmare” for robots. As if the need to carefully manipulate a jumble of objects to make room in a bin wasn’t a hard enough problem, you also have to deal with those elastic bins that get in the way of both manipulation and visualization, and you have to be able to grasp and manipulate the item that you’re trying to stow. Oof.

“For me, it’s hard, but it’s not too hard—it’s on the cutting edge of what’s feasible for robots,” says Aaron Parness, senior manager of applied science at Amazon Robotics & AI. “It’s crazy fun to work on.” Parness came to Amazon from Stanford and JPL, where he worked on robots like StickyBot and LEMUR and was responsible for this bonkers microspine gripper designed to grasp asteroids in microgravity. “Having robots that can interact in high-clutter and high-contact environments is superexciting because I think it unlocks a wave of applications,” continues Parness. “This is exactly why I came to Amazon; to work on that kind of a problem and try to scale it.”

What makes stowing at Amazon both cutting edge and nightmarish for robots is that it’s a task that has been highly optimized for humans. Amazon has invested heavily in human optimization, and (at least for now) the company is very reliant on humans. This means that any robotic solution that would have a significant impact on the human-centered workflow is probably not going to get very far. So Parness, along with Senior Applied Scientist Parker Owan, had to develop hardware and software that could solve the problem as is. Here’s what they came up with:

On the hardware side, there’s a hook system that lifts the elastic bands out of the way to provide access to each bin. But that’s the easy part; the hard part is embodied in the end-of-arm tool (EOAT), which consists of two long paddles that can gently squeeze an item to pick it up, with conveyor belts on their inner surfaces to shoot the item into the bin. An extendable thin metal spatula of sorts can go into the bin before the paddles and shift items around to make room when necessary.

To use all of this hardware requires some very complex software, since the system needs to be able to perceive the items in the bin (which may be occluding each other and also behind the elastic bands), estimate the characteristics of each item, consider ways in which those items could be safely shoved around to maximize available bin space based on the object to be stowed, and then execute the right motions to make all of that happen. By identifying and then chaining together a series of motion primitives, the Amazon researchers have been able to achieve stowing success rates (in the lab) of better than 90 percent.
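
To make the idea of chaining motion primitives slightly more concrete, here is a toy Python sketch. The primitive names (sweep, wedge, eject) and the crude width bookkeeping are hypothetical stand-ins; in the real system each primitive is selected and parameterized from perception and learned predictions of how much space an action will create.

```python
from dataclasses import dataclass

# Toy sketch of chaining motion primitives to make room in a bin and then
# stow an item. Primitive names and the fixed space gains are hypothetical.

@dataclass
class BinState:
    free_width_cm: float      # estimated usable gap in the bin
    item_width_cm: float      # width of the item to be stowed

def sweep(state: BinState) -> BinState:
    # push existing items sideways with the spatula to widen the gap
    return BinState(state.free_width_cm + 4.0, state.item_width_cm)

def wedge(state: BinState) -> BinState:
    # tilt a neighboring item upward to free a little more space
    return BinState(state.free_width_cm + 2.0, state.item_width_cm)

def eject(state: BinState) -> BinState:
    # drive the paddle conveyor belts to shoot the item into the gap
    return state

def plan_primitives(state: BinState):
    """Greedily chain primitives until the item fits, then eject it."""
    chain = []
    for primitive in (sweep, wedge):
        if state.free_width_cm >= state.item_width_cm:
            break
        state = primitive(state)
        chain.append(primitive.__name__)
    if state.free_width_cm >= state.item_width_cm:
        chain.append(eject.__name__)
        return chain
    return None  # give up and pass the item to a human stower

print(plan_primitives(BinState(free_width_cm=5.0, item_width_cm=10.0)))
# -> ['sweep', 'wedge', 'eject']
```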

After years of work, the system is functioning well enough that prototypes are stowing actual inventory items at an Amazon fulfillment center in Washington state. The goal is to be able to stow 85 percent of the products that Amazon stocks (millions of items), but since the system can be installed within the same workflow that humans use, there’s no need to hit 100 percent. If the system can’t handle an item, it simply passes that item along to a human worker. This means that the system doesn’t even need to reach 85 percent before it can be useful, since if it can do even a small percentage of items, it can offload some of that basic stuff from humans. And if you’re a human who has to do a lot of basic stuff over and over, that seems like it might be nice. Thanks, robots!

But of course there’s a lot more going on here on the robotics side, and we spoke with Aaron Parness to learn more.

IEEE Spectrum: Stowing in an Amazon warehouse is a highly human-optimized task. Does this make things a lot more challenging for robots?

Aaron Parness, senior manager of applied science at Amazon Robotics & AI. Image: Amazon

Aaron Parness: In a home, in a hospital, on the space station, in these kinds of settings, you have these human-built environments. I don’t really think that’s a driver for us. The hard problem we’re trying to solve involves contact and also the reasoning. And that doesn’t change too much with the environment, I don’t think. Most of my team is not focused on questions of that nature, questions like, “If we could only make the bins this height,” or, “If we could only change this or that other small thing.” I don’t mean to say that Amazon won’t ever change processes or alter systems. Obviously, we are doing that all the time. It’s easier to do that in new buildings than in old buildings, but Amazon is still totally doing that. We just try to think about our product fitting into those existing environments.

I think there’s a general statement that you can make that when you take robots from the lab and put them into the real world, you’re always constrained by the environment that you put them into. With the stowing problem, that’s definitely true. These fabric pods are horizontal surfaces, so orientation with respect to gravity can be a factor. The elastic bands that block our view are a challenge. The stiffness of the environment also matters, because we’re doing this force-in-the-loop control, and the incredible diversity of items that Amazon sells means that some of the items are compressible. So those factors are part of our environment as well. So in our case, dealing with this unstructured contact, this unexpected contact, that’s the hardest part of the problem.

“Handling contact is a new thing for industrial robots, especially unexpected, unpredictable contact. It’s both a hard problem, and a worthy one.”
—Aaron Parness

What information do you have about what’s in each bin, and how much does that help you to stow items?

Parness: We have the inventory of what’s in the bins, and a bunch of information about each of those items. We also know all the information about the items in our buffer [to be stowed]. And we have a 3D representation from our perception system. But there’s also a quality-control thing where the inventory system says there’s four items in the bin, but in reality, there’s only three items in the bin, because there’s been a defect somewhere. At Amazon, because we’re talking about millions of items per day, that’s a regular occurrence for us.

The configuration of the items in each bin is one of the really challenging things. If you had the same five items (a soccer ball, a teddy bear, a T-shirt, a pair of jeans, and an SD card) and you put them in a bin 100 times, they’re going to look different in each of those 100 cases. You also get things that can look very similar. If you have a red pair of jeans or a red T-shirt and red sweatpants, your perception system can’t necessarily tell which one is which. And we do have to think about potentially damaging items—our algorithm decides which items should go to which bins and what confidence we have that we would be successful in making that stow, along with what risk there is that we would damage an item if we flip things up or squish things.

“Contact and clutter are the two things that keep me up at night.”
—Aaron Parness

How do you make sure that you don’t damage anything when you may be operating with incomplete information about what’s in the bin?

Parness: There are two things to highlight there. One is the approach and how we make our decisions about what actions to take. And then the second is how to make sure you don’t damage items as you do those kinds of actions, like squishing as far as you can.

With the first thing, we use a decision tree. We use that item information to claim all the easy stuff—if the bin is empty, put the biggest thing you can in the bin. If there’s only one item in the bin, and you know that item is a book, you can make an assumption it’s incompressible, and you can manipulate it accordingly. As you work down that decision tree, you get to certain branches and leaves that are too complicated to have a set of heuristics, and that’s where we use machine learning to predict things like, if I sweep this point cloud, how much space am I likely to make in the bin?
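
As an editorial aside, the structure Parness describes could be sketched roughly as follows: hand-written heuristics claim the easy branches, and a learned model is consulted only for the cluttered cases. The thresholds, item fields, and the predict_swept_space() stand-in below are hypothetical, not Amazon’s actual decision tree.

```python
# Rough sketch of a heuristics-plus-learning decision tree for stowing.
# Thresholds, item fields, and predict_swept_space() are hypothetical.

def predict_swept_space(point_cloud) -> float:
    """Stand-in for a learned model estimating how many centimeters of
    space a sweep of this point cloud is likely to create."""
    return 3.0

def choose_action(bin_contents, candidate_item, point_cloud):
    if not bin_contents:
        # Easy case: empty bin, stow the item directly.
        return ("stow_directly", candidate_item)

    if len(bin_contents) == 1 and bin_contents[0]["category"] == "book":
        # Single known-incompressible item: safe to push it firmly aside.
        return ("push_and_stow", candidate_item)

    # Complicated clutter: fall back to the learned estimate of how much
    # space a sweep would create, and only commit if it looks sufficient.
    if predict_swept_space(point_cloud) >= candidate_item["width_cm"]:
        return ("sweep_then_stow", candidate_item)

    return ("pass_to_human", candidate_item)

item = {"category": "toy", "width_cm": 8.0}
print(choose_action(bin_contents=[], candidate_item=item, point_cloud=None))
# -> ('stow_directly', {'category': 'toy', 'width_cm': 8.0})
```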

And this is where the contact-based manipulation comes in because the other thing is, in a warehouse, you need to have speed. You can’t stow one item per hour and be efficient. This is where putting force and torque in the control loop makes a difference—we need to have a high rate, a couple of hundred hertz loop that’s closing around that sensor and a bunch of special sauce in our admittance controller and our motion-planning stack to make sure we can do those motions without damaging items.
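
For readers unfamiliar with force-in-the-loop control, below is a bare-bones, one-dimensional admittance-control sketch running at a couple of hundred hertz. The gains and the read_force()/send_velocity() placeholders are invented for illustration; the production controller and motion-planning stack involve, as Parness puts it, considerably more special sauce.

```python
# Minimal 1-D admittance-control sketch: the measured contact force modifies
# the commanded motion so the arm yields instead of crushing an item.
# Gains, rates, and the read_force()/send_velocity() interfaces are
# hypothetical placeholders, not Amazon's actual control stack.

M, D = 2.0, 40.0          # virtual mass [kg] and damping [N*s/m]
F_MAX = 15.0              # clamp on the contact force used by the loop [N]
RATE_HZ = 200.0           # "a couple of hundred hertz" control loop
DT = 1.0 / RATE_HZ

def read_force() -> float:
    """Placeholder for the wrist force-torque sensor reading [N]."""
    return 0.0

def send_velocity(v: float) -> None:
    """Placeholder for streaming a velocity command to the arm [m/s]."""
    pass

def admittance_loop(v_desired: float, steps: int = 1000) -> None:
    v = 0.0
    for _ in range(steps):
        f = max(-F_MAX, min(F_MAX, read_force()))
        # Virtual dynamics: M * dv/dt = -D * (v - v_desired) - f,
        # so contact force pushes the commanded velocity back toward zero.
        dv = (-D * (v - v_desired) - f) / M
        v += dv * DT
        send_velocity(v)

admittance_loop(v_desired=0.05)   # creep forward at 5 cm/s until contact resists
```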

An overhead view of Amazon’s new stowing robot. Image: Amazon

Since you’re operating in these human-optimized environments, how closely does your robotic approach mimic what a human would be doing?

Parness: We started by doing it ourselves. We also did it ourselves while holding a robotic end effector. And this matters a lot, because you don’t realize that you’re doing all these kinds of fine-control motions, and you have so many sensors on your hand, right? This is a thing. But when we did this task ourselves, when we observed experts doing it, this is where the idea of motion primitives kind of emerged, which made the problem a little more achievable.

What made you use the motion primitives approach as opposed to a more generalized learning technique?

Parness: I’ll give you an honest answer—I was never tempted by reinforcement learning. But there were some in my team that were tempted by that, and we had a debate, since I really believe in iterative design philosophy and in the value of prototyping. We did a bunch of early-stage prototypes, trying to make a data-driven decision, and the end-to-end reinforcement learning seemed intractable. But the motion-primitive strategy actually turned me from a bit of a skeptic about whether robots could even do this job, and made me think, “Oh, yeah, this is the thing. We got to go for this.” That was a turning point, getting those motion primitives and recognizing that that was a way to structure the problem to make it solvable, because they get you most of the way there—you can handle everything but the long tail. And with the tail, maybe sometimes a human is looking in, and saying, “Well, if I play Tetris and I do this incredibly complicated and slow thing I can make the perfect unicorn shaped hole to put this unicorn shaped object into.” The robot won’t do that, and doesn’t need to do that. It can handle the bulk.

You really didn’t think that the problem was solvable at all, originally?

Parness: Yes. Parker Owan, who’s one of the lead scientists on my team, went off into the corner of the lab and started to set up some experiments. And I would look over there while working on other stuff, and be like, “Oh, that young guy, how brave. This problem will show him.” And then I started to get interested. Ultimately, there were two things, like I said: it was discovering that you could use these motion primitives to accomplish the bulk of the in-bin manipulation, because really that’s the hardest part of the problem. The second thing was on the gripper, on the end-of-arm tool.

“If the robot is doing well, I’m like, ‘This is achievable!’ And when we have some new problems, and then all of a sudden I’m like, ‘This is the hardest thing in the world!’ ”
—Aaron Parness

The end effector looks pretty specialized—how did you develop that?

Parness: Looking around the industry, there’s a lot of suction cups, a lot of pinch grasps. And when you have those kinds of grippers, all of a sudden you’re trying to use the item you’re gripping to manipulate the other items that are in the bin, right? When we decided to go with the paddle approach and encapsulate the item, it both gave us six degrees of freedom of control over the item, to make sure it wasn’t going into spaces we didn’t want it to, while also giving us a known engineering surface on the gripper. Maybe I can only predict in a general way the stiffness or the contact properties of the items that are in the bin, but I know I’m touching it with the back of my paddle, which is aluminum.

But then we realized that the end effector actually takes up a lot of space in the bin, and the whole point is that we’re trying to fill these bins up so that we can have a lot of stuff for sale on Amazon.com. So we say, okay, well, we’re going to stay outside the bin, but we’ll have this spatula that will be our in-bin manipulator. It’s a super simple tool that you can use for pushing on stuff, flipping stuff, squashing stuff.... You’re definitely not doing 27-degree-of-freedom human-hand stuff, but because we have these motion primitives, the hardware complemented that.

However, the paddles presented a new problem, because when using them we basically had to drop the item and then try to push it in at the same time. It was this kind of dynamic—let go and shove—which wasn’t great. That’s what led to putting the conveyor belts onto the paddles, which took us to the moon in terms of being successful. I’m the biggest believer there is now! Parker Owan has to kind of slow me down sometimes because I’m so excited about it.

It must have been tempting to keep iterating on the end effector.

Parness: Yeah, it is tempting, especially when you have scientists and engineers on your team. They want everything. It’s always like, “I can make it better. I can make it better. I can make it better.” I have that in me too, for sure. There’s another phrase I really love which is just, “so simple, it might work.” Are we inventing and complexifying, or are we making an elegant solution? Are we making this easier? Because the other thing that’s different about the lab and an actual fulfillment center is that we’ve got to work with our operators. We need it to be serviceable. We need it to be accessible and easy to use. You can’t have four Ph.D.s around each of the robots constantly kind of tinkering and optimizing it. We really try to balance that, but is there a temptation? Yeah. I want to put every sensor known to man on the robot! That’s a temptation, but I know better.

To what extent is picking just stowing in reverse? Could you run your system backwards and have picking solved as well?

Parness: That’s a good question, because obviously I think about that too, but picking is a little harder. With stowing, it’s more about how you make space in a bin, and then how you fit an item into space. For picking, you need to identify the item—when that bin shows up, the machine learning, the computer vision, that system has to be able to find the right item in clutter. But once we can handle contact and we can handle clutter, pick is for sure an application that opens up.

When I think really long term, if Amazon were to deploy a bunch of these stowing robots, all of a sudden you can start to track items, and you can remember that this robot stowed this item in this place in this bin. You can start to build up container maps. Right now, though, the system doesn’t remember.

Regarding picking in particular, a nice thing Amazon has done in the last couple of years is start to engage with the academic community more. My team sponsors research at MIT and at the University of Washington. And the team at University of Washington is actually looking at picking. Stow and pick are both really hard and really appealing problems, and in time, I hope I get to solve both!



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

Following the great success of the miniature humanoid robot DARwIn-OP that we developed, RoMeLa is proud to introduce the next-generation humanoid robot for research and education, BRUCE (Bipedal Robot Unit with Compliance Enhanced). BRUCE is an open-platform humanoid robot that utilizes the BEAR proprioceptive actuators, enabling it to have stunning dynamic performance capabilities never before seen in this class of robots. Originally developed at RoMeLa in a joint effort with Westwood Robotics, BRUCE will be made open source to the robotics community and will also be made available via Westwood Robotics.

BRUCE has a total of 16 DoF, is 70 cm in height, and weighs only 4.8 kg. With a 3,000 mAh lithium battery it can last for about 20 minutes of continuous dynamic motion. Besides its excellent dynamic performance, BRUCE is very robust and user-friendly, along with great compatibility and expandability. BRUCE makes humanoid robotics research efficient, safe, and fun.

[ Westwood Robotics ]

This video shows evoBOT, a dynamically stable and autonomous transport robot.

[ Fraunhofer IML ]

ASL Team wishes you all the best for 2023 :-)

[ ASL ]

Holidays are a magical time. But if you feel like our robot dog Marvin, the magic needs to catch up and find you. Keep your eyes and heart open for possibilities – jolliness is closer than you realize!

[ Accenture Baltics ]

In this Christmas clip, the robots of a swarm transport Christmas decorations and cooperate to carry the decorated tree. Each robot has enough strength to carry the decorations itself; however, no robot can carry the tree on its own. The solution: they carry the tree by working together!

[ Demiurge ]

Thanks, David!

Our VoloDrone team clearly got the holiday feels in snowy Germany while sling load testing cargo – definitely a new way of disposing of a Christmas tree before the New Year.

[ Volocopter ]

What if we race three commercially available quadruped robots for a bit of fun...? Out of the box configuration, ‘full sticks forward’ on the remotes on flat ground. Hope you enjoy the results ;-)

[ CSIRO Data61 ]

Happy Holidays From Veo!

[ Veo ]

In ETH Zurich’s Soft Robotics Lab, a white robot hand reaches for a beer can, lifts it up and moves it to a glass at the other end of the table. There, the hand carefully tilts the can to the right and pours the sparkling, gold-coloured liquid into the glass without spilling it. Cheers!

[ SRL ]

Bingo (aka Santa) found herself a new sleigh! All of us at CSIRO’s Data61 Robotics and Autonomous Systems Group wish everyone a Merry Christmas and Happy Holidays!

[ CSIRO Data61 ]

From 2020, a horse-inspired walking robot.

[ Ishikawa Minami Lab ]

Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions.

[ MRS CTU ]

GITAI has a fancy new office in Los Angeles that they’re filling with space robots.

[ GITAI ]

This Maryland Robotics Center seminar is from CMU’s Vickie Webster-Wood: “It’s Alive! Bioinspired and biohybrid approaches towards life-like and living robots.”

In this talk, I will share efforts from my group in our two primary research thrusts: Bioinspired robotics, and biohybrid robotics. By using neuromechanical models and bioinspired robots as tools for basic research we are developing new models of how animals achieve multifunctional, adaptable behaviors. Building on our understanding of animal systems and living tissues, our research in biohybrid robotics is enabling new approaches toward the creation of autonomous biodegradable living robots. Such robotic systems have future applications in medicine, search and rescue, and environmental monitoring of sensitive environments (e.g., coral reefs).

[ UMD ]



Even simple robotic grippers can perform complex tasks—so long as they’re smart about using their environment as a handy aide. This, at least, is the finding of new research from Carnegie Mellon University’s Robotics Institute.

In robotics, simple grippers are typically assigned straightforward tasks such as picking up objects and placing them somewhere. However, by making use of their surroundings, such as pushing an item against a table or wall, simple grippers can perform skillful maneuvers usually thought achievable only by more complex, fragile, and expensive multi-fingered artificial hands.

However, previous research on this strategy, known as “extrinsic dexterity,” often made assumptions about the way in which grippers would grasp items. This in turn required specific gripper designs or robot motions.

“Simple grippers are underrated.”
—Wenxuan Zhou, Carnegie Mellon University

In the new study, scientists used AI to overcome these limitations to apply extrinsic dexterity to more general settings and successfully grasp items of various sizes, weights, shapes and surfaces.

“This research may open up new possibilities in manipulation with a simple gripper,” says study lead author Wenxuan Zhou at Carnegie Mellon University. “Potential applications include warehouse robots or housekeeping robots that help people to organize their home.”

The researchers employed reinforcement learning to train a neural network. They had the AI system attempt random actions to grasp an object, rewarding the series of actions that led to success. The system ultimately adopted the most successful patterns of behavior; it learned, in so many words. After first training their system in a physics simulator, they then tested it on a simple robot with a pincer-like grip.
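
The recipe described here (random exploration in a physics simulator, with reward for action sequences that end in a successful grasp) has the shape of a standard reinforcement-learning loop. The sketch below shows that shape using invented GraspEnv and Agent stand-ins; it is not the CMU team’s code or their specific algorithm.

```python
# Shape of the training recipe described above: attempt actions in a physics
# simulator, reward action sequences that end in a successful grasp, and
# update the policy. GraspEnv and Agent are invented stand-ins.

import random

class GraspEnv:
    """Stand-in for a simulated bin, wall, and object."""
    def reset(self):
        return [0.0] * 8                       # observation vector

    def step(self, action):
        grasped = random.random() < 0.05       # pretend grasp outcome
        reward = 1.0 if grasped else 0.0       # sparse success reward
        return [0.0] * 8, reward, grasped      # obs, reward, done

class Agent:
    """Stand-in for a neural-network policy and its learning rule."""
    def act(self, obs):
        return [random.uniform(-1.0, 1.0) for _ in range(4)]  # explore

    def update(self, transition):
        pass                                   # gradient step would go here

env, agent = GraspEnv(), Agent()
for episode in range(1000):
    obs, done, steps = env.reset(), False, 0
    while not done and steps < 50:             # cap episode length
        action = agent.act(obs)
        next_obs, reward, done = env.step(action)
        agent.update((obs, action, reward, next_obs, done))
        obs, steps = next_obs, steps + 1
```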

The scientists had the robot attempt to grab items confined within an open bin that were initially oriented in ways that meant the robot could not pick them up. For example, the robot might be given an object that was too wide for its gripper to grasp. The AI needed to figure out a way to push the item against the wall of the bin so the robot could then grab it from its side.

“Initially, we thought the robot might try to do something like scooping underneath the object, as humans do,” Zhou says. “However, the algorithm gave us an unexpected answer.” After nudging an item against the wall, the robot pushed its top finger against the side of the object to lever it up, “and then let the object drop on the bottom finger to grasp it.”

In experiments, Zhou and her colleagues tested their system on items such as cardboard boxes, plastic bottles, a toy purse and a container of Cool Whip. These varied in weight, shape and how slippery they were. They found their simple grippers could successfully grasp these items with a 78 percent success rate.

“Simple grippers are underrated,” Zhou says. “Robots should exploit extrinsic dexterity for more skillful manipulation.”

In the future, the group hopes to generalize its findings to “a wider range of objects and scenarios,” Zhou says. “We are also interested in exploring more complex tasks with a simple gripper with extrinsic dexterity.”

The scientists detailed their findings 18 December at the Conference on Robot Learning in Auckland, New Zealand.



2022 was a huge year for robotics. Yes, I might say this every year, and yes, every year I might also say that each year is more significant than any other. But seriously: This year trumped them all. After a tough pandemic (which, let’s be clear, is still not over), conferences and events have started to come back, research has resumed, and robots have continued to make their way into the world. It really has been a great year.

And on a personal note, we’d like to thank you, all of you, for reading (and hopefully enjoying) our work. We’d be remiss if we didn’t also thank those of you who provide awesome stuff for us to write about. So, please enjoy this quick look back at some of our most popular and most impactful stories of 2022. Here’s wishing for more and better in 2023!

The Bionic-Hand Arms Race

Robotic technology can be a powerful force for good, but using robots to make the world a better place has to be done respectfully. This is especially true when what you’re working on has a direct physical impact on a user, as is the case with bionic limbs. Britt Young has a more personal perspective on this than most, and in this article, she wove together history, technology, and her own experience to explore bionic limb design. With over 100,000 views, this was our most popular robotics story of 2022.

For Better or Worse, Tesla Bot Is Exactly What We Expected

After Elon Musk announced Tesla’s development of a new humanoid robot, we were left wondering whether the car company would be able to somehow deliver something magical. We found out this year that the answer is a resounding “Not really.” There was nothing wrong with Tesla Bot, but it was immediately obvious that Tesla had not managed to do anything groundbreaking with it, either. While there is certainly potential for the future, at this point it’s just another humanoid robot with a long and difficult development path ahead of it.

Autonomous Drones Challenge Human Champions in First “Fair” Race

Usually, the kinds of things that humans are really good at and the kinds of things that robots are really good at don’t overlap all that much. So, it’s always impressive when robots get anywhere close to human performance in activities that play to our strengths. This year, autonomous drones from the University of Zurich managed for the first time to defeat the best human pilots in the world in a “fair” drone race, where both humans and robots relied entirely on their onboard brains and visual perception.

How Robots Can Help Us Act and Feel Younger

Gill Pratt has a unique perspective on the robotics world, going from academia to DARPA program manager to his current role as CEO of the Toyota Research Institute (TRI). His leadership position at TRI means that he can visualize how to make robots that best help humanity, and then actually work towards putting that vision into practice—commercially and at scale. His current focus is assistive robots that help us live fuller, happier lives as we age.

DARPA’s RACER Program Sends High-Speed Autonomous Vehicles Off-Road

Getting autonomous vehicles to drive themselves is not easy, but the fact that they work even as well as they do is arguably due to the influence of DARPA’s 2005 Grand Challenge. That’s why it’s so exciting to hear about DARPA’s newest autonomous vehicle challenge, aimed at putting fully autonomous vehicles out into the wilderness to fend for themselves completely off-road.

Boston Dynamics AI Institute Targets Basic Research

Boston Dynamics is arguably best known for developing amazing robots with questionable practicality. As the company seeks to change that by exploring commercial applications for its existing platforms, founder Marc Raibert has decided to keep focusing on basic research by starting a completely new institute with the backing of Hyundai.

Alphabet’s Intrinsic Acquires Majority of Open Robotics

The Open Source Robotics Foundation (OSRF) spun out of Willow Garage 10 years ago. This year’s acquisition of most of the Open Robotics team by Alphabet’s Intrinsic represents a milestone for the Robot Operating System (ROS). The fact that it’s even possible for Open Robotics to move on like this is a testament to just how robust the ROS community is. The Open Robotics folks will still be contributing to ROS, with a much smaller OSRF supporting the community directly. But it’s hard to say goodbye to what OSRF used to be.

The 11 Commandments of Hugging Robots

Hugging robots is super important to me, and it should be important to you, too! And to everyone, everywhere! While, personally, I’m perfectly happy to hug just about any robot, very few of them can hug back—at least in part because the act of hugging is a complex human interaction task that requires either experience being a human or a lot of research for a robot. Much of that research has now been done, giving robots some data-driven guidelines about how to give really good hugs.


Labrador Addresses Critical Need With Deceptively Simple Home Robot

It’s not often that we see a new autonomous home robot with a compelling use case. But this year, Labrador Systems introduced Retriever, a semi-autonomous mobile table that can transport objects for folks with mobility challenges. If Retriever doesn’t sound like a big deal, that’s probably because you have no use for a robot like this; but it has the potential to make a huge impact on people who need it.

Even as It Retires, ASIMO Still Manages to Impress

ASIMO has been setting the standard for humanoid robots for literally a decade. Honda’s tiny humanoid was walking, running, and jumping back in 2011 (!)—and that was just the most recent version. ASIMO and its predecessors have been under development since the mid-1980s, which is some seriously ancient history as far as humanoid robots go. Honda decided to retire the little white robot this year, but ASIMO’s legacy lives on in Honda’s humanoid robot program. We’ll miss you, buddy.




Video Friday is your weekly selection of awesome robotics videos (special holiday edition!) collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

We hope you have an uplifting holiday season! Spot was teleoperated by professional operators; don’t try this at home.

[ Boston Dynamics ]

This year, our robot Husky was very busy working for the European Space Agency (ESA). But will he have to spend Christmas alone, apart from his robot friends at the FZI – alone on the moon? His friends want to change that! So, they train very hard to reunite with Husky! Will they succeed?

[ FZI ]

Thanks, Arne!

We heard Santa is starting to automate at the North Pole and loads the sledge with robots now. Enjoy our little Christmas movie!

[ Leverage Robotics ]

Thanks, Roman!

A self-healing soft robotic finger developed by VUB-imec Brubotics and FYSC sends “MERRY XMAS” to the world in Morse code.

[ Brubotics ]

Thanks, Bram!

After the research team made some gingerbread houses, we wanted to see how Nadia would do walking over them. Happy Holidays everyone!

[ IHMC Robotics ]

In this festive robotic Christmas sketch, a group of highly advanced robots come together to celebrate the holiday season. The “Berliner Hochschule für Technik” wishes a merry Christmas and a happy new year!

[ BHT ]

Thanks, Hannes!

Our GoFa cobot had a fantastic year and is ready for new challenges in the new year, but right now, it’s time for some celebrations with some delicious cobot-made cookies.

[ ABB ]

Helping with the office tree, from Sanctuary AI.

Flavor text from the video description: “Decorated Christmas trees originated during the 16th-century in Germany. Protestant reformer Martin Luther is known for being among the first major historical figures to add candles to an evergreen tree. It is unclear whether this was, even then, considered to be a good idea.”

[ Sanctuary ]

Merry Christmas from qbrobotics!

[ qbrobotics ]

Christmas, delivered by robots!

[ Naver Labs ]

Bernadett dressed Ecowalker in Xmas lights. Enjoy the holidays!

[ Max Planck ]

Warmest greetings this holiday season and best wishes for a happy New Year from Kawasaki Robotics.

[ Kawasaki Robotics ]

Robotnik wishes you a Merry Christmas 2022.

[ Robotnik ]

CYBATHLON wishes you all a happy festive season and a happy new year 2023!

[ Cybathlon ]

Here’s what LiDAR-based SLAM in a snow gust looks like. Enjoy the weather out there!

[ NORLAB ]

We present advances on the development of proactive control for online individual user adaptation in a welfare robot guidance scenario. The proposed control approach can drive a mobile robot to autonomously navigate in relevant indoor environments. All in all, this study captures a wide range of research from robot control technology development to technological validity in a relevant environment and system prototype demonstration in an operational environment (i.e., an elderly care center).

[ Paper ]

Thanks, Poramate!

“Every day in a research job :)”

[ Chengxu Zhou ]

Robots like Digit are purpose-built to do tasks in environments made for humans. We aren’t trying to just mimic the look of people or make a humanoid robot. Every design and engineering decision is looked at through a function-first lens. To easily walk into warehouses and work alongside people, to do the kinds of dynamic reaching, carrying, and walking that we do, Digit has some similar characteristics. Our Co-Founder and Chief Technology Officer Jonathan Hurst discusses the difference between humanoid and human-centric robotics.

[ Agility Robotics ]

This year, the KUKA Innovation Award is all about medicine and health. After all, new technologies are playing an increasingly important role in healthcare and will be virtually indispensable in the future. Researchers, developers and young entrepreneurs from all over the world submitted their concepts for the “Robotics in Healthcare Challenge”. An international jury of experts evaluated the concepts and selected our five finalists.

[ Kuka ]

In the summer of 2003, two NASA rovers began their journeys to Mars at a time when the Red Planet and Earth were the nearest they had been to each other in 60,000 years. To capitalize on this alignment, the rovers had been built at breakneck speed by teams at NASA’s Jet Propulsion Laboratory. The mission came amid further pressures, from mounting international competition to increasing public scrutiny following the loss of the space shuttle Columbia and its crew of seven. NASA was in great need of a success.
“Landing on Mars” is the story of Opportunity and Spirit surviving a massive solar flare during cruise, the now well-known “six minutes of terror,” and what came close to being a mission-ending software error for the first rover once it was on the ground.

[ JPL ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

Well, now humans aren’t necessary for launching drones, landing drones, charging drones, or flying drones. Thanks, Skydio!

[ Skydio ]

Do not underestimate the pleasure of hearing little metal feet climbing up metal walls.

[ Science Robotics ]

The latest in the Zoox testing series, this video showcases how Zoox tests the maneuverability capabilities of its robotaxi, which are critical for operation in dense urban environments. Four-wheel steering, bidirectional design, and active suspension are some of the features integrated into the Zoox robotaxi to ensure every ride is a smooth ride.

[ Zoox ]

Thanks, Whitney!

The Ligō device is a novel 3D bioprinting platform that supports the functional healing of skin tissue after acute skin injuries such as extensive burns. It is developed in Australia by an interdisciplinary team at Sydney-based startup Inventia Life Science. The Ligō robot prints tiny droplets containing the patient’s skin cells and optimized biomaterials into the wound directly in the operating room, combining the Kuka LBR Med and Inventia’s patented 3D bioprinting technology. In this way, tissue-guided regeneration is stimulated, allowing the body to heal itself and restore healthy skin that improves the quality of life for skin-injury survivors.

[ Inventia ]

In the first quarter of 2022, our group demoed ANYmal and Spot carrying out automated inspection at Chevron’s blending plant in Ghent, Belgium.

[ ORI ]

I have to think that for teleoperation, this is much harder than it looks.

[ Sanctuary AI ]

Meet the software development engineers from Amazon’s Global Ops Robotics who are working together to deliver innovations that will shape the future of Amazon operations.

[ Amazon ]

This video highlights the impact of Covariant’s AI-powered Robotic Putwall, at Capacity, a third-party logistics company serving some of the world’s largest e-commerce brands. Affectionately named Waldo, the autonomous put wall has been fulfilling thousands of customer orders at over 500 picks per hour, with less than 0.1 percent of them needing human intervention.

[ Covariant ]

What does Moxie do? Best to just ask Moxie.

[ Embodied ]

I’m not sure what this is, but I’ll be watching!

[ Fraunhofer ]

It still kind of blows my mind that you can just go and buy yourself a robot dog.

[ Trossen ]

Here are a series of talks from the Can We Build Baymax? workshop, focusing on education and open source for humanoid robots.

[ CWBB ]

This University of Pennsylvania GRASP on Robotics talk is from Harold Soh at the National University of Singapore: “Towards Trustworthy Robots That Interact With People.”

What will it take to develop robots that work with us in real-world tasks? In this talk, we’ll discuss some of our work across the autonomy stack of a robot as we make progress towards an answer. We’ll begin with multimodal sensing and perception, and then move on to modeling humans with little data. We’ll end with the primary insights gained in our journey and a discussion of challenges in deriving robots that we trust to operate in social environments.

[ UPenn ]



Today’s announcement of the acquisition of the Open Source Robotics Corporation by Intrinsic has generated a lot of questions, and no small amount of uncertainty about how this will affect the future of ROS. We have a bunch more information in this article, and there are also three blog posts you can take a look at: one from Open Robotics, one from Intrinsic, and one on ROS Discourse.

Earlier this week, we were able to speak with Brian Gerkey, co-founder and CEO of Open Robotics, alongside Wendy Tan White, CEO of Intrinsic, to ask them about the importance of this partnership and how it’s going to change things for the ROS community.

IEEE Spectrum: Why is Intrinsic acquiring OSRC?

Brian Gerkey: Things are really different from how they were 10 years ago, when we started OSRF. At that time, we were just starting to see companies coming onto the scene and rolling out robots in a serious way, but now, the robotics industry has really taken off. Which is a great thing—but that’s also meant that we here at Open Robotics, with our small independent team, have been feeling the pressure. We’re trying to support this really broad community, and that has become increasingly difficult for us to do as a small company with limited resources. And so, we were really excited by the opportunity to team up with Intrinsic as a partner who is philosophically aligned with us, and who is going to support us as we continue doing this open source work while also building some industry-hardened ROS solutions on top.

Wendy Tan White: When Brian talks about common alignment, our whole mission at Intrinsic is about democratizing robotics. The demand is there now, but access is still limited. It’s still very much either the preserve of researchers, or of heavy industry that’s been using robots in the same way for the last 30 years. As an example, when you wanted to build a website in the old days, you’d have to build your own servers, your own middleware, and your own front end. But now, you can knock up a website tomorrow and add commerce and actually be running a business. That kind of access isn’t there yet for robotics, and I feel like the world needs that. To me, what Brian has done with the ROS community is to try to lift that, and I think that if we join forces, we can lift everyone together.

Open Robotics’ model has been to remain independent while helping other companies do what they want to do with ROS. Why acquire Open Robotics as opposed to continuing that relationship?

White: If you think about a model like Linux and Red Hat, Red Hat became almost like a commercial or industrialized arm of Linux. That commercialization and hardening around Red Hat meant that industry was willing to commit to it more broadly, and what Brian was finding at Open Robotics is that he was starting to get that pull to build to that level, and then ended up building loads of customized things on top of ROS when really what was needed was a more industrialized platform. But he didn’t want to force the ROS community into that alone. So, the way that we see this is that we’d be more like the Red Hat to the Linux of OSRF. And we’re actually going to carry on investing in OSRF, to give it more stability. I don’t think Brian would have wanted to do this unless he felt like OSRF was going to stay independent.

Gerkey: Yes, absolutely. And I think one thing to keep in mind is that in terms of how open source communities are typically structured, the way that Open Robotics has done it for the last 10 years has been the exception. The situation where the foundation has not just a responsibility for governance and organization, but also happens to employ (directly or indirectly) the core developer team, is a very uncommon situation. Much more common is what we’re going to end up with on the other side of this week, which is that the foundation is a relatively small focused entity that does all of the non-code development activities, while the people who are developing the code are at companies like Intrinsic (but not only Intrinsic) where they’re building products and services on top.

And in terms of my own motivation here, I’ve been doing open source robot software since the late 1990s, even before ROS. I’ve built a career out of this. I wouldn’t go into this transition if I didn’t completely believe it was going to be good for that mission that I’ve committed myself to, personally and professionally. I am highly confident that this is going to be a very good move for the community, for the platform, for our team, and I think we’re going to build great things with Intrinsic.

“If you think about a model like Linux and Red Hat, Red Hat became almost like a commercial or industrialized arm of Linux … So, the way that we see this is that we’d be more like the Red Hat to the Linux of OSRF.” —Wendy Tan White

There are many other companies who contribute substantially to ROS, and who understand the value of the ROS ecosystem. Why is Intrinsic the right choice for Open Robotics?

Gerkey: We thought hard about this. When our leadership team recognized the situation we talked about earlier—where the demands on us from the robotics community were getting to the point where we were not going to be able to do justice to this whole community that we were responsible for supporting on our own—we decided that the best way for us to be able to do that would be to join with a larger partner. We went through a lengthy strategic process and considered lots and lots of partners. I approached Stefan Schaal [Chief Science Officer at Intrinsic], who was actually on my thesis proposal committee at USC 20 years ago, thinking that somewhere within the Alphabet universe there might be a good home for us. And I was honestly surprised as Stefan told me more about what Intrinsic was doing, and about their vision to (as Wendy puts it) democratize access to robotics by building a software platform that makes robotic applications easier to develop, deploy, and maintain. That sounded a whole lot like what we’re trying to do, and it was clear pretty quickly that Intrinsic was the right match.

What is Intrinsic actually acquiring?

Gerkey: The OSRC and OSRC Singapore teams will be joining Intrinsic as part of the transaction. But what’s not going is important: ROS, Gazebo, ROSCon, TurtleBot—the ownership of the trademarks, the domain names, the management of the websites, the operations of all those things, all of that remains with OSRF. And that gets back to what I was talking about earlier: that’s the traditional role for a non-profit foundation, as a steward of an open source community.

Can you tell us how much the acquisition is for?

White: We can’t talk about the amount. But I really felt that it was fair value, and it’s in our best interest to make sure that the team is happy and that OSRF is happy through this process.

How many people will be working at OSRF after the acquisition?

Gerkey: Vanessa Yamzon Orsi is going to step up and take over as CEO. Geoffrey Biggs will step up as CTO. We’re going to have some additional outside engineers to help with running some of the infrastructure. And then from there, we’ll assess and see how big OSRF needs to be. So it’s going to be a much smaller organization, but that’s on purpose, and it’s doable because it’s no longer operating the consulting business that Open Robotics has historically been known for.

How much of the core ROS code is currently being generated and maintained by the community, and how much is being generated and maintained by the team that has been acquired by Intrinsic? How has that been changing over time?

Gerkey: Our team certainly spends more of our time on the core of ROS than other organizations tend to, and that’s in part because of that legacy of expertise that started at Willow Garage. It’s also because historically, that’s been the hardest part of the ecosystem to get external contributions to. When people start using ROS and want to contribute something, they’re much more likely to want to contribute a new thing, like a driver for a new sensor, or a new algorithm, and it’s harder to get volunteers to contribute to the core. Having said that, one of the developments over the last couple of years is that we introduced the ROS 2 technical steering committee, and that has brought in core contributions from folks like Bosch and iRobot.

“OSRC teams will be joining Intrinsic as part of the transaction. But what’s not going is important: ROS, Gazebo, ROSCon, TurtleBot—the ownership of the trademarks, the domain names, the management of the websites, the operations of all those things, all of that remains with OSRF.” —Brian Gerkey

But should we be concerned that many of the folks who have been making core ROS contributions at Open Robotics since Willow will now be moving to Intrinsic, where their focus may be different?

Gerkey: That’s a totally fair question. These are people who are well known within the community for the work that they’ve done. But I think that should be one of the most encouraging things for folks who are hearing this news: the reason these people are known is that they’ve established a track record of good action within the community. They’ve spent years and years making open source contributions. And what I would ask of the community is to give them the benefit of the doubt, that they’re going to continue doing that. That’s what they’ve always done, and that’s what we intend to keep doing.

White: I think that’s the reason Brian chose us rather than the other partners he could have been with—because Intrinsic will provide that space and latitude. Why? Because it’s actually symbiotic. I’ve never seen a step change in any industry with open source unless those relationships are symbiotic. And Alphabet has a good track record of honoring that too, and striking that balance of understanding.

I’m a believer that actions and evidence speak for themselves better than me giving you some bullshit story about how it’s going to be. I hope you will hold us to it.

Broadly speaking, how close do you think the alignment is between Intrinsic’s goals as an independent company, and the goal of supporting core ROS functionality and contributing to the ROS community?

White: I think it’s very close to where Brian was trying to take the whole of Open Robotics anyway. If you grow a set of libraries and tooling organically through a community, the problem you’ve got is that for it to reach the industrial quality that businesses want, it really will take something like Intrinsic and Alphabet to make that happen. The incumbent industry suppliers have no interest in shifting to that model. The startups do, but they’re finding it really hard to break into old industry. We’re able to bridge the two, and I think that’s the difference.

Brian, you say in one of the blog posts that being part of Intrinsic is a “big opportunity” that will have “long-term benefits” for the ROS community. Can you elaborate on that?

Gerkey: At a high level, the real advantage is going to be that there will be more sustained investment in the core platform that people have always wanted us to improve. Given the way Open Robotics operated historically, that was always a thing that we tended to do in the margins. Rarely did a customer come to us and say, “we’d like this item from your technical roadmap implemented for the next version.” It was much more like, “here’s what we need to make our application work tomorrow.” And so we’ve always had a limited ability to make the longer range investments in the platform, and so we’re going to be in a much better position to do that with Intrinsic.

To be more specific, if you look at Intrinsic’s near-term focus on industrial manufacturing, I think we can expect to see some really great synergies with the ROS Industrial community. Intrinsic has internally developed some tools and algorithms that I think would be interesting to the community, and there are discussions about how to contribute those. So, better and more consistent involvement in the core platform, along with specific improvements for industrial use cases, is probably what people should look for in the near term.

“How is the community going to react? … There are certainly going to be people in the community who are not convinced; there are going to be folks out there who react negatively to this. And it’s going to be on us to bring them around over time.” —Brian Gerkey

What was your biggest concern about this partnership, and how did you resolve that concern?

Gerkey: It’s a coin toss for me between “what is my team going to think about this,” and “what is the community going to think about this.” Those were my two biggest concerns. And not because this is a bad or borderline thing and I’m going to have to convince people about some shady deal, but just more like, this is a big surprise. This has been consistently the theme as we’ve disclosed things to members of our team: the first reaction is, “what?!” Before they can even get to deciding if it’s good or bad, it’s just really different from what they expected. But then we tell them about it, and they can see it as a great opportunity, which has helped me feel better about it.

And then how is the community going to react? I mean, we’re going to find out this week. I’d say that we’ve done everything we can in good faith to structure the deal and make plans so that we are acting in the best interests of the community. We have the backing of the current OSRF to do this, and that’s a big endorsement. There are certainly going to be people in the community who are not convinced; there are going to be folks out there who react negatively to this. And it’s going to be on us to bring them around over time. We can only do that through action, we can’t do that through promises.

White: My greatest concern has been about the community. As Brian said, we sort of tested it out with a couple of folks, and even though there’s surprise, there’s normally also genuine excitement and curiosity. But there’s also some skepticism. And my own experience with dev communities around that, is that the only way to prove ourselves is to do it together.



Today, Open Robotics, which is the organization that includes the nonprofit Open Source Robotics Foundation (OSRF) as well as the for-profit Open Source Robotics Corporation (OSRC), is announcing that OSRC is being acquired by Intrinsic, a standalone company within Alphabet that’s developing software to make industrial robots intuitive and accessible.

Open Robotics is of course the organization that spun off from Willow Garage in 2012 to provide some independent structure and guidance for ROS, the Robot Operating System. Over the past dozen-ish years, ROS has expanded from specialized software for robotics nerds into a powerful platform for research and industry, supported by an enthusiastic and highly engaged open source community. Open Robotics, meanwhile, branched out in 2016 from a strict non-profit to also take on some high-profile projects for the likes of the Toyota Research Institute and NVIDIA. It has supported itself commercially by leveraging its experience and expertise in ROS development. Open Robotics currently employs more than three dozen engineers, most of whom are part of the for-profit corporation.

Intrinsic is a recent graduate from X, Alphabet’s moonshot factory; the offshoot’s mission is to “democratize access to robotics” through software tools that give traditional industrial robots “the ability to sense, learn, and automatically make adjustments as they’re completing tasks.” This, the thinking goes, will improve versatility while lowering costs. Intrinsic is certainly not unique in harboring this vision, which can be traced back to Rethink Robotics (if not beyond). But Intrinsic is focused on the software side, relying on learning techniques and simulation to help industrial robots adapt and scale in a way that won’t place an undue burden on industries that may not be used to flexible automation. Earlier this year, Intrinsic acquired intelligent automation startup Vicarious, which had been working on AI-based approaches to making robots “as commonplace and easy to use as mobile phones.”

Intrinsic’s acquisition of Open Robotics is certainly unexpected, and the question now is what it means for the ROS community and the future of ROS itself. We’ll take a look at the information that’s available today, and then speak with Open Robotics CEO Brian Gerkey as well as Intrinsic CEO Wendy Tan White to get a better understanding of exactly what’s happening.

Before we get into the details, it’s important to understand the structure of Open Robotics, which has been kind of confusing for a long time—and probably never really mattered all that much to most people until this very moment. Open Robotics is an “umbrella brand” that includes OSRF (the Open Source Robotics Foundation), OSRC (the Open Source Robotics Corporation), and OSRC-SG, OSRC’s Singapore office. OSRF is the original non-profit Willow Garage spinout, the primary mission of which was “to support the development, distribution, and adoption of open source software for use in robotics research, education, and product development.” Which is exactly what OSRF has done. But OSRF’s status as a non-profit placed some restrictions on the ways in which it was allowed to support itself. So, in 2016, OSRF created the Open Source Robotics Corporation as a for-profit subsidiary to take on contract work doing ROS development for corporate and government clients. An OSRC office in Singapore opened in 2019. If you combine these three entities, you get Open Robotics.

The reason why these distinctions are super important today is because Intrinsic is acquiring OSRC and OSRC-SG, but not OSRF. Or, as Open Robotics CEO Brian Gerkey puts it in a blog post this morning:

Intrinsic is acquiring assets only from these for-profit subsidiaries, OSRC and OSRC-SG. OSRF continues as the independent nonprofit it’s always been, with the same mission, now with some new faces and a clearer focus on governance, community engagement, and other stewardship activities. That means there is no disruption in the day-to-day activities with respect to our core commitment to ROS, Gazebo, Open-RMF, and the entire community.

To be clear: Intrinsic is not acquiring ROS. Intrinsic is not acquiring Gazebo. Intrinsic is not taking over technical roadmaps, the build infrastructure, TurtleBot, or ROSCon. As Open Robotics’ Community Director Tully Foote says in this ROS Discourse discussion forum post: “Basically, if it is an open-source tool or project it will stay with the Foundation.” What Intrinsic is acquiring is almost all of the Open Robotics team, which includes many of the folks who were fundamental architects of ROS at Willow Garage and founding members of OSRF, but who have been focused primarily on the corporation side (OSRC) rather than the foundation side (OSRF) for the past five years.

Still, while ROS itself is not part of the transaction, it’s not like OSRC hasn’t been a huge driving force behind ROS development and maintenance—in large part because of the folks who work there. Now, the vast majority of those folks will be working for a different company with its own priorities and agenda that (I would argue) simply cannot be as closely aligned with the goals of the broader ROS community as was possible when OSRC was independent. And this whole thing reminds me a little bit of when Google/Alphabet swallowed a bunch of roboticists back in 2013; while those roboticists weren’t exactly never heard from again, there was certainly a real sense of disappointment and community loss.

Hopefully, this will not be the case with Intrinsic. Gerkey’s blog post delivers a note of optimism:

With Intrinsic’s investment in ROS, we anticipate long-term benefits for the entire community through increased development on the core open source platforms. The team at Intrinsic includes many long-time ROS and Gazebo users and we have come to realize how much they value the ROS community and want to maintain and contribute.

For its part, Intrinsic’s blog post from CEO Wendy Tan White focuses more on how awesome the Open Robotics team is:

For years, we’ve admired Brian and his team’s relentless passion, skill, and dedication making the Robot Operating System (ROS) an essential platform for robotics developers worldwide (including us here at Intrinsic). We’re looking forward to supporting Brian and members of the OSRC team as they continue to push the boundaries of open-source development and what’s possible with ROS.

There’s still a lot about this acquisition that we don’t know. We don’t know the exact circumstances surrounding it, or why it’s happening now. But it sounds like the business model of OSRC may not have been sustainable, or not compatible with Open Robotics’ broader vision, or both. We also don’t know the acquisition price, which might provide some additional context. The scariest part, however, is that we just don’t know what’s going to happen next. Both Brian Gerkey and Wendy Tan White seem to be doing their best to make the community feel comfortable with (or at least somewhat accepting of) this transition for OSRC. And I have no reason to think that they’re not being honest about what they want to happen. It’s just important to remember that Intrinsic is buying OSRC primarily because buying OSRC is good for Intrinsic.

If, as Gerkey says, this partnership turns out to be a long-term benefit for the entire ROS community, then that’s wonderful, and I’m sure that’s what we’re all hoping for. In the post from Foote on the ROS Discourse discussion forum, Intrinsic CTO Torsten Kroeger says very explicitly that “the top priority for the OSRC team is to nurture and grow the ROS community.” And according to Foote, the team will have “dedicated bandwidth to work on core ROS packages, Gazebo, and Open-RMF.” But of course, priorities can change, and however things end up, OSRC will still be owned by Intrinsic. Fundamentally, all we can do is trust that the people involved (many of whom the community knows quite well) will be doing their best to ensure that this is the best path forward for everyone.

The other thing to remember here is that, as important as the broader ROS community is, everyone at Open Robotics is also a part of the ROS community, and we should (and do) want what’s best for them. These are people who have committed a huge chunk of their lives to ROS; expecting that they’ll all keep doing so indefinitely out of inertia or obligation or whatever is just not realistic or kind. If the OSRC team is excited about Intrinsic and wants to try something new, that’s fantastic, more power to them, and I hope they all get massive raises. They deserve it.

And much of what happens going forward is up to the ROS community itself, as it has always been. Are you worried about updates or packages getting maintained? Contribute some code. Worried about support? Participate in ROS Answers or add some documentation to the wiki. Worried about long-term vision or governance? There are plenty of ways to volunteer your time and expertise and enthusiasm to help keep the ROS community robust and healthy. And from the sound of things, this is exactly what the OSRC team hopes to be doing, just from inside Intrinsic instead of inside Open Robotics.

Our interview with Open Robotics CEO Brian Gerkey and Intrinsic CEO Wendy Tan White is here. And if you have specific questions, there’s a ROS Discourse thread for them here, where the Intrinsic and Open Robotics teams will be doing their best to provide answers.



Sometime next year, an autonomous robot might deliver food from an airport restaurant to your gate.

The idea for Ottobot, a delivery robot, came out of a desire to help restaurants meet the increased demand for takeout orders during the COVID-19 pandemic. Ottobot can find its way around indoor spaces where GPS signals can’t reach.


Founded: 2020

Headquarters: Santa Monica, Calif.

Founders: Ritukar Vijay, Pradyot Korupolu, Ashish Gupta, and Hardik Sharma

Ottobot is the brainchild of Ritukar Vijay, Ashish Gupta, Hardik Sharma, and Pradyot Korupolu. The four founded Ottonomy in 2020 in Santa Monica, Calif. The startup now has 40 employees in the United States and India.

Ottonomy, which has raised more than US $4.5 million in funding, received a Sustainability Product of the Year Award last year from the Business Intelligence Group.

Today Ottobot is being piloted not only by restaurants but also by grocery stores, postal services, and airports.

Vijay and his colleagues say they focused on three qualities: full autonomy, ease of maneuverability, and accessibility.

“The robot is not replacing any staff members; it’s aiding them in their duties,” Vijay says. “It’s rewarding seeing staff members at our pilot locations so happy about having the robot helping them do their tasks. It’s also very rewarding seeing people take their delivery order from the Ottobot.”

Focusing on autonomous technology

For 15 years Vijay, an IEEE senior member, worked on autonomous robots and vehicles at companies including HCL Technologies, Tata Consultancy Services, and THRSL. In 2019 he joined Aptiv, an automotive technology supplier headquartered in Dublin. There he worked on BMW’s urban mobility project, which is developing autonomous transportation and traffic-control systems.

During Vijay’s time there, he noticed that Aptiv and its competitors were focusing more on developing electric cars than on autonomous ones. He figured it was going to take a long time for autonomous cars to become mainstream, so he began to look for niche applications. He hit upon restaurants and other businesses that were struggling to keep up with deliveries.

Ottobot reduces delivery costs by up to 70 percent, Vijay says, and it can cut carbon emissions for short-distance deliveries by almost 40 percent.

Using wheelchair technology, the Ottobot can maneuver over curbs and other obstacles.

Ottobot as an airport assistant

Within the first few months of the startup’s launch, Vijay and the Ottonomy team began working with Cincinnati/Northern Kentucky Airport. The facility wanted to give passengers the option of having food from the airport’s restaurants and convenience stores delivered to their gate, but it couldn’t find an autonomous robot that could navigate the crowded facility without GPS access, Vijay says.

To substitute for GPS, the robot uses 3D lidars, cameras, and ultrasonic sensors. The lidars provide geometric information about the environment, the cameras collect semantic and depth data, and short-range ultrasonic sensors ensure that the Ottobot detects poles and other nearby obstructions. The Ottonomy team wrote its own software that lets the robot build high-information maps: a 3D digital twin of the facility.
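Ottonomy hasn’t published its mapping or localization software, so what follows is only a minimal sketch of the general idea that description points at: pre-build an occupancy map of the facility (the “digital twin”), then, with no GPS fix available, score candidate poses near the wheel-odometry estimate by how well the current lidar scan lines up with that map. The grid, poses, and scan values below are hypothetical placeholders, not Ottonomy’s data.

```python
import numpy as np

# Toy occupancy grid standing in for the facility's "digital twin":
# 1 = occupied cell (wall, pole), 0 = free space. Resolution: 0.5 m per cell.
RES = 0.5
grid = np.zeros((40, 40), dtype=np.uint8)
grid[0, :] = grid[-1, :] = grid[:, 0] = grid[:, -1] = 1   # outer walls
grid[20, 10:30] = 1                                       # an interior wall

def scan_score(pose, ranges, angles):
    """Score how well a lidar scan matches the map from a candidate pose.

    pose: (x, y, theta) in meters and radians; ranges/angles describe the scan.
    Returns the fraction of beam endpoints that land on occupied cells,
    a crude stand-in for a likelihood-field model (higher is better).
    """
    x, y, theta = pose
    ex = x + ranges * np.cos(theta + angles)
    ey = y + ranges * np.sin(theta + angles)
    ix = np.clip((ex / RES).astype(int), 0, grid.shape[1] - 1)
    iy = np.clip((ey / RES).astype(int), 0, grid.shape[0] - 1)
    return grid[iy, ix].mean()

def localize(ranges, angles, candidates):
    """Return the candidate pose whose scan best matches the map."""
    return max(candidates, key=lambda p: scan_score(p, ranges, angles))

# Candidates are sampled around a (hypothetical) wheel-odometry guess; the
# best-scoring pose then serves as the position fix that GPS would normally give.
odom = (9.0, 6.0, 0.1)
candidates = [(odom[0] + dx, odom[1] + dy, odom[2])
              for dx in np.linspace(-0.5, 0.5, 5)
              for dy in np.linspace(-0.5, 0.5, 5)]
angles = np.linspace(-np.pi, np.pi, 180, endpoint=False)
ranges = np.full_like(angles, 4.0)   # placeholder scan; a real one comes from the lidar
best_pose = localize(ranges, angles, candidates)
```

A production system would layer in the camera and ultrasonic data the article mentions and search over heading as well; this only illustrates the lidar-versus-map step.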

Vijay says there’s a safety mechanism in place that lets a staff member “take over the controls if the robot can’t decide how to maneuver on its own, such as through a crowd.” The safety mechanism also notifies an Ottonomy engineer if the robot’s battery runs low on power, Vijay says.

“Imagine passengers are boarding their plane at a gate,” he says. “Those areas get very crowded. During the robot’s development process, one of our engineers joked around, saying that the only way to navigate a crowd of this size was to move sideways. We laughed at it then, but three weeks later we started developing a way for the robot to walk sideways.”

The team took its inspiration from electric-powered wheelchairs. All four of the Ottobot’s wheels are powered and can steer simultaneously—which allows it to move laterally, swerve, and take zero-radius turns.
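A drive in which every wheel is both powered and steered is often called a swerve, or four-wheel independent-steering, drive. As a rough illustration (the wheel positions and code below are hypothetical, not Ottonomy’s controller), the inverse kinematics that turn a commanded body velocity into per-wheel speeds and steering angles look like this:

```python
import math

# Hypothetical wheel positions (x, y) in meters, relative to the robot's center.
WHEELS = {
    "front_left":  ( 0.3,  0.25),
    "front_right": ( 0.3, -0.25),
    "rear_left":   (-0.3,  0.25),
    "rear_right":  (-0.3, -0.25),
}

def swerve_inverse_kinematics(vx, vy, omega):
    """Map a body velocity command to per-wheel (speed, steering angle).

    vx, vy: desired linear velocity in m/s (x forward, y left)
    omega:  desired yaw rate in rad/s (counterclockwise positive)
    """
    commands = {}
    for name, (x, y) in WHEELS.items():
        # Contact-point velocity = body velocity plus the rotational term omega x r.
        wx = vx - omega * y
        wy = vy + omega * x
        commands[name] = (math.hypot(wx, wy), math.atan2(wy, wx))
    return commands

# Pure lateral motion: every wheel steers to 90 degrees and rolls at 0.5 m/s.
print(swerve_inverse_kinematics(vx=0.0, vy=0.5, omega=0.0))

# Zero-radius turn: each wheel points tangentially and the robot spins in place.
print(swerve_inverse_kinematics(vx=0.0, vy=0.0, omega=1.0))
```

The sideways (vy) term is exactly what a car-like Ackermann layout cannot produce, which is why four steered wheels make crabbing through a crowded gate area possible at all.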

The wheelchair technology also allows the Ottobot to maneuver outside an airport setting. The wheels can carry the robot over sidewalk curbs and other obstacles.

“It’s rewarding seeing staff members at our pilot locations so happy about having the robot helping them do their tasks.”

Ottobot is 1.5 meters tall—enough to make it visible. It can adjust its position and height so that its cargo can be reached by children, the elderly, and people with disabilities, Vijay says.

The robot’s compartments can hold products of different sizes, and they are large enough to allow it to make multiple deliveries in a single run.

To place orders, customers scan a QR code at the entrance of a business or at their gate to access Crave, a food ordering and delivery mobile app. After placing their order, customers provide their location; in an airport, that would be the gate number. Customers are then sent a QR code that matches them to their order.

A store or restaurant employee loads the ordered items into Ottobot. The robot’s location and estimated arrival time are updated continuously in the app.

Delivery time and pricing vary by location, but retail orders can typically be delivered in as little as 10 minutes, while restaurant orders generally take 20 to 25 minutes, Vijay says.

Once the robot reaches its final destination, it sends an alert to the customer’s phone. The Ottobot then scans the person’s QR code, which unlocks the compartment.
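Functionally, the QR code acts as a shared token between the ordering app and the robot: the app issues a code tied to the order, and the robot releases the matching compartment only when it scans that same code at the destination. Here is a minimal, hypothetical sketch of that handshake; the class names and fields are illustrative and are not Ottonomy’s or Crave’s actual API.

```python
from __future__ import annotations

import secrets
from dataclasses import dataclass, field

@dataclass
class Order:
    order_id: str
    destination: str                 # e.g., a gate number
    compartment: int
    token: str = field(default_factory=lambda: secrets.token_urlsafe(16))

class DeliveryRobot:
    def __init__(self):
        self.active_orders: dict[str, Order] = {}

    def load(self, order: Order) -> None:
        """Record the order token when staff load its items into a compartment."""
        self.active_orders[order.token] = order

    def arrive(self, order: Order) -> str:
        """Alert the customer; the QR code they were sent encodes the same token."""
        return f"Order {order.order_id} has arrived at {order.destination}."

    def scan_qr(self, scanned_token: str) -> int | None:
        """Unlock the matching compartment only if the scanned token is known."""
        order = self.active_orders.pop(scanned_token, None)
        return order.compartment if order else None

# Example: one order delivered to gate B12.
robot = DeliveryRobot()
order = Order(order_id="A-1042", destination="Gate B12", compartment=2)
robot.load(order)
print(robot.arrive(order))
print(robot.scan_qr(order.token))    # -> 2: the compartment unlocks
print(robot.scan_qr("wrong-code"))   # -> None: everything stays locked
```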

Pilot programs are being run with Rome Airport and Posten, a Norwegian postal and logistics group.

Ottonomy says it expects Ottobot to be used at airports, college campuses, restaurants, and retailers next year in Europe and North America.

Why IEEE membership is vital

Being an IEEE member has given Vijay the opportunity to interact with other practicing engineers, he says. He attends conferences frequently and participates in online events.

“When my team and I were facing difficulties during the development of the Ottonomy robot,” he says, “I was able to reach out to the IEEE members I’m connected with for help.”

Access to IEEE publications such as IEEE Robotics and Automation Magazine, IEEE Robotics and Automation Letters, and IEEE Transactions on Automation Science and Engineering has been vital to his success, he says. His team referred to the journals throughout the Ottobot’s development and cited them in its technical papers and patent applications.

“Being an IEEE member, for me, is a no-brainer,” Vijay says.
