Feed aggregator



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

When creating robots, it can be challenging to achieve the right combination of qualities, which sometimes contradict one another. For example, it’s difficult to make a robot that is both flexible and strong—but not impossible.

In a recent study, researchers created a robot that achieves a high degree of flexibility while still maintaining high tension within its “muscles,” giving it enough torsional motion to accomplish difficult tasks. In an experiment, the robot was able to remove a cap from a bottle while producing a torsional motion 2.5 times greater than that of the next leading robot of its type. The results were published January 13 in IEEE Robotics and Automation Letters.

Video: Soft Tensegrity Robot Arm with Twist Manipulation. The Suzumori Endo Lab at Tokyo Tech has developed a soft tensegrity arm with twist manipulation. Project members: Ryota Kobayashi, ...

Tensegrity robots are made of networks of rigid frames and soft cables, which enable them to change their shape by adjusting their internal tension.

“Tensegrity structures are intriguing due to their unique characteristics—lightweight, flexible, and durable,” explains Ryota Kobayashi, a Master’s student at the Tokyo Institute of Technology, who was involved in the study. “These robots could operate in challenging unknown environments, such as caves or space, with more sophisticated and effective behavior.”

Tensegrity robots can have a foundational structure with varying numbers of rigid structures, or “bars,” ranging from two to twelve or sometimes even more—but as a general rule of thumb, robots with more bars are more complex and difficult to design.

In their study, Kobayashi’s team created a tensegrity robot built from six-bar tensegrity modules. To ensure the robot achieves large torsion, the team used a virtual map of triangles, placing the robot’s artificial muscles so that they connect the triangles’ vertices. When the muscles contract, they pull the vertices closer together, twisting the structure.
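
To get a feel for why muscles strung between offset triangle vertices convert a small contraction into a large twist, here is a minimal geometric sketch. It assumes a single diagonal cable between two rigid triangular plates with made-up dimensions; it illustrates the principle only and is not the authors’ published model, whose geometry and cable network differ.

```python
import numpy as np
from scipy.optimize import brentq

# Toy model (illustrative assumptions, not the paper's design): two rigid
# triangular plates of circumradius r sit a height h apart. A "muscle" cable
# runs from a vertex of the top plate to the neighboring vertex of the bottom
# plate, so at rest it spans a 120-degree azimuthal offset. Because the plates
# are rigid, shortening the cable forces the top plate to twist.
r, h = 0.05, 0.10  # assumed dimensions in meters

def cable_length(twist_deg):
    """Length of the diagonal cable when the top plate is twisted by twist_deg."""
    offset = np.radians(120.0 - twist_deg)   # remaining azimuthal offset
    chord = 2 * r * np.sin(offset / 2)       # horizontal vertex-to-vertex distance
    return np.hypot(h, chord)

rest = cable_length(0.0)
target = 0.8 * rest                          # a 20 percent cable contraction

# Twist angle at which the cable has shortened by 20 percent.
twist = brentq(lambda t: cable_length(t) - target, 0.0, 120.0)
print(f"20% contraction -> about {twist:.0f} degrees of twist (for these dimensions)")
```

With these made-up proportions the sketch yields a twist of roughly 80 degrees; the 50 degrees reported for the actual robot depends on its real geometry and on how the rest of the cable network resists the motion.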

Relying on this technique, the robot achieved a large torsional motion of 50 degrees in both directions using only a 20% contraction of the artificial muscle. Kobayashi says his team was surprised at the efficiency of the system—small contractions of the artificial muscles produced large contractions and torsional deformations of the overall structure.

“Most six-bar tensegrity robots only roll with slight deformations of the structure, resulting in limited movements,” says Dr. Hiroyuki Nabae, an assistant professor at the Tokyo Institute of Technology who was also involved in the study. Notably, the authors report that their six-bar robot produces a torsional motion 2.5 times greater than that of any other six-bar tensegrity robot they could find in the literature.

Next, the research team attached rubber fingers to the robot to help it grip objects and tested its ability to complete tasks. In one experiment, the robot arm lowered onto a Coca-Cola bottle, gripped the cap, twisted, raised the arm, and then repeated the grip-and-twist motion once more to remove the cap, all in a matter of seconds.

The researchers are considering ways to build upon this technology, for example by increasing the robot’s ability to bend in different directions and incorporating tech that allows the robot to recognize new shapes in its environment. This latter advancement could help the robot adapt more to novel environments and tasks as needed.



Creating burrows through natural soils and sediments is a problem that evolution has solved numerous times, yet burrowing locomotion is challenging for biomimetic robots. As with every type of locomotion, forward thrust must overcome resistance forces. In burrowing, these forces depend on the sediment’s mechanical properties, which can vary with grain size, packing density, water saturation, organic matter, and depth. The burrower typically cannot change these environmental properties, but it can employ common strategies to move through a range of sediments. Here we propose four challenges for burrowers to solve. First, the burrower has to create space in a solid substrate, overcoming resistance by, for example, excavation, fracture, compression, or fluidization. Second, the burrower needs to locomote into the confined space. A compliant body helps it fit into the possibly irregular space, but reaching the new space requires non-rigid kinematics such as longitudinal extension through peristalsis, unbending, or eversion. Third, to generate the thrust required to overcome resistance, the burrower needs to anchor within the burrow. Anchoring can be achieved through anisotropic friction or radial expansion, or both. Fourth, the burrower must sense and navigate to adapt the burrow shape to avoid or access different parts of the environment. Our hope is that by breaking the complexity of burrowing into these component challenges, engineers will be better able to learn from biology, since animal performance tends to exceed that of their robotic counterparts. Because body size strongly affects space creation, scaling may be a limiting factor for burrowing robots, which are typically built at larger scales. Small robots are becoming increasingly feasible, and larger robots with non-biologically-inspired anteriors (or that traverse pre-existing tunnels) can benefit from a deeper understanding of the breadth of biological solutions documented in the current literature and still to be explored by continued research.



What could you do with an extra limb? Consider a surgeon performing a delicate operation, one that needs her expertise and steady hands—all three of them. As her two biological hands manipulate surgical instruments, a third robotic limb that’s attached to her torso plays a supporting role. Or picture a construction worker who is thankful for his extra robotic hand as it braces the heavy beam he’s fastening into place with his other two hands. Imagine wearing an exoskeleton that would let you handle multiple objects simultaneously, like Spider-Man’s nemesis Doctor Octopus. Or contemplate the out-there music a composer could write for a pianist who has 12 fingers to spread across the keyboard.

Such scenarios may seem like science fiction, but recent progress in robotics and neuroscience makes extra robotic limbs conceivable with today’s technology. Our research groups at Imperial College London and the University of Freiburg, in Germany, together with partners in the European project NIMA, are now working to figure out whether such augmentation can be realized in practice to extend human abilities. The main questions we’re tackling involve both neuroscience and neurotechnology: Is the human brain capable of controlling additional body parts as effectively as it controls biological parts? And if so, what neural signals can be used for this control?

We think that extra robotic limbs could be a new form of human augmentation, improving people’s abilities on tasks they can already perform as well as expanding their ability to do things they simply cannot do with their natural human bodies. If humans could easily add and control a third arm, or a third leg, or a few more fingers, they would likely use them in tasks and performances that went beyond the scenarios mentioned here, discovering new behaviors that we can’t yet even imagine.

Levels of human augmentation

Robotic limbs have come a long way in recent decades, and some are already used by people to enhance their abilities. Most are operated via a joystick or other hand controls. For example, that’s how workers on manufacturing lines wield mechanical limbs that hold and manipulate components of a product. Similarly, surgeons who perform robotic surgery sit at a console across the room from the patient. While the surgical robot may have four arms tipped with different tools, the surgeon’s hands can control only two of them at a time. Could we give these surgeons the ability to control four tools simultaneously?

Robotic limbs are also used by people who have amputations or paralysis. That includes people in powered wheelchairs controlling a robotic arm with the chair’s joystick and those who are missing limbs controlling a prosthetic by the actions of their remaining muscles. But a truly mind-controlled prosthesis is a rarity.

If humans could easily add and control a third arm, they would likely use it in new behaviors that we can’t yet even imagine.

The pioneers in brain-controlled prosthetics are people with tetraplegia, who are often paralyzed from the neck down. Some of these people have boldly volunteered for clinical trials of brain implants that enable them to control a robotic limb by thought alone, issuing mental commands that cause a robot arm to lift a drink to their lips or help with other tasks of daily life. These systems fall under the category of brain-machine interfaces (BMI). Other volunteers have used BMI technologies to control computer cursors, enabling them to type out messages, browse the Internet, and more. But most of these BMI systems require brain surgery to insert the neural implant and include hardware that protrudes from the skull, making them suitable only for use in the lab.

Augmentation of the human body can be thought of as having three levels. The first level increases an existing characteristic, in the way that, say, a powered exoskeleton can give the wearer super strength. The second level gives a person a new degree of freedom, such as the ability to move a third arm or a sixth finger, but at a cost—if the extra appendage is controlled by a foot pedal, for example, the user sacrifices normal mobility of the foot to operate the control system. The third level of augmentation, and the least mature technologically, gives a user an extra degree of freedom without taking mobility away from any other body part. Such a system would allow people to use their bodies normally by harnessing some unused neural signals to control the robotic limb. That’s the level that we’re exploring in our research.

Deciphering electrical signals from muscles

Third-level human augmentation can be achieved with invasive BMI implants, but for everyday use, we need a noninvasive way to pick up brain commands from outside the skull. For many research groups, that means relying on tried-and-true electroencephalography (EEG) technology, which uses scalp electrodes to pick up brain signals. Our groups are working on that approach, but we are also exploring another method: using electromyography (EMG) signals produced by muscles. We’ve spent more than a decade investigating how EMG electrodes on the skin’s surface can detect electrical signals from the muscles that we can then decode to reveal the commands sent by spinal neurons.

Electrical signals are the language of the nervous system. Throughout the brain and the peripheral nerves, a neuron “fires” when a certain voltage—some tens of millivolts—builds up within the cell and causes an action potential to travel down its axon, releasing neurotransmitters at junctions, or synapses, with other neurons, and potentially triggering those neurons to fire in turn. When such electrical pulses are generated by a motor neuron in the spinal cord, they travel along an axon that reaches all the way to the target muscle, where they cross special synapses to individual muscle fibers and cause them to contract. We can record these electrical signals, which encode the user’s intentions, and use them for a variety of control purposes.

How the Neural Signals Are Decoded

A training module [orange] takes an initial batch of EMG signals read by the electrode array [left], determines how to extract signals of individual neurons, and summarizes the process mathematically as a separation matrix and other parameters. With these tools, the real-time decoding module [green] can efficiently extract individual neurons’ sequences of spikes, or “spike trains” [right], from an ongoing stream of EMG signals. Chris Philpot

Deciphering the individual neural signals based on what can be read by surface EMG, however, is not a simple task. A typical muscle receives signals from hundreds of spinal neurons. Moreover, each axon branches at the muscle and may connect with a hundred or more individual muscle fibers distributed throughout the muscle. A surface EMG electrode picks up a sampling of this cacophony of pulses.

A breakthrough in noninvasive neural interfaces came with the discovery in 2010 that the signals picked up by high-density EMG, in which tens to hundreds of electrodes are fastened to the skin, can be disentangled, providing information about the commands sent by individual motor neurons in the spine. Such information had previously been obtained only with invasive electrodes in muscles or nerves. Our high-density surface electrodes provide good sampling over multiple locations, enabling us to identify and decode the activity of a relatively large proportion of the spinal motor neurons involved in a task. And we can now do it in real time, which suggests that we can develop noninvasive BMI systems based on signals from the spinal cord.

A typical muscle receives signals from hundreds of spinal neurons.

The current version of our system consists of two parts: a training module and a real-time decoding module. To begin, with the EMG electrode grid attached to their skin, the user performs gentle muscle contractions, and we feed the recorded EMG signals into the training module. This module performs the difficult task of identifying the individual motor neuron pulses (also called spikes) that make up the EMG signals. The module analyzes how the EMG signals and the inferred neural spikes are related, which it summarizes in a set of parameters that can then be used with a much simpler mathematical prescription to translate the EMG signals into sequences of spikes from individual neurons.

With these parameters in hand, the decoding module can take new EMG signals and extract the individual motor neuron activity in real time. The training module requires a lot of computation and would be too slow to perform real-time control itself, but it usually has to be run only once each time the EMG electrode grid is fixed in place on a user. By contrast, the decoding algorithm is very efficient, with latencies as low as a few milliseconds, which bodes well for possible self-contained wearable BMI systems. We validated the accuracy of our system by comparing its results with signals obtained concurrently by two invasive EMG electrodes inserted into the user’s muscle.
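
As a rough illustration of how a decoding module of this kind can run in real time, the sketch below applies a precomputed separation matrix to a window of multichannel EMG and thresholds the projected sources into candidate spike trains. The matrix shapes, the channel-extension factor, and the peak-picking rule are assumptions made for the sake of a short, runnable example; they are not the authors’ pipeline.

```python
import numpy as np

def extend(emg_window, n_delays=16):
    """Stack each EMG channel with time-shifted copies of itself, the standard
    trick that turns the convolutive mixing problem into an instantaneous one."""
    n_ch, n_samp = emg_window.shape
    rows = [emg_window[:, d:n_samp - n_delays + 1 + d] for d in range(n_delays)]
    return np.vstack(rows)  # shape: (n_ch * n_delays, n_samp - n_delays + 1)

def decode_spike_trains(emg_window, separation_matrix, rel_threshold=0.5):
    """Project extended EMG onto the separation matrix and flag peaks as
    candidate motor-neuron discharges (one row per identified neuron)."""
    sources = separation_matrix @ extend(emg_window)
    energy = sources ** 2  # emphasize discharges over background activity
    return energy > rel_threshold * energy.max(axis=1, keepdims=True)

# Toy usage: random data stands in for a 64-channel, ~1-second EMG window,
# and a random matrix stands in for the one produced by the training module.
rng = np.random.default_rng(0)
emg = rng.standard_normal((64, 2048))
B = rng.standard_normal((10, 64 * 16))
spikes = decode_spike_trains(emg, B)  # boolean spike trains, neurons x samples
```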

Exploiting extra bandwidth in neural signals

Developing this real-time method to extract signals from spinal motor neurons was the key to our present work on controlling extra robotic limbs. While studying these neural signals, we noticed that they have, essentially, extra bandwidth. The low-frequency part of the signal (below about 7 hertz) is converted into muscular force, but the signal also has components at higher frequencies, such as those in the beta band at 13 to 30 Hz, which are too high to control a muscle and seem to go unused. We don’t know why the spinal neurons send these higher-frequency signals; perhaps the redundancy is a buffer in case of new conditions that require adaptation. Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.

That discovery set us thinking about what could be done with the spare frequencies. In particular, we wondered if we could take that extraneous neural information and use it to control a robotic limb. But we didn’t know if people would be able to voluntarily control this part of the signal separately from the part they used to control their muscles. So we designed an experiment to find out.

Neural Control Demonstrated

A volunteer exploits unused neural bandwidth to direct the motion of a cursor on the screen in front of her. Neural signals pass from her brain, through spinal neurons, to the muscle in her shin, where they are read by an electromyography (EMG) electrode array on her leg and deciphered in real time. These signals include low-frequency components [blue] that control muscle contractions, higher frequencies [beta band, yellow] with no known biological purpose, and noise [gray]. Chris Philpot; Source: M. Bräcklein et al., Journal of Neural Engineering

In our first proof-of-concept experiment, volunteers tried to use their spare neural capacity to control computer cursors. The setup was simple, though the neural mechanism and the algorithms involved were sophisticated. Each volunteer sat in front of a screen, and we placed an EMG system on their leg, with 64 electrodes in a 4-by-10-centimeter patch stuck to their shin over the tibialis anterior muscle, which flexes the foot upward when it contracts. The tibialis has been a workhorse for our experiments: It occupies a large area close to the skin, and its muscle fibers are oriented along the leg, which together make it ideal for decoding the activity of spinal motor neurons that innervate it.

These are some results from the experiment in which low- and high-frequency neural signals, respectively, controlled horizontal and vertical motion of a computer cursor. Colored ellipses (with plus signs at centers) show the target areas. The top three diagrams show the trajectories (each one starting at the lower left) achieved for each target across three trials by one user. At bottom, dots indicate the positions achieved across many trials and users. Colored crosses mark the mean positions and the range of results for each target.Source: M. Bräcklein et al., Journal of Neural Engineering

We asked our volunteers to steadily contract the tibialis, essentially holding it tense, and throughout the experiment we looked at the variations within the extracted neural signals. We separated these signals into the low frequencies that controlled the muscle contraction and spare frequencies at about 20 Hz in the beta band, and we linked these two components respectively to the horizontal and vertical control of a cursor on a computer screen. We asked the volunteers to try to move the cursor around the screen, reaching all parts of the space, but we didn’t, and indeed couldn’t, explain to them how to do that. They had to rely on the visual feedback of the cursor’s position and let their brains figure out how to make it move.
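
As a hedged sketch of that band-splitting idea (not the authors’ code), the example below filters a composite neural-drive signal into the low-frequency component that tracks contraction level and a beta-band component, then maps the two to horizontal and vertical cursor commands. The sampling rate, filter orders, and gain mapping are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2048  # assumed sampling rate, in hertz

# Low-pass below ~7 Hz (the force-related drive) and band-pass 13-30 Hz (beta).
low_sos = butter(4, 7, btype="lowpass", fs=FS, output="sos")
beta_sos = butter(4, [13, 30], btype="bandpass", fs=FS, output="sos")

def cursor_command(neural_drive, gain_x=1.0, gain_y=1.0):
    """Map the low-frequency level to x and the beta-band amplitude to y."""
    low = sosfiltfilt(low_sos, neural_drive)
    beta = sosfiltfilt(beta_sos, neural_drive)
    x = gain_x * low.mean()                   # sustained contraction level
    y = gain_y * np.sqrt(np.mean(beta ** 2))  # beta-band power (RMS)
    return x, y

# Toy usage: a steady contraction plus a small 20 Hz component plus noise.
t = np.arange(FS) / FS
drive = 0.5 + 0.1 * np.sin(2 * np.pi * 20 * t) + 0.05 * np.random.randn(FS)
print(cursor_command(drive))
```

Note that this zero-phase filtering is noncausal; a real-time system would use streaming filters instead. The sketch only shows how the two frequency bands map onto the two cursor axes.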

Remarkably, without knowing exactly what they were doing, these volunteers mastered the task within minutes, zipping the cursor around the screen, albeit shakily. Beginning with one neural command signal—contract the tibialis anterior muscle—they were learning to develop a second signal to control the computer cursor’s vertical motion, independently from the muscle control (which directed the cursor’s horizontal motion). We were surprised and excited by how easily they achieved this big first step toward finding a neural control channel separate from natural motor tasks. But we also saw that the control was not accurate enough for practical use. Our next step will be to see if more accurate signals can be obtained and if people can use them to control a robotic limb while also performing independent natural movements.

We are also interested in understanding more about how the brain performs feats like the cursor control. In a recent study using a variation of the cursor task, we concurrently used EEG to see what was happening in the user’s brain, particularly in the area associated with the voluntary control of movements. We were excited to discover that the changes happening to the extra beta-band neural signals arriving at the muscles were tightly related to similar changes at the brain level. As mentioned, the beta neural signals remain something of a mystery since they play no known role in controlling muscles, and it isn’t even clear where they originate. Our result suggests that our volunteers were learning to modulate brain activity that was sent down to the muscles as beta signals. This important finding is helping us unravel the potential mechanisms behind these beta signals.

Meanwhile, at Imperial College London we have set up a system for testing these new technologies with extra robotic limbs, which we call the MUlti-limb Virtual Environment, or MUVE. Among other capabilities, MUVE will enable users to work with as many as four lightweight wearable robotic arms in scenarios simulated by virtual reality. We plan to make the system open for use by other researchers worldwide.

Next steps in human augmentation

Connecting our control technology to a robotic arm or other external device is a natural next step, and we’re actively pursuing that goal. The real challenge, however, will not be attaching the hardware, but rather identifying multiple sources of control that are accurate enough to perform complex and precise actions with the robotic body parts.

We are also investigating how the technology will affect the neural processes of the people who use it. For example, what will happen after someone has six months of experience using an extra robotic arm? Would the natural plasticity of the brain enable them to adapt and gain a more intuitive kind of control? A person born with six-fingered hands can have fully developed brain regions dedicated to controlling the extra digits, leading to exceptional abilities of manipulation. Could a user of our system develop comparable dexterity over time? We’re also wondering how much cognitive load will be involved in controlling an extra limb. If people can direct such a limb only when they’re focusing intently on it in a lab setting, this technology may not be useful. However, if a user can casually employ an extra hand while doing an everyday task like making a sandwich, then that would mean the technology is suited for routine use.

Whatever the reason, humans evolved a nervous system in which the signal that comes out of the spinal cord has much richer information than is needed to command a muscle.

Other research groups are pursuing the same neuroscience questions. Some are experimenting with control mechanisms involving either scalp-based EEG or neural implants, while others are working on muscle signals. It is early days for movement augmentation, and researchers around the world have just begun to address the most fundamental questions of this emerging field.

Two practical questions stand out: Can we achieve neural control of extra robotic limbs concurrently with natural movement, and can the system work without the user’s exclusive concentration? If the answer to either of these questions is no, we won’t have a practical technology, but we’ll still have an interesting new tool for research into the neuroscience of motor control. If the answer to both questions is yes, we may be ready to enter a new era of human augmentation. For now, our (biological) fingers are crossed.



Apptronik, a Texas-based robotics company with its roots in the Human Centered Robotics Lab at the University of Texas at Austin, has spent the last few years working towards a practical, general purpose humanoid robot. By designing their robot (called Apollo) completely from the ground up, including electronics and actuators, Apptronik is hoping that they’ll be able to deliver something affordable, reliable, and broadly useful. But at the moment, the most successful robots are not generalized systems—they’re uni-taskers, robots that can do one specific task very well but more or less nothing else. A general purpose robot, especially one in a human form factor, would have enormous potential. But the challenge is enormous, too.

So why does Apptronik believe that they have the answer to general purpose humanoid robots with Apollo? To find out, we spoke with Apptronik’s founders, CEO Jeff Cardenas and CTO Nick Paine.

IEEE Spectrum: Why are you developing a general purpose robot when the most successful robots in the supply chain focus on specific tasks?

Nick Paine: It’s about our level of ambition. A specialized tool is always going to beat a general tool at one task, but if you’re trying to solve ten tasks, or 100 tasks, or 1000 tasks, it’s more logical to put your effort into a single versatile hardware platform with specialized software that solves a myriad of different problems.

How do you know that you’ve reached an inflection point where building a general purpose commercial humanoid is now realistic, when it wasn’t before?

Paine: There are a number of different things. For one, Moore’s Law has slowed down, but computers are evolving in a way that has helped advance the complexity of algorithms that can be deployed on mobile systems. Also, there are new algorithms that have been developed recently that have enabled advancements in legged locomotion, machine vision, and manipulation. And along with algorithmic improvements, there have been sensing improvements. All of this has influenced the ability to design these types of legged systems for unstructured environments.

Jeff Cardenas: I think it’s taken decades for it to be the right time. After many many iterations as a company, we’ve gotten to the point where we’ve said, “Okay, we see all the pieces to where we believe we can build a robust, capable, affordable system that can really go out and do work.” It’s still the beginning, but we’re now at an inflection point where there’s demand from the market, and we can get these out into the world.

The reason that I got into robotics is that I was sick of seeing robots just dancing all the time. I really wanted to make robots that could be useful in the world.
—Nick Paine, CTO Apptronik

Why did you need to develop and test 30 different actuators for Apollo, and how did you know that the 30th actuator was the right one?

Paine: The reason for the variety was that we take a first-principles approach to designing robotic systems. The way you control the system really impacts how you design the system, and that goes all the way down to the actuators. A certain type of actuator is not always the silver bullet: every actuator has its strengths and weaknesses, and we’ve explored that space to understand the limitations of physics to guide us toward the right solutions.

With your focus on making a system that’s affordable, how much are you relying on software to help you minimize hardware costs?

Paine: Some groups have tried masking the deficiencies of cheap, low-quality hardware with software. That’s not at all the approach we’re taking. We are leaning on our experience building these kinds of systems over the years from a first principles approach. Building from the core requirements for this type of system, we’ve found a solution that hits our performance targets while also being far more mass producible compared to anything we’ve seen in this space previously. We’re really excited about the solution that we’ve found.

How much effort are you putting into software at this stage? How will you teach Apollo to do useful things?

Paine: There are some basic applications that we need to solve for Apollo to be fundamentally useful. It needs to be able to walk around, to use its upper body and its arms to interact with the environment. Those are the core capabilities that we’re working on, and once those are at a certain level of maturity, that’s where we can open up the platform for third party application developers to build on top of that.

Cardenas: If you look at Willow Garage with the PR2, they had a similar approach, which was to build a solid hardware platform, create a powerful API, and then let others build applications on it. But then you’re really putting your destiny in the hands of other developers. One of the things that we learned from that is if you want to enable that future, you have to prove that initial utility. So what we’re doing is handling the full stack development on the initial applications, which will be targeting supply chain and logistics.

NASA officials have expressed their interest in Apptronik developing “technology and talent that will sustain us through the Artemis program and looking forward to Mars.”

“In robotics, seeing is believing. You can say whatever you want, but you really have to prove what you can do, and that’s been our focus. We want to show versus tell.”
—Jeff Cardenas, CEO Apptronik

Apptronik plans for the alpha version of Apollo to be ready in March, in time for a sneak peek for a small audience at SXSW. From there, the alpha Apollos will go through pilots as Apptronik collects feedback to develop a beta version that will begin larger deployments. The company expects these programs to lead to a gamma version and full production runs by the end of 2024.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
RSS 2023: 10–14 July 2023, DAEGU, KOREA
ICRA 2023: 29 May–2 June 2023, LONDON
Robotics Summit & Expo: 10–11 May 2023, BOSTON

Enjoy today’s videos!

Sometimes, watching a robot almost but not quite fail is way cooler than watching it succeed.

[ Boston Dynamics ]

Simulation-based reinforcement learning approaches are leading the next innovations in legged robot control. However, the resulting control policies are still not applicable on soft and deformable terrains, especially at high speed. To this end, we introduce a versatile and computationally efficient granular media model for reinforcement learning. We applied our techniques to the Raibo robot, a dynamic quadrupedal robot developed in-house. The trained networks demonstrated high-speed locomotion capabilities on deformable terrains.

[ Kaist ]

A lonely badminton player’s best friend.

[ YouTube ]

Come along for the (autonomous) ride with Yorai Shaoul, and see what a day is like for a Ph.D. student at Carnegie Mellon University Robotics Institute.

[ AirLab ]

In this video we showcase a Husky-based robot that’s preparing for its journey across the continent to live with a family of alpacas on Formant’s farm in Denver, Colorado.

[ Clearpath ]

Arm prostheses are becoming smarter, more customized and more versatile. We’re closer to replicating everyday movements than ever before, but we’re not there yet. Can you do better? Join teams to revolutionize prosthetics and build a world without barriers.

[ Cybathlon 2024 ]

RB-VOGUI is the robot developed for this success story and is mainly responsible for the navigation and collection of high quality data, which is transferred in real time to the relevant personnel. After the implementation of the fleet of autonomous mobile robots, only one operator is needed to monitor the fleet from a control centre.

[ Robotnik ]

Bagging groceries isn’t only a physical task: knowing how to order the items to prevent damage requires human-like intelligence. Also … bin packing.

[ Sanctuary AI ]

Seems like lidar is everywhere nowadays, but it started at NASA back in the 1980s.

[ NASA ]

This GRASP on Robotics talk is by Frank Dellaert at Georgia Tech, on “Factor Graphs for Perception and Action.”

Factor graphs have been very successful in providing a lingua franca in which to phrase robotics perception and navigation problems. In this talk I will revisit some of those successes, also discussed in depth in a recent review article. However, I will focus on our more recent work in the talk, centered on using factor graphs for action. I will discuss our efforts in motion planning, trajectory optimization, optimal control, and model-predictive control, highlighting SCATE, our recent work on collision avoidance for autonomous spacecraft.

[ UPenn ]



There’s a handful of robotics companies currently working on what could be called general-purpose humanoid robots. That is, human-size, human-shaped robots with legs for mobility and arms for manipulation that can (or, may one day be able to) perform useful tasks in environments designed primarily for humans. The value proposition is obvious—drop-in replacement of humans for dull, dirty, or dangerous tasks. This sounds a little ominous, but the fact is that people don’t want to be doing the jobs that these robots are intended to do in the short term, and there just aren’t enough people to do these jobs as it is.

We tend to look at claims of commercializable general-purpose humanoid robots with some skepticism, because humanoids are really, really hard. They’re still really hard in a research context, which is usually where things have to get easier before anyone starts thinking about commercialization. There are certainly companies out there doing some amazing work toward practical legged systems, but at this point, “practical” is more about not falling over than it is about performance or cost effectiveness. The overall approach toward solving humanoids in this way tends to be to build something complex and expensive that does what you want, with the goal of cost reduction over time to get it to a point where it’s affordable enough to be a practical solution to a real problem.

Apptronik, based in Austin, Texas, is the latest company to attempt to figure out how to make a practical general-purpose robot. Its approach is to focus on things like cost and reliability from the start, developing (for example) its own actuators from scratch in a way that it can be sure will be cost effective and supply-chain friendly. Apptronik’s goal is to develop a platform that costs well under US $100,000, of which it hopes to be able to deliver a million units by 2030; the plan is to demonstrate a prototype early this year. Based on what we’ve seen of commercial humanoid robots recently, this seems like a huge challenge. And in part two of this story (to be posted tomorrow), we will be talking in depth to Apptronik’s cofounders to learn more about how they’re going to make general-purpose humanoids happen.

First, though, some company history. Apptronik spun out from the Human Centered Robotics Lab at the University of Texas at Austin in 2016, but the company traces its robotics history back a little farther, to 2015’s DARPA Robotics Challenge. Apptronik’s CTO and cofounder, Nick Paine, was on the NASA-JSC Valkyrie DRC team, and Apptronik’s first contract was to work on next-gen actuation and controls for NASA. Since then, the company has been working on robotics projects for a variety of large companies. In particular, Apptronik developed Astra, a humanoid upper body for dexterous bimanual manipulation that’s currently being tested for supply-chain use.

But Apptronik has by no means abandoned its NASA roots. In 2019, NASA had plans for what was essentially going to be a Valkyrie 2, which was to be a ground-up redesign of the Valkyrie platform. As with many of the coolest NASA projects, the potential new humanoid didn’t survive budget prioritization for very long, but even at the time it wasn’t clear to us why NASA wanted to build its own humanoid rather than asking someone else to build one for it considering how much progress we’ve seen with humanoid robots over the last decade. Ultimately, NASA decided to move forward with more of a partnership model, which is where Apptronik fits in—a partnership between Apptronik and NASA will help accelerate commercialization of Apollo.

“We recognize that Apptronik is building a production robot that’s designed for terrestrial use,” says NASA’s Shaun Azimi, who leads the Dexterous Robotics Team at NASA’s Johnson Space Center. “From NASA’s perspective, what we’re aiming to do with this partnership is to encourage the development of technology and talent that will sustain us through the Artemis program and looking forward to Mars.”

Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile system. It is imagining an “iPhone of robots.”

“Apollo is the robot that we always wanted to build,” says Jeff Cardenas, Apptronik cofounder and CEO. This new humanoid is the culmination of an astonishing amount of R&D, all the way down to the actuator level. “As a company, we’ve built more than 30 unique electric actuators,” Cardenas explains. “You name it, we’ve tried it. Liquid cooling, cable driven, series elastic, parallel elastic, quasi-direct drive…. And we’ve now honed our approach and are applying it to commercial humanoids.”

Apptronik’s emphasis on commercialization gives it a much different perspective on robotics development than you get when focusing on pure research the way that NASA does. To build a commercial product rather than a handful of totally cool but extremely complex bespoke humanoids, you need to consider things like minimizing part count, maximizing maintainability and robustness, and keeping the overall cost manageable. “Our starting point was figuring out what the minimum viable humanoid robot looked like,” explains Apptronik CTO Nick Paine. “Iteration is then necessary to add complexity as needed to solve particular problems.”

This robot is called Astra. It’s only an upper body, and it’s Apptronik’s first product, but (not having any legs) it’s designed for manipulation rather than dynamic locomotion. Astra is force controlled, with series-elastic torque-controlled actuators, giving it the compliance necessary to work in dynamic environments (and particularly around humans). “Astra is pretty unique,” says Paine. “What we were trying to do with the system is to approach and achieve human-level capability in terms of manipulation workspace and payload. This robot taught us a lot about manipulation and actually doing useful work in the world, so that’s why it’s where we wanted to start.”

While Astra is currently out in the world doing pilot projects with clients (mostly in the logistics space), internally Apptronik has moved on to robots with legs. The following video, which Apptronik is sharing publicly for the first time, shows a robot that the company is calling its Quick Development Humanoid, or QDH:


QDH builds on Astra by adding legs, along with a few extra degrees of freedom in the upper body to help with mobility and balance while simplifying the upper body for more basic manipulation capability. It uses only three different types of actuators, and everything (from structure to actuators to electronics to software) has been designed and built by Apptronik. “With QDH, we’re approaching minimum viable product from a usefulness standpoint,” says Paine, “and this is really what’s driving our development, both in software and hardware.”

“What people have done in humanoid robotics is to basically take the same sort of architectures that have been used in industrial robotics and apply those to building what is in essence a multi-degree-of-freedom industrial robot,” adds Cardenas. “We’re thinking of new ways to build these systems, leveraging mass manufacturing techniques to allow us to develop a high-degree-of-freedom robot that’s as affordable as many industrial robots that are out there today.”

Cardenas explains that a major driver for the cost of humanoid robots is the number of different parts, the precision machining of some specific parts, and the resulting time and effort it then takes to put these robots together. As an internal-controls test bed, QDH has helped Apptronik to explore how it can switch to less complex parts and lower the total part count. The plan for Apollo is to not use any high-precision or proprietary components at all, which mitigates many supply-chain issues and will help Apptronik reach its target price point for the robot.

Apollo will be a completely new robot, based around the lessons Apptronik has learned from QDH. It’ll be average human size: about 1.75 meters tall, weighing around 75 kilograms, with the ability to lift 25 kg. It’s designed to operate untethered, either indoors or outdoors. Broadly, Apptronik is positioning Apollo as a high-performance, easy-to-use, and versatile robot that can do a bunch of different things. It is imagining an “iPhone of robots,” where apps can be created for the robot to perform specific tasks. To extend the iPhone metaphor, Apptronik itself will make sure that Apollo can do all of the basics (such as locomotion and manipulation) so that it has fundamental value, but the company sees versatility as the way to get to large-scale deployments and the cost savings that come with them.

“I see the Apollo robot as a spiritual successor to Valkyrie. It’s not Valkyrie 2—Apollo is its own platform, but we’re working with Apptronik to adapt it as much as we can to space use cases.”
—Shaun Azimi, NASA Johnson Space Center

The challenge with this app approach is that there’s a critical mass that’s required to get it to work—after all, the primary motivation to develop an iPhone app is that there are a bajillion iPhones out there already. Apptronik is hoping that there are enough basic manipulation tasks in the supply-chain space that Apollo can leverage to scale to that critical-mass point. “This is a huge opportunity where the tasks that you need a robot to do are pretty straightforward,” Cardenas tells us. “Picking single items, moving things with two hands, and other manipulation tasks where industrial automation only gets you to a certain point. These companies have a huge labor challenge—they’re missing labor across every part of their business.”

While Apptronik’s goal is for Apollo to be autonomous, in the short to medium term, its approach will be hybrid autonomy, with a human overseeing first a few and eventually a lot of Apollos with the ability to step in and provide direct guidance through teleoperation when necessary. “That’s really where there’s a lot of business opportunity,” says Paine. Cardenas agrees. “I came into this thinking that we’d need to make Rosie the robot before we could have a successful commercial product. But I think the bar is much lower than that. There are fairly simple tasks that we can enter the market with, and then as we mature our controls and software, we can graduate to more complicated tasks.”

Apptronik is still keeping details about Apollo’s design under wraps, for now. We were shown renderings of the robot, but Apptronik is understandably hesitant to make those public, since the design of the robot may change. It does have a firm date for unveiling Apollo for the first time: SXSW, which takes place in Austin in March.



This paper introduces an optimal algorithm for solving the discrete grid-based coverage path planning (CPP) problem. This problem consists in finding a path that covers a given region completely. First, we propose a CPP-solving baseline algorithm based on the iterative deepening depth-first search (ID-DFS) approach. Then, we introduce two branch-and-bound strategies (Loop detection and an Admissible heuristic function) to improve the results of our baseline algorithm. We evaluate the performance of our planner using six types of benchmark grids considered in this study: Coast-like, Random links, Random walk, Simple-shapes, Labyrinth and Wide-Labyrinth grids. We are the first to consider these types of grids in the context of CPP. All of them find their practical applications in real-world CPP problems from a variety of fields. The obtained results suggest that the proposed branch-and-bound algorithm solves the problem optimally (i.e., the exact solution is found in each case) orders of magnitude faster than an exhaustive search CPP planner. To the best of our knowledge, no general CPP-solving exact algorithms, apart from an exhaustive search planner, have been proposed in the literature.
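
To make the approach concrete, below is a minimal, hedged sketch of what an iterative-deepening depth-first coverage search with the admissible prune described above might look like. The grid encoding (0 = free, 1 = blocked), the 4-connected move set, and the function name coverage_path are illustrative assumptions rather than the authors’ implementation, and the paper’s loop-detection strategy is omitted for brevity.

```python
# Sketch: grid coverage via iterative deepening DFS with an admissible prune
# (each move can cover at most one new cell). Not the paper's implementation.

def coverage_path(grid, start, max_limit=10_000):
    """Return a move-optimal path visiting every free cell, or None."""
    free = {(r, c) for r, row in enumerate(grid)
            for c, v in enumerate(row) if v == 0}
    if start not in free:
        return None

    def dfs(pos, covered, path, limit):
        if covered == free:
            return path
        # Admissible prune: at least one move is needed per uncovered cell.
        if (len(path) - 1) + len(free - covered) > limit:
            return None
        r, c = pos
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in free:
                found = dfs(nxt, covered | {nxt}, path + [nxt], limit)
                if found:
                    return found
        return None

    # Iterative deepening: grow the move budget until a coverage path appears.
    limit = len(free) - 1          # lower bound: a Hamiltonian path, if one exists
    while limit <= max_limit:
        path = dfs(start, {start}, [start], limit)
        if path:
            return path
        limit += 1
    return None

# Toy usage: a 2x3 grid with one blocked cell forces a revisit of (0, 0).
print(coverage_path([[0, 0, 0], [0, 1, 0]], (0, 0)))
```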

The use of socially assistive robots in autism therapies has increased in recent years. This novel therapeutic tool allows specialists to track improvement in socially assistive tasks for autistic children, who hypothetically prefer object-based over human interactions. These tools also allow the collection of new information for the early diagnosis of neurodevelopmental disabilities. This work presents the integration of an output-feedback adaptive controller for trajectory tracking and energetic autonomy of a mobile socially assistive robot for autism spectrum disorder under an event-driven control scheme. The proposed implementation integrates facial expression and emotion recognition algorithms to detect the emotions and identities of users (providing robustness, since the algorithm automatically generates missing input parameters, allowing it to complete the recognition) and trigger a set of adequate trajectories. The algorithmic implementation for the proposed socially assistive robot is presented and implemented in the Linux-based Robot Operating System. The optimization of energy consumption is considered the main contribution of this work, as it will allow therapists to extend and adapt sessions with autistic children. An experiment validating the energy optimization of the proposed event-driven control scheme is presented.

Introduction: Backchannels, i.e., short interjections by an interlocutor to indicate attention, understanding or agreement regarding utterances by another conversation participant, are fundamental in human-human interaction. Lack of backchannels or if they have unexpected timing or formulation may influence the conversation negatively, as misinterpretations regarding attention, understanding or agreement may occur. However, several studies over the years have shown that there may be cultural differences in how backchannels are provided and perceived and that these differences may affect intercultural conversations. Culturally aware robots must hence be endowed with the capability to detect and adapt to the way these conversational markers are used across different cultures. Traditionally, culture has been defined in terms of nationality, but this is more and more considered to be a stereotypic simplification. We therefore investigate several socio-cultural factors, such as the participants’ gender, age, first language, extroversion and familiarity with robots, that may be relevant for the perception of backchannels.

Methods: We first cover existing research on cultural influence on backchannel formulation and perception in human-human interaction and on backchannel implementation in Human-Robot Interaction. We then present an experiment on second language spoken practice, in which we investigate how backchannels from the social robot Furhat influence interaction (investigated through speaking time ratios and ethnomethodology and multimodal conversation analysis) and impression of the robot (measured by post-session ratings). The experiment, made in a triad word game setting, is focused on if activity-adaptive robot backchannels may redistribute the participants’ speaking time ratio, and/or if the participants’ assessment of the robot is influenced by the backchannel strategy. The goal is to explore how robot backchannels should be adapted to different language learners to encourage their participation while being perceived as socio-culturally appropriate.

Results: We find that a strategy that displays more backchannels towards a less active speaker may substantially decrease the difference in speaking time between the two speakers, that different socio-cultural groups respond differently to the robot’s backchannel strategy and that they also perceive the robot differently after the session.

Discussion: We conclude that the robot may need different backchanneling strategies towards speakers from different socio-cultural groups in order to encourage them to speak and have a positive perception of the robot.

The Vulcano challenge is a new and innovative robotic challenge for legged robots in a physical and simulated scenario of a volcanic eruption. In this scenario, robots must climb a volcano’s escarpment and collect data from areas with high temperatures and toxic gases. This paper presents the main idea behind this challenge, with a detailed description of the simulated and physical scenario of the volcano ramp, the rules proposed for the competition, and the conception of a robot prototype, Vulcano, used in the competition. Finally, it discusses the performance of teams invited to participate in the challenge in the context of the Azorean Robotics Open, Azoresbot 2022. This first edition of the challenge provided insights into which aspects participants found exciting and positive, and which they found less so.

This paper presents the singularity analysis of 3-DOF planar parallel continuum robots (PCR) with three identical legs. Each of the legs contains two passive conventional rigid 1-DOF joints and one actuated planar continuum link, which bends with a constant curvature. All possible PCR architectures featuring such legs are enumerated and the kinematic velocity equations are provided for each of them. Afterwards, a singularity analysis is conducted based on the obtained Jacobian matrices, providing a geometrical understanding of singularity occurrences. It is shown that while loci and occurrences of type II singularities are mostly analogous to those of conventional parallel kinematic mechanisms (PKM), type I singularity occurrences for the PCR studied in this work are quite different from conventional PKM and less geometrically intuitive. The study provided in this paper can promote further investigations of planar parallel continuum robots, such as structural design and control.
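
As a rough illustration of the determinant test that underlies this kind of Jacobian-based analysis (not the paper’s actual PCR model), the sketch below classifies a configuration from the velocity equation A·ẋ = B·q̇: a type I singularity when B loses rank and a type II singularity when A does. The toy matrices are invented purely for demonstration.

```python
import numpy as np

def singularity_type(A, B, tol=1e-9):
    """Name which singularity conditions hold for A @ x_dot = B @ q_dot."""
    kinds = set()
    if abs(np.linalg.det(B)) < tol:
        kinds.add("type I")    # inverse-kinematic (serial-type) singularity
    if abs(np.linalg.det(A)) < tol:
        kinds.add("type II")   # direct-kinematic (parallel-type) singularity
    return kinds

# Toy 2x2 example: B loses rank when the parameter theta hits 0 or pi.
for theta in (0.0, np.pi / 4, np.pi / 2):
    A = np.array([[1.0, 0.2], [0.0, 1.0]])
    B = np.array([[np.sin(theta), 0.0], [0.0, 1.0]])
    print(f"theta={theta:.2f}: {singularity_type(A, B) or 'regular'}")
```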

Positioning and navigation are relevant topics in robotics, due to their many applications in real-world scenarios, ranging from autonomous driving to harsh-environment exploration. Although localization in outdoor environments is generally achieved with a Global Navigation Satellite System (GNSS) receiver, GNSS-denied environments are typical of many situations, especially indoor settings. Autonomous robots are commonly equipped with multiple sensors, including laser rangefinders, IMUs, and odometers, which can be used for mapping and localization, overcoming the need for GNSS data. In the literature, almost no information can be found on the positioning accuracy and precision of 6-degrees-of-freedom Light Detection and Ranging (LiDAR) localization systems, especially in real-world scenarios. In this paper, we present a short review of state-of-the-art LiDAR localization methods in GNSS-denied environments, highlighting their advantages and disadvantages. Then, we evaluate two state-of-the-art Simultaneous Localization and Mapping (SLAM) systems that can also perform localization, one of which was implemented by us. We benchmark these two algorithms on a manually collected dataset, with the goal of providing insight into their attainable precision in real-world scenarios. In particular, we present two experimental campaigns, one indoor and one outdoor, to measure the precision of these algorithms. After creating a map of each environment with the SLAM part of the systems, we compute a custom localization error over multiple, different trajectories. Results show that the two algorithms are comparable in precision, with similar mean translation and rotation errors of about 0.01 m and 0.6°, respectively. Nevertheless, the system implemented by us has the advantage of being modular, customizable, and able to achieve real-time performance.
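
For context on how such translation and rotation errors are typically computed, here is a minimal sketch of an ATE-style per-pose comparison between an estimated and a reference trajectory. The authors’ “custom localization error” is their own definition; this sketch simply assumes the two trajectories are already time-aligned and expressed in a common frame.

```python
import numpy as np

def pose_errors(est, ref):
    """est, ref: lists of (R, t) with R a 3x3 rotation matrix, t a 3-vector.
    Returns mean translation error (same unit as t) and mean rotation error (degrees)."""
    t_err, r_err = [], []
    for (R_e, t_e), (R_r, t_r) in zip(est, ref):
        t_err.append(np.linalg.norm(t_e - t_r))                    # translation offset
        R_rel = R_r.T @ R_e                                        # relative rotation
        cos_a = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.degrees(np.arccos(cos_a)))                 # rotation angle
    return np.mean(t_err), np.mean(r_err)

# Toy usage: identical trajectories give zero error.
I = np.eye(3)
traj = [(I, np.zeros(3)), (I, np.array([1.0, 0.0, 0.0]))]
print(pose_errors(traj, traj))   # -> (0.0, 0.0)
```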



With Boston Dynamics’ recent(ish) emphasis on making robots that can do things that are commercially useful, it’s always good to be gently reminded that the company is still at the cutting edge of dynamic humanoid robotics. Or in this case, forcefully reminded. In its latest video, Boston Dynamics demonstrates some spectacular new capabilities with Atlas focusing on perception and manipulation, and the Atlas team lead answers some of our questions about how they pulled it off.

One of the highlights here is Atlas’s ability to move and interact dynamically with objects, and especially with objects that have significant mass to them. The 180 while holding the plank is impressive, since Atlas has to account for all that added momentum. Same with the spinning bag toss: As soon as the robot releases the bag in midair, its momentum changes, which it has to compensate for on landing. And shoving that box over has to be done by leaning into it, but carefully, so that Atlas doesn’t topple off the platform after it.

While the physical capabilities that Atlas demonstrates here are impressive (to put it mildly), this demonstration also highlights just how much work remains to be done to teach robots to be useful like this in an autonomous, or even a semi-autonomous, way. For example, environmental modification is something that humans do all the time, but we rely heavily on our knowledge of the world to do it effectively. I’m pretty sure that Atlas doesn’t have the capability to see a nontraversable gap, consider what kind of modification would be required to render the gap traversable, locate the necessary resources (without being told where they are first), and then make the appropriate modification autonomously in the way a human would—the video shows advances in manipulation rather than decision making. This certainly isn’t a criticism of what Boston Dynamics is showing in this video; it’s just to emphasize there is still a lot of work to be done on the world understanding and reasoning side before robots will be able to leverage these impressive physical skills on their own in a productive way.

There’s a lot more going on in this video, and Boston Dynamics has helpfully put together a bit of a behind-the-scenes explainer:

And for a bit more on this, we sent a couple of questions over to Boston Dynamics, and Atlas Team Lead Scott Kuindersma was kind enough to answer them for us.

How much does Atlas know in advance about the objects that it will be manipulating, and how important is this knowledge for real-world manipulation?

Scott Kuindersma: In this video, the robot has a high-level map that includes where we want it to go, what we want it to pick up, and what stunts it should do along the way. This map is not an exact geometric match for the real environment; it is an approximate description containing obstacle templates and annotated actions that is adapted online by the robot’s perception system. The robot has object-relative grasp targets that were computed offline, and the model-predictive controller (MPC) has access to approximate mass properties.

We think that real-world robots will similarly leverage priors about their tasks and environments, but what form these priors take and how much information they provide could vary a lot based on the application. The requirements for a video like this lead naturally to one set of choices—and maybe some of those requirements will align with some early commercial applications—but we’re also building capabilities that allow Atlas to operate at other points on this spectrum.

How often is what you want to do with Atlas constrained by its hardware capabilities? At this point, how much of a difference does improving hardware make, relative to improving software?

Kuindersma: Not frequently. When we occasionally spend time on something like the inverted 540, we are intentionally pushing boundaries and coming at it from a place of playful exploration. Aside from being really fun for us and (hopefully) inspiring to others, these activities nearly always bear enduring fruit and leave us with more capable software for approaching other problems.

The tight integration between our hardware and software groups—and our ability to design, iterate, and learn from each other—is one of the things that makes our team special. This occasionally leads to behavior-enabling hardware upgrades and, less often, major redesigns. But from a software perspective, we continuously feel like we’re just scratching the surface on what we can do with Atlas.

Can you elaborate on the troubleshooting process you used to make sure that Atlas could successfully execute that final trick?

Kuindersma: The controller works by using a model of the robot to predict and optimize its future states. The improvement made in this case was an extension to this model to include the geometric shape of the robot’s limbs and constraints to prevent them from intersecting. In other words, rather than specifically tuning this one behavior to avoid self-collisions, we added more model detail to the controller to allow it to better avoid infeasible configurations. This way, the benefits carry forward to all of Atlas’s behaviors.
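
For readers curious what folding geometric constraints into a predictive optimization can look like in miniature, here is a toy sketch (emphatically not Boston Dynamics’ controller): two 2-D “limbs” are planned over a short horizon toward nearby goals while an inequality constraint keeps them a minimum distance apart, so collision avoidance falls out of the optimization rather than per-behavior tuning. All names, numbers, and the kinematics-only formulation are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

H = 5                                # horizon steps
D_MIN = 0.3                          # required clearance between the two "limbs"
goal_a = np.array([1.0, 0.0])
goal_b = np.array([1.0, 0.1])        # goals that would collide if unconstrained

def unpack(z):
    traj = z.reshape(H, 4)
    return traj[:, :2], traj[:, 2:]  # limb A positions, limb B positions

def cost(z):
    a, b = unpack(z)
    # Track the goals and lightly penalize jerky motion.
    track = np.sum((a - goal_a) ** 2) + np.sum((b - goal_b) ** 2)
    smooth = np.sum(np.diff(a, axis=0) ** 2) + np.sum(np.diff(b, axis=0) ** 2)
    return track + 0.1 * smooth

def clearance(z):
    a, b = unpack(z)
    return np.linalg.norm(a - b, axis=1) - D_MIN   # must be >= 0 at every step

# Feasible initial guess: limb B starts offset so the clearance constraint holds.
z0 = np.zeros((H, 4))
z0[:, 3] = 0.5
sol = minimize(cost, z0.ravel(), method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
a, b = unpack(sol.x)
print("min clearance along plan:", np.linalg.norm(a - b, axis=1).min())
```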

Is the little hop at the end of the 540 part of the planned sequence, or is Atlas able to autonomously use motions like that to recover from dynamic behaviors that don’t end up exactly as expected? How important will this kind of capability be for real-world robots?

Kuindersma: The robot has the ability to autonomously take steps, lean, and/or wave its limbs around to recover balance, which we leverage on pretty much a daily basis in our experimental work. The hop jump after the inverted 540 was part of the behavior sequence in the sense that it was told that it should jump after landing, but where it jumped to and how it landed came from the controller (and generally varied between individual robots and runs).

Our experience with deploying Spot all over the world has reinforced the importance for mobile robots to be able to adjust and recover if they get bumped, slip, fall, or encounter unexpected obstacles. We expect the same will be true for future robots doing work in the real world.

What else can you share with us about what went into making the video?

Kuindersma: A few fun facts:

The core new technologies around MPC and manipulation were developed throughout this year, but the time between our whiteboard sketch for the video and completing filming was six weeks.

The tool bag throw and spin jump with the 2- by 12-inch plank are online generalizations of the same 180 jump behavior that was created two years ago as part of our mobility work. The only differences in the controller inputs are the object model and the desired object motion.

Although the robot has a good understanding of throwing mechanics, the real-world performance was sensitive to the precise timing of the release and whether the bag cloth happened to get caught on the finger during release. These details weren’t well represented by our simulation tools, so we relied primarily on hardware experiments to refine the behavior until it worked every time.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
RSS 2023: 10–14 July 2023, DAEGU, KOREA
ICRA 2023: 29 May–2 June 2023, LONDON
Robotics Summit & Expo: 10–11 May 2023, BOSTON

Enjoy today’s videos!

With the historic Kunming-Montreal Agreement of 18 December 2022, more than 200 countries agreed to halt and reverse biodiversity loss. But becoming nature-positive is an ambitious goal, also held back by the lack of efficient and accurate tools to capture snapshots of global biodiversity. This is a task where robots, in combination with environmental DNA (eDNA) technologies, can make a difference.

Our recent findings show a new way to sample surface eDNA with a drone, which could be helpful in monitoring biodiversity in terrestrial ecosystems. The eDrone can land on branches and collect eDNA from the bark using a sticky surface. The eDrone collected surface eDNA from the bark of seven different trees, and by sequencing the collected eDNA we were able to identify 21 taxa, including insects, mammals, and birds.

[ ETH Zurich ]

Thanks, Stefano!

How can we bring limbed robots into real-world environments to complete challenging tasks? Dr. Dimitrios Kanoulas and the team at UCL Computer Science’s Robot Perception and Learning Lab are exploring how we can use autonomous and semi-autonomous robots to work in environments that humans cannot.

[ RPL UCL ]

Thanks, Dimitrios!

Bidirectional design, four-wheel steering, and a compact length give our robotaxi unique agility and freedom of movement in dense urban environments—or in games of tic-tac-toe. May the best robot win.

Okay, but how did they not end this video with one of the cars drawing a “Z” off to the left side of the middle row?

[ Zoox ]

Thanks, Whitney!

DEEP Robotics wishes y’all happiness and good health in the year of the rabbit!

Binkies!

[ Deep Robotics ]

This work presents a safety-critical locomotion-control framework for quadrupedal robots. Our goal is to enable quadrupedal robots to safely navigate in cluttered environments.

[ Hybrid Robotics ]

At 360.50 kilometers per hour, this is the world speed record for a quadrotor.

[ Quad Star Drones ] via [ Gizmodo ]

When it rains, it pours—and we’re designing the Waymo Driver to handle it. See how shower tests, thermal chambers, and rugged tracks at our closed-course facilities ensure our system can navigate safely, no matter the forecast.

[ Waymo ]

You know what’s easier than picking blueberries? Picking greenberries, which are much less squishy.

[ Sanctuary AI ]

The Official Wrap-Up of ABU ROBOCON 2022 New Delhi, India.

[ ROBOCON ]
