Feed aggregator



Just last month, Oslo, Norway-based 1X (formerly Halodi Robotics) announced a massive $100 million Series B, and clearly they’ve been putting the work in. A new video posted last week shows a [insert collective noun for humanoid robots here] of EVE android-ish mobile manipulators doing a wide variety of tasks leveraging end-to-end neural networks (pixels to actions). And best of all, the video seems to be more or less an honest one: a single take, at (appropriately) 1X speed, and full autonomy. But we still had questions! And 1X has answers.

If, like me, you had some very important questions after watching this video, including whether that plant is actually dead and the fate of the weighted companion cube, you’ll want to read this Q&A with Eric Jang, Vice President of Artificial Intelligence at 1X.

IEEE Spectrum: How many takes did it take to get this take?

Eric Jang: About 10 takes that lasted more than a minute; this was our first time doing a video like this, so it was more about learning how to coordinate the film crew and set up the shoot to look impressive.

Did you train your robots specifically on floppy things and transparent things?

Jang: Nope! We train our neural network to pick up all kinds of objects—rigid, deformable, and transparent. Because we train manipulation end-to-end from pixels, picking up deformables and transparent objects is much easier than with a classical grasping pipeline, where you have to figure out the exact geometry of what you are trying to grasp.

What keeps your robots from doing these tasks faster?

Jang: Our robots learn from demonstrations, so they move at exactly the speed at which the human teleoperators demonstrate the task. If we gathered demonstrations where we moved faster, so would the robots.
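The point Jang makes here can be sketched in a few lines. This is a naive illustration, not 1X's actual method: an imitation policy reproduces the demonstrated trajectory at the demonstrated speed, so re-timing the same waypoints onto a shorter clock is the simplest way a faster demo would translate into a faster robot.

```python
# Hypothetical demonstration: (time in seconds, gripper position) waypoints.
demo = [(0.0, 0.00), (1.0, 0.25), (2.0, 0.50), (3.0, 0.75), (4.0, 1.00)]

def retime(trajectory, speedup):
    # Same spatial path, shorter time base: only the clock changes.
    return [(t / speedup, q) for t, q in trajectory]

fast = retime(demo, 2.0)
print(fast[-1])  # (2.0, 1.0): the 4-second demo now finishes in 2 seconds
```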

How many weighted companion cubes were harmed in the making of this video?

Jang: At 1X, weighted companion cubes do not have rights.

That’s a very cool method for charging, but it seems a lot more complicated than some kind of drive-on interface directly with the base. Why use manipulation instead?

Jang: You’re right that this isn’t the simplest way to charge the robot, but if we are going to succeed at our mission to build generally capable and reliable robots that can manipulate all kinds of objects, our neural nets have to be able to do this task at the very least. Plus, it reduces costs quite a bit and simplifies the system!

What animal is that blue plush supposed to be?

Jang: It’s an obese shark, I think.

How many different robots are in this video?

Jang: 17? And more that are stationary.

How do you tell the robots apart?

Jang: They have little numbers printed on the base.

Is that plant dead?

Jang: Yes, we put it there because no CGI / 3D rendered video would ever go through the trouble of adding a dead plant.

What sort of existential crisis is the robot at the window having?

Jang: It was supposed to be opening and closing the window repeatedly (good for testing statistical significance).

If one of the robots was actually a human in a helmet and a suit holding grippers and standing on a mobile base, would I be able to tell?

Jang: I was super flattered by this comment on the YouTube video:

But if you look at the area where the upper arm tapers at the shoulder, it’s too thin for a human to fit inside while still having such broad shoulders:

Why are your robots so happy all the time? Are you planning to do more complex HRI stuff with their faces?

Jang: Yes, more complex HRI stuff is in the pipeline!

Are your robots able to autonomously collaborate with each other?

Jang: Stay tuned!

Is the skew tetromino the most difficult tetromino for robotic manipulation?

Jang: Good catch! Yes, the green one is the worst of them all because there are many valid ways to pinch it with the gripper and lift it up. In robotic learning, if there are multiple ways to pick something up, it can actually confuse the machine learning model. Kind of like asking a car to turn left and right at the same time to avoid a tree.
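The ambiguity Jang describes can be shown with a minimal sketch (not 1X's pipeline): suppose demonstrators grasp the skew tetromino at two different, equally valid wrist angles. A regressor trained with a mean-squared-error loss converges toward the average of its labels, which here is an angle that matches neither valid grasp.

```python
import statistics
from collections import Counter

demo_angles = [30.0, 30.0, 150.0, 150.0]  # degrees; two valid grasp modes

# Least-squares regression (with no disambiguating input) predicts the mean:
prediction = statistics.mean(demo_angles)
print(prediction)  # 90.0 -- between the two modes, not a valid grasp itself

# One common remedy: predict a distribution over discretized actions,
# so both modes survive instead of being averaged away.
mode_counts = Counter(demo_angles)
best_mode = mode_counts.most_common(1)[0][0]  # a real grasp from the data
```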

Everyone else’s robots are making coffee. Can your robots make coffee?

Jang: Yep! We were planning to throw in some coffee making in this video as an Easter egg, but the coffee machine broke right before the film shoot, and it turns out it’s impossible to get a Keurig K-Slim in Norway via next-day shipping.

1X is currently hiring both AI researchers (imitation learning, reinforcement learning, large-scale training, etc.) and android operators (!), which actually sounds like a super fun and interesting job. More here.






This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

If Disney’s history of storytelling has taught us anything, it’s to never underestimate the power of a great sidekick. Even though sidekicks aren’t the stars of the show, they provide life and energy and move the story along in important ways. It’s hard to imagine Aladdin without the Genie, or Peter Pan without Tinker Bell.

In robotics, however, solo acts proliferate. Even when multiple robots are used, they usually act in parallel. One key reason for this is that most robots are designed in ways that make direct collaboration with other robots difficult. Stiff, strong robots are more repeatable and easier to control, but those designs have very little forgiveness for the imperfections and mismatches that are inherent in coming into contact with another robot.

Having robots work together–especially if they have complementary skill sets–can open up some exciting opportunities, especially in the entertainment robotics space. At Walt Disney Imagineering, our research and development teams have been working on this idea of collaboration between robots, and we were able to show off the result of one such collaboration in Shanghai this week, when a little furry character interrupted the opening moments for the first-ever Zootopia land.

Our newest robotic character, Duke Weaselton, rolled onstage at the Shanghai Disney Resort for the first time last December, pushing a purple kiosk and blasting pop music. As seen in the video below, the audience got a kick out of watching him hop up on top of the kiosk and try to negotiate with the Chairman of Disney Experiences, Josh D’Amaro, for a new job. And of course, some new perks. After a few moments of wheeling and dealing, Duke gets gently escorted offstage by team members Richard Landon and Louis Lambie.

What might not be obvious at first is that the moment you just saw was enabled not by one robot, but by two. Duke Weaselton is the star of the show, but his dynamic motion wouldn’t be possible without the kiosk, which is its own independent, actuated robot. While these two robots are very different, by working together as one system, they’re able to do things that neither could do alone.

The character and the kiosk bring two very different kinds of motion together, and create something more than the sum of their parts in the process. The character is an expressive, bipedal robot with an exaggerated, animated motion style. It looks fantastic, but it’s not optimized for robust, reliable locomotion. The kiosk, meanwhile, is a simple wheeled system that behaves in a highly predictable way. While that’s great for reliability, it means that by itself it’s not likely to surprise you. But when we combine these two robots, we get the best of both worlds. The character robot can bring a zany, unrestrained energy and excitement as it bounces up, over, and alongside the kiosk, while the kiosk itself ensures that both robots reliably get to wherever they are going.

Harout Jarchafjian, Sophie Bowe, Tony Dohi, Bill West, Marcela de los Rios, Bob Michel, and Morgan Pope. Photo: Morgan Pope

The collaboration between the two robots is enabled by designing them to be robust and flexible, with motions that can tolerate a large amount of uncertainty while still delivering a compelling show. This is a direct result of lessons learned from an earlier robot, one that tumbled across the stage at SXSW earlier this year. Our basic insight is that a small, lightweight robot can be surprisingly tough, and that this toughness enables new levels of creative freedom in the design and execution of a show.

This level of robustness also makes collaboration between robots easier. Because the character robot is tough and because there is some flexibility built into its motors and joints, small errors in placement and pose don’t create big problems like they might for a more conventional robot. The character can lean on the motorized kiosk to create the illusion that it is pushing it across the stage. The kiosk then uses a winch to hoist the character onto a platform, where electromagnets help stabilize its feet. Essentially, the kiosk is compensating for the fact that Duke himself can’t climb, and might be a little wobbly without having his feet secured. The overall result is a free-ranging bipedal robot that moves in a way that feels natural and engaging, but that doesn’t require especially complicated controls or highly precise mechanical design. Here’s a behind-the-scenes look at our development of these systems:

Disney Imagineering

To program Duke’s motions, our team uses an animation pipeline that was originally developed for the SXSW demo, where a designer can pose the robot by hand to create new motions. We have since developed an interface which can also take motions from conventional animation software tools. Motions can then be adjusted to adapt to the real physical constraints of the robots, and that information can be sent back to the animation tool. As animations are developed, it’s critical to retain a tight synchronization between the kiosk and the character. The system is designed so that the motion of both robots is always coordinated, while simultaneously supporting the ability to flexibly animate individual robots–or individual parts of the robot, like the mouth and eyes.
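The coordination idea described above can be sketched as follows. This is a hypothetical illustration, not Disney's tooling: both robots sample their motion channels from one shared clock, so the character and kiosk cannot drift apart, while each channel can still be animated independently and clamped to the physical limits the animation tool may not know about. All names and structure here are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Channel:
    keyframes: dict            # time (s) -> joint value
    lo: float = -1.0           # physical joint limits of the robot,
    hi: float = 1.0            # which the animation tool may exceed

    def sample(self, t: float) -> float:
        # Nearest-keyframe lookup for brevity; real tools interpolate.
        nearest = min(self.keyframes, key=lambda k: abs(k - t))
        value = self.keyframes[nearest]
        return max(self.lo, min(self.hi, value))  # clamp to robot limits

@dataclass
class Show:
    robots: dict = field(default_factory=dict)  # name -> {channel: Channel}

    def tick(self, t: float) -> dict:
        # One shared timestamp drives every channel of every robot,
        # so the two machines stay synchronized by construction.
        return {name: {ch: c.sample(t) for ch, c in chans.items()}
                for name, chans in self.robots.items()}

show = Show({
    "duke":  {"mouth": Channel({0.0: 0.2, 1.0: 0.9})},
    "kiosk": {"wheel_speed": Channel({0.0: 0.0, 1.0: 2.0}, lo=0.0, hi=1.5)},
})
frame = show.tick(1.0)
# The kiosk's animated wheel_speed of 2.0 is clamped to its 1.5 limit.
```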

Over the past nine months, we explored a few different kinds of collaborative locomotion approaches. The GIFs below show some early attempts at riding a tricycle, skateboarding, and pushing a crate. In each case, the idea is for a robotic character to eventually collaborate with another robotic system that helps bring that character’s motions to life in a stable and repeatable way.

Disney hopes that its Judy Hopps robot will soon be able to use the help of a robotic tricycle, crate, or skateboard to enable new forms of locomotion. Photo: Morgan Pope

This demo with Duke Weaselton and his kiosk is just the beginning, says Principal R&D Imagineer Tony Dohi, who leads the project for us. “Ultimately, what we showed today is an important step towards a bigger vision. This project is laying the groundwork for robots that can interact with each other in surprising and emotionally satisfying ways. Today it’s a character and a kiosk, but moving forward we want to have multiple characters that can engage with each other and with our guests.”

Walt Disney Imagineering R&D is exploring a multi-pronged development strategy for our robotic characters. Engaging character demonstrations like Duke Weaselton focus on quickly prototyping complete experiences using immediately accessible techniques. In parallel, our research group is developing new technologies and capabilities that become the building blocks both for elevating existing experiences and for designing and delivering completely new shows. The robotics team led by Moritz Bächer shared one such building block–embodied in a highly expressive and stylized robotic walking character–at IROS in October. The capabilities demonstrated there can eventually be used to help robots like Duke Weaselton perform more flexibly, more reliably, and more spectacularly.

“Authentic character demonstrations are useful because they help inform what tools are the most valuable for us to develop,” explains Bächer. “In the end our goal is to create tools that enable our teams to produce and deliver these shows rapidly and efficiently.” This ties back to the fundamental technical idea behind the Duke Weaselton show moment–collaboration is key!






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October of 2023, these new arms are not just for boxing, and provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day in the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo”, a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the Laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their $90m raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale and our populations are ageing fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), which is led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICHHRI 2024: 11–15 March 2024, BOULDER, COLO.Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCEICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October 2023, these new arms are not just for boxing; they provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day on the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo”, a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the Laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their $90m raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale and our populations are ageing fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]

Over the past few years, there has been a noticeable surge in efforts to design novel tools and approaches that incorporate Artificial Intelligence (AI) into rehabilitation of persons with lower-limb impairments, using robotic exoskeletons. The potential benefits include the ability to implement personalized rehabilitation therapies by leveraging AI for robot control and data analysis, facilitating personalized feedback and guidance. Despite this, there is currently a lack of literature reviews specifically focusing on AI applications in lower-limb rehabilitative robotics. To address this gap, our work reviews 37 peer-reviewed papers. This review categorizes selected papers based on robotic application scenarios or AI methodologies. Additionally, it uniquely contributes by providing a detailed summary of input features, AI model performance, enrolled populations, exoskeletal systems used in the validation process, and specific tasks for each paper. The innovative aspect lies in offering a clear understanding of the suitability of different algorithms for specific tasks, intending to guide future developments and support informed decision-making in the realm of lower-limb exoskeleton and AI applications.



It’s kind of astonishing how quadrotors have scaled over the past decade. Like, we’re now at the point where they’re verging on disposable, at least from a commercial or research perspective—for a bit over US $200, you can buy a little 27-gram, completely open-source drone, and all you have to do is teach it to fly. That’s where things do get a bit more challenging, though, because teaching drones to fly is not a straightforward process. Thanks to good simulation and techniques like reinforcement learning, it’s much easier to imbue drones with autonomy than it used to be. But it’s not typically a fast process, and it can be finicky to make a smooth transition from simulation to reality.

New York University’s Agile Robotics and Perception Lab has managed to streamline the process of getting basic autonomy to work on drones, and streamline it by a lot: The lab’s system is able to train a drone in simulation from nothing up to stable and controllable flying in 18 seconds flat on a MacBook Pro. And it actually takes longer to compile and flash the firmware onto the drone itself than it does for the entire training process.

ARPL NYU

So not only is the drone able to keep a stable hover while rejecting pokes and nudges and wind, but it’s also able to fly specific trajectories. Not bad for 18 seconds, right?

One of the things that typically slows down training times is the need to keep refining exactly what you’re training for, without refining it so much that you’re only training your system to fly in your specific simulation rather than the real world. The strategy used here is what the researchers call a curriculum (you can also think of it as a sort of lesson plan) to adjust the reward function used to train the system through reinforcement learning. The curriculum starts things off being more forgiving and gradually increasing the penalties to emphasize robustness and reliability. This is all about efficiency: Doing that training that you need to do in the way that it needs to be done to get the results you want, and no more.
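As a concrete (and purely illustrative) sketch of that curriculum idea, here is a toy reward function whose penalty weight is annealed from forgiving to strict over training. The weight values, reward terms, and function names are assumptions for illustration, not taken from the NYU implementation:

```python
import numpy as np

def curriculum_weight(step, total_steps, w_start=0.1, w_end=2.0):
    """Linearly anneal a penalty weight from forgiving to strict."""
    frac = min(step / total_steps, 1.0)
    return w_start + frac * (w_end - w_start)

def reward(position_error, action, step, total_steps):
    """Tracking reward minus a curriculum-scaled action penalty.

    Early in training the action penalty is small, so the policy is free
    to explore; later it grows, pushing toward smooth, robust control.
    """
    w = curriculum_weight(step, total_steps)
    return -np.linalg.norm(position_error) - w * np.linalg.norm(action) ** 2
```

The same hover behavior thus scores worse late in training than it did early on, which is what nudges the policy from "barely flies" toward "flies robustly" without wasting samples on strict penalties from the start.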

There are other, more straightforward, tricks that optimize this technique for speed as well. The deep-reinforcement learning algorithms are particularly efficient, and leverage the hardware acceleration that comes along with Apple’s M-series processors. The simulator efficiency multiplies the benefits of the curriculum-driven sample efficiency of the reinforcement-learning pipeline, leading to that wicked-fast training time.

This approach isn’t limited to simple tiny drones—it’ll work on pretty much any drone, including bigger and more expensive ones, or even a drone that you yourself build from scratch.

Jonas Eschmann

We’re told that it took minutes rather than seconds to train a policy for the drone in the video above, although the researchers expect that 18 seconds is achievable even for a more complex drone like this in the near future. And it’s all open source, so you can, in fact, build a drone and teach it to fly with this system. But if you wait a little bit, it’s only going to get better: The researchers tell us that they’re working on integrating with the PX4 open source drone autopilot. Longer term, the idea is to have a single policy that can adapt to different environmental conditions, as well as different vehicle configurations, meaning that this could work on all kinds of flying robots rather than just quadrotors.

Everything you need to run this yourself is available on GitHub, and the paper is on ArXiv here.




Finding actual causes of unmanned aerial vehicle (UAV) failures can be split into two main tasks: building causal models and performing actual causality analysis (ACA) over them. While there are available solutions in the literature to perform ACA, building comprehensive causal models is still an open problem. The expensive and time-consuming process of building such models, typically performed manually by domain experts, has hindered the widespread application of causality-based diagnosis solutions in practice. This study proposes a methodology based on natural language processing for automating causal model generation for UAVs. After collecting textual data from online resources, causal keywords are identified in sentences. Next, cause–effect phrases are extracted from sentences based on predefined dependency rules between tokens. Finally, the extracted cause–effect pairs are merged to form a causal graph, which we then use for ACA. To demonstrate the applicability of our framework, we scrape online text resources of Ardupilot, an open-source UAV controller software. Our evaluations using real flight logs show that the generated graphs can successfully be used to find the actual causes of unwanted events. Moreover, our hybrid cause–effect extraction module performs better than a purely deep-learning based tool (i.e., CiRA) by 32% in precision and 25% in recall in our Ardupilot use case.
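A heavily simplified sketch of that pipeline’s extraction-and-merge step, using a single regular expression over causal connectives in place of the paper’s dependency rules between tokens (all names, patterns, and example sentences here are illustrative, not drawn from the study):

```python
import re
from collections import defaultdict

# Stand-in for the paper's dependency rules: one regex over a few
# causal connectives. Real extraction is far more involved.
CAUSAL_PATTERN = re.compile(
    r"(?P<effect>[\w\s]+?)\s+(?:because of|due to|caused by)\s+(?P<cause>[\w\s]+)",
    re.IGNORECASE,
)

def extract_pairs(sentences):
    """Return (cause, effect) phrase pairs found in the sentences."""
    pairs = []
    for s in sentences:
        m = CAUSAL_PATTERN.search(s)
        if m:
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs

def build_causal_graph(pairs):
    """Merge extracted pairs into an adjacency map: cause -> set of effects."""
    graph = defaultdict(set)
    for cause, effect in pairs:
        graph[cause].add(effect)
    return graph

# Hypothetical scraped sentences, not real Ardupilot documentation text.
docs = [
    "the crash happened because of gps signal loss",
    "motor failure caused by battery undervoltage",
]
graph = build_causal_graph(extract_pairs(docs))
```

Once merged into a graph like this, actual causality analysis can traverse from an observed unwanted event back through candidate causes.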

Colorectal cancer, a major disease that poses a serious threat to human health, continues to rise in incidence, and timely colon examinations are crucial for its prevention, diagnosis, and treatment. Clinically, gastroscopy is used as a universal means of examination, prevention, and diagnosis of this disease, but this detection method is not patient-friendly and can easily damage the intestinal mucosa. Soft robots, an emerging technology, offer a promising approach to examining, diagnosing, and treating intestinal diseases due to their high flexibility and patient-friendly interaction. However, existing research on intestinal soft robots mainly focuses on controlled movement and observation within the colon or colon-like environments, lacking additional functionalities such as sample collection from the intestine. Here, we designed and developed an earthworm-like soft robot specifically for colon sampling. It consists of a robot body with an earthworm-like structure for movement in narrow, soft pipe-like environments, and a sampling part with a flexible arm structure resembling an elephant trunk for bidirectional bending sampling. This soft robot is capable of flexible movement and sample collection within a colon-like environment. By successfully demonstrating the feasibility of utilizing soft robots for colon sampling, this work introduces a novel method for non-destructive inspection and sampling in the colon. It represents a significant advancement in the field of medical robotics, offering a potential solution for more efficient and accurate examination and diagnosis of intestinal diseases, specifically colorectal cancer.



About a decade ago, there was a lot of excitement in the robotics world around gecko-inspired directional adhesives, which are materials that stick without being sticky using the same van der Waals forces that allow geckos to scamper around on vertical panes of glass. They were used extensively in different sorts of climbing robots, some of them quite lovely. Gecko adhesives are uniquely able to stick to very smooth things where your only other option might be suction, which requires all kinds of extra infrastructure to work.

We haven’t seen gecko adhesives around as much of late, for a couple of reasons. First, the ability to only stick to smooth surfaces (which is what gecko adhesives are best at) is a bit of a limitation for mobile robots. And second, the gap between research and useful application is wide and deep and full of crocodiles. I’m talking about the mean kind of crocodiles, not the cuddly kind. But Flexiv Robotics has made gecko adhesives practical for robotic grasping in a commercial environment, thanks in part to a sort of robotic tongue that licks the gecko tape clean.

If you zoom way, way in on a gecko’s foot, you’ll see that each toe is covered in millions of hair-like nanostructures called setae. Each seta branches out at the end into hundreds of finer hairs with flat bits at the end called spatulas. The result of this complex arrangement of setae and spatulas is that gecko toes have a ridiculous amount of surface area, meaning that they can leverage the extremely weak van der Waals forces between molecules to stick themselves to perfectly flat and smooth surfaces. This technique works exceptionally well: Geckos can hang from glass by a single toe, and a fully adhered gecko can hold something like 140 kg (which, unfortunately, seems to be an extrapolation rather than an experimental result). And luckily for the gecko, the structure of the spatulas makes the adhesion directional, so that when its toes are no longer being loaded, they can be easily peeled off of whatever they’re attached to.

Natural gecko adhesive structure, along with a synthetic adhesive (f). From “Gecko adhesion: evolutionary nanotechnology,” by Kellar Autumn and Nick Gravish.

Since geckos don’t “stick” to things in the sense that we typically use the word “sticky,” a better way of characterizing what geckos can do is as “dry adhesion,” as opposed to something that involves some sort of glue. You can also think about gecko toes as just being very, very high friction, and it’s this perspective that is particularly interesting in the context of robotic grippers.

This is Flexiv’s “Grav Enhanced” gripper, which uses a combination of pinch grasping and high friction gecko adhesive to lift heavy and delicate objects without having to squeeze them. When you think about a traditional robotic grasping system trying to lift something like a water balloon, you have to squeeze that balloon until the friction between the side of the gripper and the side of the balloon overcomes the weight of the balloon itself. The higher the friction, the lower the squeeze required, and although a water balloon might be an extreme example, maximizing gripper friction can make a huge difference when it comes to fragile or deformable objects.
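That tradeoff can be made concrete with the standard Coulomb friction model (a simplification; real gecko adhesion is not pure friction): in a two-finger pinch grasp, the object holds when 2μF_n ≥ mg, so the required squeeze force falls as the friction coefficient rises. A minimal sketch with illustrative numbers:

```python
def required_squeeze_force(mass_kg, friction_coeff, g=9.81):
    """Minimum normal force per finger for a two-finger pinch grasp,
    using the Coulomb friction model: 2 * mu * F_n >= m * g."""
    return mass_kg * g / (2 * friction_coeff)

# A 1 kg object: an ordinary low-friction pad versus a high-friction one.
# These coefficients are made-up examples, but the trend is the point:
# higher friction means far less squeeze, which is what matters for
# fragile or deformable objects like a water balloon.
ordinary = required_squeeze_force(1.0, friction_coeff=0.3)  # ~16.4 N
grippy = required_squeeze_force(1.0, friction_coeff=2.0)    # ~2.5 N
```

Pushing the effective friction coefficient up by even a factor of a few cuts the crushing force proportionally, which is exactly the lever a gecko-adhesive pad pulls.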

There are a couple of problems with dry adhesive, however. The tiny structures that make the adhesive adhere can be prone to damage, and the fact that dry adhesive will stick to just about anything it can make good contact with means that it’ll rapidly accumulate dirt outside of a carefully controlled environment. In research contexts, these problems aren’t all that significant, but for a commercial system, you can’t have something that requires constant attention.

Flexiv says that the microstructure material that makes up their gecko adhesive was able to sustain two million gripping cycles without any visible degradation in performance, suggesting that as long as you use the stuff within the tolerances that it’s designed for, it should keep on adhering to things indefinitely—although trying to lift too much weight will tear the microstructures, ruining the adhesive properties after just a few cycles. And to keep the adhesive from getting clogged up with debris, Flexiv came up with this clever little cleaning station that acts like a little robotic tongue of sorts:

Interestingly, geckos themselves don’t seem to use their own tongues to clean their toes. They lick their eyeballs on the regular, like all normal humans do, but gecko toes appear to be self-cleaning, which is a pretty neat trick. It’s certainly possible to make self-cleaning synthetic gecko adhesive, but Flexiv tells us that “due to technical and practical limitations, replicating this process in our own gecko adhesive material is not possible. Essentially, we replicate the microstructure of a gecko’s footpad, but not its self-cleaning process.” This likely goes back to that whole thing about what works in a research context versus what works in a commercial context, and Flexiv needs their gecko adhesive to handle all those millions of cycles.

Flexiv says that they were made aware of the need for a system like this when one of their clients started using the gripper for the extra-dirty task of sorting trash from recycling, and that the solution was inspired by a lint roller. And I have to say, I appreciate the simplicity of the system that Flexiv came up with to solve the problem directly and efficiently. Maybe one day, they’ll be able to replicate a real gecko’s natural self-cleaning toes with a durable and affordable artificial dry adhesive, but until that happens, an artificial tongue does the trick.



Accurate texture classification empowers robots to improve their perception and comprehension of the environment, enabling informed decision-making and appropriate responses to diverse materials and surfaces. Still, there are challenges for texture classification regarding the vast amount of time series data generated from robots’ sensors. For instance, robots are anticipated to leverage human feedback during interactions with the environment, particularly in cases of misclassification or uncertainty. With the diversity of objects and textures in daily activities, Active Learning (AL) can be employed to minimize the number of samples the robot needs to request from humans, streamlining the learning process. In the present work, we use AL to select the most informative samples for annotation, thus reducing the human labeling effort required to achieve high performance for classifying textures. We also use a sliding window strategy for extracting features from the sensor’s time series used in our experiments. Our multi-class dataset (e.g., 12 textures) challenges traditional AL strategies since standard techniques cannot control the number of instances per class selected to be labeled. Therefore, we propose a novel class-balancing instance selection algorithm that we integrate with standard AL strategies. Moreover, we evaluate the effect of sliding windows of two time intervals (3 and 6 s) on our AL strategies. Finally, we analyze in our experiments the performance of AL strategies, with and without the balancing algorithm, regarding f1-score, and positive effects are observed in terms of performance when using our proposed data pipeline. Our results show that the training data can be reduced to 70% using an AL strategy regardless of the machine learning model, while reaching, and in many cases surpassing, baseline performance. Finally, exploring the textures with a 6-s window achieves the best performance, and using Extra Trees produces an average f1-score of 90.21% on the texture classification data set.
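One way to picture a class-balancing instance selection of the kind the abstract describes is a round-robin pick over per-class uncertainty rankings, so no single texture class eats the labeling budget. The sketch below is an illustrative reconstruction under that assumption, not the paper’s actual algorithm:

```python
from collections import defaultdict

def balanced_selection(scores, predicted_labels, budget):
    """Pick `budget` sample indices, balancing across predicted classes.

    `scores` are uncertainty scores (higher = more informative); samples
    are ranked within each predicted class, then selected round-robin so
    the labeling budget is spread across classes.
    """
    by_class = defaultdict(list)
    for idx, (score, label) in enumerate(zip(scores, predicted_labels)):
        by_class[label].append((score, idx))
    for label in by_class:
        by_class[label].sort(reverse=True)  # most uncertain first
    selected = []
    while len(selected) < budget and any(by_class.values()):
        for label in sorted(by_class):
            if by_class[label] and len(selected) < budget:
                selected.append(by_class[label].pop(0)[1])
    return selected
```

Plain uncertainty sampling would happily spend the whole budget on one confusing class; the round-robin pass is the balancing step that prevents that.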



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Is “scamperiest” a word? If not, it should be, because this is the scamperiest robot I’ve ever seen.

[ ABS ]

GITAI is pleased to announce that its 1.5-meter-long autonomous dual robotic arm system (S2) has successfully arrived at the International Space Station (ISS) aboard the SpaceX Falcon 9 rocket (NG-20) to conduct an external demonstration of in-space servicing, assembly, and manufacturing (ISAM) while onboard the ISS. The success of the S2 tech demo will be a major milestone for GITAI, confirming the feasibility of this technology as a fully operational system in space.

[ GITAI ]

This work presents a comprehensive study on using deep reinforcement learning (RL) to create dynamic locomotion controllers for bipedal robots. Going beyond focusing on a single locomotion skill, we develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.

And if you want to get exhausted on behalf of a robot, the full 400-meter dash is below.

[ Hybrid Robotics ]

NASA’s Ingenuity Mars Helicopter pushed aerodynamic limits during the final months of its mission, setting new records for speed, distance, and altitude. Hear from Ingenuity chief engineer Travis Brown on how the data the team collected could eventually be used in future rotorcraft designs.

[ NASA ]

BigDog: 15 years of solving mobility problems its own way.

[ Boston Dynamics ]

[Harvard School of Engineering and Applied Sciences] researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi) led by Purdue University, in partnership with [Harvard] SEAS, the University of Connecticut and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard SEAS ]

In a recent T-RO paper, researchers from Huazhong University of Science and Technology (HUST) describe and construct a novel variable-stiffness spherical joint motor that enables dexterous motion and joint compliance in all directions.

[ Paper ]

Thanks, Ram!

We are told that this new robot from HEBI is called “Mark Suckerberg” and that they’ve got a pretty cool application in mind for it, to be revealed later this year.

[ HEBI Robotics ]

Thanks, Dave!

Dive into the first edition of our new Real-World-Robotics class at ETH Zürich! Our students embarked on an incredible journey, creating their human-like robotic hands from scratch. In just three months, the teams designed, built, and programmed their tendon-driven robotic hands, mastering dexterous manipulation with reinforcement learning! The result? A spectacular display of innovation and skill during our grand final.

[ SRL ETHZ ]

Carnegie Mellon researchers have built a system with a robotic arm atop a RangerMini 2.0 robotic cart from AgileX Robotics to make what they’re calling a platform for “intelligent movement and processing.”

[ CMU ] via [ AgileX ]

Picassnake is our custom-made robot that paints pictures from music. Picassnake consists of an arm and a head, embedded in a plush snake doll. The robot is connected to a laptop for control and music processing, which can be fed through a microphone or an MP3 file. To open the media source, an operator can use the graphical user interface or place a text QR code in front of a webcam. Once the media source is opened, Picassnake generates unique strokes based on the music and translates the strokes to physical movement to paint them on canvas.

[ Picassnake ]

In April 2021, NASA’s Ingenuity Mars Helicopter became the first spacecraft to achieve powered, controlled flight on another world. With 72 successful flights, Ingenuity has far surpassed its originally planned technology demonstration of up to five flights. On Jan. 18, Ingenuity flew for the final time on the Red Planet. Join Tiffany Morgan, NASA’s Mars Exploration Program Deputy Director, and Teddy Tzanetos, Ingenuity Project Manager, as they discuss these historic flights and what they could mean for future extraterrestrial aerial exploration.

[ NASA ]



Online questionnaires that use crowdsourcing platforms to recruit participants have become commonplace, due to their ease of use and low costs. Artificial intelligence (AI)-based large language models (LLMs) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality for studies that use online questionnaires. This study tested whether text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify the authorship of such text above chance level (76% accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on a lack of interest among bad actors to successfully use open-ended responses as a useful tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AI submissions of responses become too prevalent, then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowdsourcing platforms. They cannot rely on automatic AI detection systems and it is unclear how they can ensure data quality for their paying clients.

Introduction: Navigation satellite systems can fail or perform incorrectly under a number of conditions: signal shadowing, electromagnetic interference, atmospheric conditions, and technical problems. All of these factors can significantly degrade the localization accuracy of autonomous driving systems, which emphasizes the need for complementary localization technologies such as Lidar.

Methods: Combining the Kalman filter with Lidar can be highly effective in various applications because of the synergy of their capabilities: the Kalman filter improves the accuracy of Lidar measurements by accounting for the noise and inaccuracies present in them.

Results: In this paper, we propose a parallel Kalman algorithm in three-dimensional space that accelerates Lidar localization while preserving its original accuracy. A distinctive feature of the proposed approach is that the Kalman localization algorithm itself is parallelized, rather than the process of building a map for navigation. The proposed algorithm obtains the result 3.8 times faster without compromising localization accuracy (3% in both cases), making it effective for real-time decision-making.

Discussion: The reliability of this result is confirmed by a preliminary theoretical estimate of the acceleration rate based on Amdahl’s law. Accelerating the Kalman filter with CUDA for Lidar localization can be of significant practical value, especially in real time and in conditions where large amounts of data from Lidar sensors must be processed.
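The paper's contribution is parallelizing the Kalman localization algorithm itself; for orientation, here is a minimal serial sketch of a single 3-D Kalman measurement update, under the simplifying assumption that the Lidar pipeline yields direct position fixes (so the measurement matrix H is the identity). This is an illustration of the standard filter equations, not the authors' implementation.

```python
import numpy as np

def kalman_update(x, P, z, R):
    """One Kalman measurement update for a 3-D position state.

    x: (3,) prior state estimate        P: (3,3) prior covariance
    z: (3,) noisy Lidar position fix    R: (3,3) measurement noise covariance
    With H = I, the innovation is simply z - x.
    """
    S = P + R                        # innovation covariance: H P H^T + R
    K = P @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (z - x)          # corrected state estimate
    P_new = (np.eye(3) - K) @ P      # corrected covariance
    return x_new, P_new
```

With equal prior and measurement uncertainty (P = R), the gain is 0.5 and the update splits the difference between prediction and measurement, which matches the usual intuition for the filter.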



Just because an object is around a corner doesn’t mean it has to be hidden. Non-line-of-sight imaging can peek around corners and spot those objects, but it has so far been limited to a narrow band of frequencies. Now, a new sensor can help extend this technique from working with visible light to infrared. This advance could help make autonomous vehicles safer, among other potential applications.

Non-line-of-sight imaging relies on the faint signals of light beams that have reflected off surfaces in order to reconstruct images. The ability to see around corners may prove useful for machine vision—for instance, helping autonomous vehicles foresee hidden dangers to better predict how to respond to them, says Xiaolong Hu, the senior author of the study and a professor at Tianjin University in Tianjin, China. It may also improve endoscopes that help doctors peer inside the body.

The light that non-line-of-sight imaging depends on is typically very dim, and until now, the detectors that were efficient and sensitive enough for non-line-of-sight imaging could only detect either visible or near-infrared light. Moving to longer wavelengths might have several advantages, such as dealing with less interference from sunshine, and the possibility of using lasers that are safe around eyes, Hu says.

Now Hu and his colleagues have for the first time performed non-line-of-sight imaging using 1,560- and 1,997-nanometer infrared wavelengths. “This extension in spectrum paves the way for more practical applications,” Hu says.

The researchers imaged several objects with a non-line-of-sight infrared camera, both without [middle column] and with [right column] de-noising algorithms. Tianjin University

In the new study, the researchers experimented with superconducting nanowire single-photon detectors. In each device, a 40-nanometer-wide niobium titanium nitride wire was cooled to about 2 kelvins (about –271 °C), rendering the wire superconductive. A single photon could disrupt this fragile state, generating electrical pulses that enabled the efficient detection of individual photons.

The scientists contorted the nanowire in each device into a fractal pattern that took on similar shapes at various magnifications. This let the sensor detect photons of all polarizations, boosting its efficiency.

The new detector was up to nearly three times as efficient as other single-photon detectors at sensing near- and mid-infrared light. This let the researchers perform non-line-of-sight imaging, achieving a spatial resolution of roughly 1.3 to 1.5 centimeters.

In addition to an algorithm that reconstructed non-line-of-sight images based on multiple scattered light rays, the scientists developed a new algorithm that helped remove noise from their data. When each pixel during the scanning process was given 5 milliseconds to collect photons, the new de-noising algorithm reduced the root mean square error of reconstructed images (a measure of deviation from a perfect image) by about eightfold.
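For reference, root mean square error is a standard image-comparison metric: the square root of the mean squared pixel-wise difference between a reconstruction and a ground-truth image. A minimal sketch (not the authors' code):

```python
import numpy as np

def rmse(reconstruction, reference):
    """Root mean square error between a reconstructed image and a reference.

    Lower is better; an eightfold reduction in RMSE means the de-noised
    reconstruction deviates from the ideal image by one-eighth as much.
    """
    diff = np.asarray(reconstruction, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```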

The researchers now plan to arrange multiple sensors into larger arrays to boost efficiency, reduce scanning time, and extend the distance over which imaging can take place, Hu says. They would also like to test their device in daylight conditions, he adds.

The scientists detailed their findings 30 November in the journal Optics Express.


