Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
HRI 2024: 11–15 March 2024, BOULDER, COLO.
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

In this video, we present Ringbot, a novel leg-wheel transformer robot incorporating a monocycle mechanism with legs. Ringbot aims to provide versatile mobility by replacing the driver and driving components of a conventional monocycle vehicle with legs mounted on compact driving modules inside the wheel.

[ Paper ] via [ KIMLAB ]

Making money with robots has always been a struggle, but I think ALOHA 2 has figured it out.

Seriously, though, that is some impressive manipulation capability. I don’t know what that freakish panda thing is, but getting a contact lens from the package onto its bizarre eyeball was some wild dexterity.

[ ALOHA 2 ]

Highlights from testing our new arms built by Boardwalk Robotics. Installed in October of 2023, these new arms are not just for boxing, and they provide much greater speed and power. This matches the mobility and manipulation goals we have for Nadia!

The least dramatic but possibly most important bit of that video is when Nadia uses her arms to help her balance against a wall, which is one of those things that humans do all the time without thinking about it. And we always appreciate being shown things that don’t go perfectly alongside things that do. The bit at the end there was Nadia not quite managing to do lateral arm raises. I can relate; that’s my reaction when I lift weights, too.

[ IHMC ]

Thanks, Robert!

The recent progress in commercial humanoids is just exhausting.

[ Unitree ]

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia.

[ Science Robotics ]

Have you ever seen a robot skiing?! Ascento robot enjoying a day in the ski slopes of Davos.

[ Ascento ]

Can’t trip Atlas up! Our humanoid robot gets ready for real work combining strength, perception, and mobility.

Notable that Boston Dynamics is now saying that Atlas “gets ready for real work.” Wonder how much to read into that?

[ Boston Dynamics ]

You deserve to be free from endless chores! YOU! DESERVE! CHORE! FREEDOM!

Pretty sure this is teleoperated, so someone is still doing the chores, sadly.

[ MagicLab ]

Multimodal UAVs (Unmanned Aerial Vehicles) are rarely capable of more than two modalities, i.e., flying and walking or flying and perching. However, being able to fly, perch, and walk could further improve their usefulness by expanding their operating envelope. For instance, an aerial robot could fly a long distance, perch in a high place to survey the surroundings, then walk to avoid obstacles that could potentially inhibit flight. Birds are capable of these three tasks, and so offer a practical example of how a robot might be developed to do the same.

[ Paper ] via [ EPFL LIS ]

Nissan announces the concept model of “Iruyo”, a robot that supports babysitting while driving. Iruyo relieves the anxiety of the mother, father, and baby in the driver’s seat. We support safe and secure driving for parents and children. Nissan and Akachan Honpo are working on a project to make life better with cars and babies. Iruyo was born out of the voices of mothers and fathers who said, “I can’t hold my baby while driving alone.”

[ Nissan ]

Building 937 houses the coolest robots at CERN. This is where the action happens to build and program robots that can tackle the unconventional challenges presented by the Laboratory’s unique facilities. Recently, a new type of robot called CERNquadbot has entered CERN’s robot pool and successfully completed its first radiation protection test in the North Area.

[ CERN ]

Congrats to Starship, the OG robotic delivery service, on their $90m raise.

[ Starship ]

By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.

[ GitHub ] via [ MIT ]

This is one of those things that’s far more difficult than it might look.

[ ROAM Lab ]

Our current care system does not scale and our populations are ageing fast. Robodies are multipliers for care staff, allowing them to work together with local helpers to provide protection and assistance around the clock while maintaining personal contact with people in the community.

[ DEVANTHRO ]

It’s the world’s smallest humanoid robot, until someone comes out with slightly smaller servos!

[ Guinness ]

Deep Robotics wishes you a happy year of the dragon!

[ Deep Robotics ]

SEAS researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi), which is led by Purdue University in partnership with SEAS, the University of Connecticut, and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard ]

Find out how a bold vision became a success story! The DLR Institute of Robotics and Mechatronics has been researching robotic arms since the 1990s, originally for use in space. It was a long and ambitious journey before these lightweight robotic arms could be used on Earth and finally in operating theaters, a journey that required concentrated robotics expertise, interdisciplinary cooperation, and ultimately a successful technology transfer.

[ DLR MIRO ]

Robotics is changing the world, driven by focused teams of diverse experts. Willow Garage operated with the mantra “Impact first, return on capital second” and through ROS and the PR2 had enormous impact. Autonomous mobile robots are finally being accepted in the service industry, and Savioke (now Relay Robotics) was created to drive that impact. This talk will trace the evolution of Relay robots and their deployment in hotels, hospitals and other service industries, starting with roots at Willow Garage. As robotics technology is poised for the next round of advances, how do we create and maintain the organizations that continue to drive progress?

[ Northwestern ]

Over the past few years, there has been a noticeable surge in efforts to design novel tools and approaches that incorporate Artificial Intelligence (AI) into rehabilitation of persons with lower-limb impairments, using robotic exoskeletons. The potential benefits include the ability to implement personalized rehabilitation therapies by leveraging AI for robot control and data analysis, facilitating personalized feedback and guidance. Despite this, there is a current lack of literature review specifically focusing on AI applications in lower-limb rehabilitative robotics. To address this gap, our work aims at performing a review of 37 peer-reviewed papers. This review categorizes selected papers based on robotic application scenarios or AI methodologies. Additionally, it uniquely contributes by providing a detailed summary of input features, AI model performance, enrolled populations, exoskeletal systems used in the validation process, and specific tasks for each paper. The innovative aspect lies in offering a clear understanding of the suitability of different algorithms for specific tasks, intending to guide future developments and support informed decision-making in the realm of lower-limb exoskeleton and AI applications.



It’s kind of astonishing how quadrotors have scaled over the past decade. Like, we’re now at the point where they’re verging on disposable, at least from a commercial or research perspective—for a bit over US $200, you can buy a little 27-gram, completely open-source drone, and all you have to do is teach it to fly. That’s where things do get a bit more challenging, though, because teaching drones to fly is not a straightforward process. Thanks to good simulation and techniques like reinforcement learning, it’s much easier to imbue drones with autonomy than it used to be. But it’s not typically a fast process, and it can be finicky to make a smooth transition from simulation to reality.

New York University’s Agile Robotics and Perception Lab has managed to streamline the process of getting basic autonomy to work on drones, and streamline it by a lot: The lab’s system is able to train a drone in simulation from nothing up to stable and controllable flying in 18 seconds flat on a MacBook Pro. And it actually takes longer to compile and flash the firmware onto the drone itself than it does for the entire training process.

ARPL NYU

So not only is the drone able to keep a stable hover while rejecting pokes and nudges and wind, but it’s also able to fly specific trajectories. Not bad for 18 seconds, right?

One of the things that typically slows down training times is the need to keep refining exactly what you’re training for, without refining it so much that you’re only training your system to fly in your specific simulation rather than the real world. The strategy used here is what the researchers call a curriculum (you can also think of it as a sort of lesson plan) to adjust the reward function used to train the system through reinforcement learning. The curriculum starts things off more forgiving and gradually increases the penalties to emphasize robustness and reliability. This is all about efficiency: doing the training you need to do, in the way it needs to be done, to get the results you want, and no more.
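
To make the idea concrete, here is a minimal sketch (in Python) of what a curriculum-scheduled reward can look like: the penalty weights start small and ramp up as training progresses. The state keys, weights, and linear schedule are illustrative assumptions, not the lab's actual reward terms.

import numpy as np

def curriculum_penalty_scale(step, total_steps, start=0.1, end=1.0):
    # Ramp the penalty weight linearly from forgiving to strict.
    # The linear shape and start/end values are illustrative assumptions,
    # not the researchers' actual schedule.
    frac = min(step / total_steps, 1.0)
    return start + frac * (end - start)

def hover_reward(state, action, step, total_steps):
    # Hypothetical state dictionary; the real observation layout of the
    # ARPL system is not reproduced here.
    pos_error = np.linalg.norm(state["position"] - state["target"])
    ang_vel = np.linalg.norm(state["angular_velocity"])
    ctrl_cost = np.linalg.norm(action)

    w = curriculum_penalty_scale(step, total_steps)
    # Early in training, only position error matters much; later, jerky
    # rotation and aggressive motor commands are penalized too.
    return -pos_error - w * (0.1 * ang_vel + 0.05 * ctrl_cost)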

There are other, more straightforward, tricks that optimize this technique for speed as well. The deep-reinforcement learning algorithms are particularly efficient, and leverage the hardware acceleration that comes along with Apple’s M-series processors. The simulator efficiency multiplies the benefits of the curriculum-driven sample efficiency of the reinforcement-learning pipeline, leading to that wicked-fast training time.

This approach isn’t limited to simple tiny drones—it’ll work on pretty much any drone, including bigger and more expensive ones, or even a drone that you yourself build from scratch.

Jonas Eschmann

We’re told that it took minutes rather than seconds to train a policy for the drone in the video above, although the researchers expect that 18 seconds is achievable even for a more complex drone like this in the near future. And it’s all open source, so you can, in fact, build a drone and teach it to fly with this system. But if you wait a little bit, it’s only going to get better: The researchers tell us that they’re working on integrating with the PX4 open source drone autopilot. Longer term, the idea is to have a single policy that can adapt to different environmental conditions, as well as different vehicle configurations, meaning that this could work on all kinds of flying robots rather than just quadrotors.

Everything you need to run this yourself is available on GitHub, and the paper is on ArXiv here.




Finding actual causes of unmanned aerial vehicle (UAV) failures can be split into two main tasks: building causal models and performing actual causality analysis (ACA) over them. While there are available solutions in the literature to perform ACA, building comprehensive causal models is still an open problem. The expensive and time-consuming process of building such models, typically performed manually by domain experts, has hindered the widespread application of causality-based diagnosis solutions in practice. This study proposes a methodology based on natural language processing for automating causal model generation for UAVs. After collecting textual data from online resources, causal keywords are identified in sentences. Next, cause–effect phrases are extracted from sentences based on predefined dependency rules between tokens. Finally, the extracted cause–effect pairs are merged to form a causal graph, which we then use for ACA. To demonstrate the applicability of our framework, we scrape online text resources of Ardupilot, an open-source UAV controller software. Our evaluations using real flight logs show that the generated graphs can successfully be used to find the actual causes of unwanted events. Moreover, our hybrid cause–effect extraction module performs better than a purely deep-learning based tool (i.e., CiRA) by 32% in precision and 25% in recall in our Ardupilot use case.
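
As a rough illustration of the extraction step described above (not the authors' actual pipeline), a single dependency rule over a parsed sentence might look like the following sketch, which assumes spaCy and its small English model; a real system would use many more causal keywords and dependency patterns. The example sentence is hypothetical, not taken from the Ardupilot corpus.

import spacy

# One illustrative dependency rule: "X causes Y" -> (X, Y).
CAUSAL_VERBS = {"cause", "trigger"}

nlp = spacy.load("en_core_web_sm")

def extract_cause_effect(sentence):
    pairs = []
    doc = nlp(sentence)
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ in CAUSAL_VERBS:
            subjects = [t for t in token.children if t.dep_ in ("nsubj", "nsubjpass")]
            objects = [t for t in token.children if t.dep_ == "dobj"]
            if subjects and objects:
                cause = " ".join(w.text for w in subjects[0].subtree)
                effect = " ".join(w.text for w in objects[0].subtree)
                pairs.append((cause, effect))
    return pairs

# Expected output: [("A weak GPS signal", "large position estimation errors")]
print(extract_cause_effect("A weak GPS signal causes large position estimation errors."))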

Colorectal cancer, a major disease that poses a serious threat to human health, continues to rise in incidence, and timely colon examinations are crucial for the prevention, diagnosis, and treatment of this disease. Clinically, gastroscopy is used as a universal means of examination, prevention, and diagnosis of this disease, but this detection method is not patient-friendly and can easily cause damage to the intestinal mucosa. Soft robots, as an emerging technology, offer a promising approach to examining, diagnosing, and treating intestinal diseases due to their high flexibility and patient-friendly interaction. However, existing research on intestinal soft robots mainly focuses on controlled movement and observation within the colon or colon-like environments, lacking additional functionalities such as sample collection from the intestine. Here, we designed and developed an earthworm-like soft robot specifically for colon sampling. It consists of a robot body with an earthworm-like structure for movement in narrow and soft pipe environments, and a sampling part with a flexible arm structure resembling an elephant trunk for bidirectional bending sampling. This soft robot is capable of flexible movement and sample collection within a colon-like environment. By successfully demonstrating the feasibility of utilizing soft robots for colon sampling, this work introduces a novel method for non-destructive inspection and sampling in the colon. It represents a significant advancement in the field of medical robotics, offering a potential solution for more efficient and accurate examination and diagnosis of intestinal diseases, specifically colorectal cancer.



About a decade ago, there was a lot of excitement in the robotics world around gecko-inspired directional adhesives, which are materials that stick without being sticky using the same van der Waals forces that allow geckos to scamper around on vertical panes of glass. They were used extensively in different sorts of climbing robots, some of them quite lovely. Gecko adhesives are uniquely able to stick to very smooth things where your only other option might be suction, which requires all kinds of extra infrastructure to work.

We haven’t seen gecko adhesives around as much of late, for a couple of reasons. First, the ability to only stick to smooth surfaces (which is what gecko adhesives are best at) is a bit of a limitation for mobile robots. And second, the gap between research and useful application is wide and deep and full of crocodiles. I’m talking about the mean kind of crocodiles, not the cuddly kind. But Flexiv Robotics has made gecko adhesives practical for robotic grasping in a commercial environment, thanks in part to a sort of robotic tongue that licks the gecko tape clean.

If you zoom way, way in on a gecko’s foot, you’ll see that each toe is covered in millions of hair-like nanostructures called setae. Each seta branches out at the end into hundreds more hairs with flat bits at the end called spatulas. The result of this complex arrangement of setae and spatulas is that gecko toes have a ridiculous amount of surface area, meaning that they can leverage the extremely weak van der Waals forces between molecules to stick themselves to perfectly flat and smooth surfaces. This technique works exceptionally well: Geckos can hang from glass by a single toe, and a fully adhered gecko can hold something like 140 kg (which, unfortunately, seems to be an extrapolation rather than an experimental result). And luckily for the gecko, the structure of the spatulas makes the adhesion directional, so that when its toes are no longer being loaded, they can be easily peeled off of whatever they’re attached to.

Natural gecko adhesive structure, along with a synthetic adhesive (f). Source: “Gecko adhesion: evolutionary nanotechnology,” by Kellar Autumn and Nick Gravish

Since geckos don’t “stick” to things in the sense that we typically use the word “sticky,” a better way of characterizing what geckos can do is as “dry adhesion,” as opposed to something that involves some sort of glue. You can also think about gecko toes as just being very, very high friction, and it’s this perspective that is particularly interesting in the context of robotic grippers.

This is Flexiv’s “Grav Enhanced” gripper, which uses a combination of pinch grasping and high friction gecko adhesive to lift heavy and delicate objects without having to squeeze them. When you think about a traditional robotic grasping system trying to lift something like a water balloon, you have to squeeze that balloon until the friction between the side of the gripper and the side of the balloon overcomes the weight of the balloon itself. The higher the friction, the lower the squeeze required, and although a water balloon might be an extreme example, maximizing gripper friction can make a huge difference when it comes to fragile or deformable objects.
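
As a back-of-the-envelope illustration (a simple two-pad Coulomb-friction pinch grasp is assumed here, not Flexiv's published specifications), the squeeze force $N$ needed to hold an object of mass $m$ scales inversely with the friction coefficient $\mu$:

2 \mu N \ge m g \quad \Rightarrow \quad N \ge \frac{m g}{2 \mu}.

For a 1-kilogram object, $\mu = 0.5$ calls for roughly 9.8 newtons of squeeze, while an adhesive-boosted $\mu = 5$ needs less than 1 newton, which is the difference between deforming a water balloon and barely touching it.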

There are a couple of problems with dry adhesive, however. The tiny structures that make the adhesive adhere can be prone to damage, and the fact that dry adhesive will stick to just about anything it can make good contact with means that it’ll rapidly accumulate dirt outside of a carefully controlled environment. In research contexts, these problems aren’t all that significant, but for a commercial system, you can’t have something that requires constant attention.

Flexiv says that the microstructure material that makes up their gecko adhesive was able to sustain two million gripping cycles without any visible degradation in performance, suggesting that as long as you use the stuff within the tolerances that it’s designed for, it should keep on adhering to things indefinitely—although trying to lift too much weight will tear the microstructures, ruining the adhesive properties after just a few cycles. And to keep the adhesive from getting clogged up with debris, Flexiv came up with this clever little cleaning station that acts like a little robotic tongue of sorts:

Interestingly, geckos themselves don’t seem to use their own tongues to clean their toes. They lick their eyeballs on the regular, like all normal humans do, but gecko toes appear to be self-cleaning, which is a pretty neat trick. It’s certainly possible to make self-cleaning synthetic gecko adhesive, but Flexiv tells us that “due to technical and practical limitations, replicating this process in our own gecko adhesive material is not possible. Essentially, we replicate the microstructure of a gecko’s footpad, but not its self-cleaning process.” This likely goes back to that whole thing about what works in a research context versus what works in a commercial context, and Flexiv needs their gecko adhesive to handle all those millions of cycles.

Flexiv says that they were made aware of the need for a system like this when one of their clients started using the gripper for the extra-dirty task of sorting trash from recycling, and that the solution was inspired by a lint roller. And I have to say, I appreciate the simplicity of the system that Flexiv came up with to solve the problem directly and efficiently. Maybe one day, they’ll be able to replicate a real gecko’s natural self-cleaning toes with a durable and affordable artificial dry adhesive, but until that happens, an artificial tongue does the trick.



Accurate texture classification empowers robots to improve their perception and comprehension of the environment, enabling informed decision-making and appropriate responses to diverse materials and surfaces. Still, there are challenges for texture classification regarding the vast amount of time series data generated by robots’ sensors. For instance, robots are anticipated to leverage human feedback during interactions with the environment, particularly in cases of misclassification or uncertainty. With the diversity of objects and textures in daily activities, Active Learning (AL) can be employed to minimize the number of samples the robot needs to request from humans, streamlining the learning process. In the present work, we use AL to select the most informative samples for annotation, thus reducing the human labeling effort required to achieve high performance in classifying textures. We also use a sliding-window strategy for extracting features from the sensor time series used in our experiments. Our multi-class dataset (12 textures) challenges traditional AL strategies, since standard techniques cannot control the number of instances per class selected to be labeled. Therefore, we propose a novel class-balancing instance selection algorithm that we integrate with standard AL strategies. Moreover, we evaluate the effect of sliding windows of two time intervals (3 and 6 s) on our AL strategies. Finally, we analyze the performance of AL strategies in our experiments, with and without the balancing algorithm, in terms of F1-score, and observe positive effects on performance when using our proposed data pipeline. Our results show that the training data can be reduced to 70% using an AL strategy, regardless of the machine learning model, while reaching, and in many cases surpassing, baseline performance. Exploring the textures with a 6-s window achieves the best performance, and using Extra Trees produces an average F1-score of 90.21% on the texture classification dataset.
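
A minimal sketch of the class-balancing idea (illustrative only; the paper's exact algorithm and interfaces are not reproduced here): cap how many of the most uncertain instances can come from each predicted class, then fill any remaining budget greedily from the global uncertainty ranking.

import numpy as np

def class_balanced_selection(uncertainties, predicted_labels, n_classes, budget):
    # Pick the `budget` most uncertain samples while spreading the selection
    # evenly across predicted classes. Hypothetical helper, not the authors' code.
    per_class = budget // n_classes
    selected = []
    for c in range(n_classes):
        idx = np.where(predicted_labels == c)[0]
        ranked = idx[np.argsort(-uncertainties[idx])]  # most uncertain first
        selected.extend(ranked[:per_class].tolist())
    # Fill any remaining budget with the globally most uncertain leftovers.
    chosen = set(selected)
    leftovers = [i for i in np.argsort(-uncertainties) if i not in chosen]
    selected.extend(leftovers[: budget - len(selected)])
    return selected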



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

Is “scamperiest” a word? If not, it should be, because this is the scamperiest robot I’ve ever seen.

[ ABS ]

GITAI is pleased to announce that its 1.5-meter-long autonomous dual robotic arm system (S2) has successfully arrived at the International Space Station (ISS) aboard the SpaceX Falcon 9 rocket (NG-20) to conduct an external demonstration of in-space servicing, assembly, and manufacturing (ISAM) while onboard the ISS. The success of the S2 tech demo will be a major milestone for GITAI, confirming the feasibility of this technology as a fully operational system in space.

[ GITAI ]

This work presents a comprehensive study on using deep reinforcement learning (RL) to create dynamic locomotion controllers for bipedal robots. Going beyond focusing on a single locomotion skill, we develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.

And if you want to get exhausted on behalf of a robot, the full 400-meter dash is below.

[ Hybrid Robotics ]

NASA’s Ingenuity Mars Helicopter pushed aerodynamic limits during the final months of its mission, setting new records for speed, distance, and altitude. Hear from Ingenuity chief engineer Travis Brown on how the data the team collected could eventually be used in future rotorcraft designs.

[ NASA ]

BigDog: 15 years of solving mobility problems its own way.

[ Boston Dynamics ]

[Harvard School of Engineering and Applied Sciences] researchers are helping develop resilient and autonomous deep space and extraterrestrial habitations by developing technologies to let autonomous robots repair or replace damaged components in a habitat. The research is part of the Resilient ExtraTerrestrial Habitats institute (RETHi) led by Purdue University, in partnership with [Harvard] SEAS, the University of Connecticut and the University of Texas at San Antonio. Its goal is to “design and operate resilient deep space habitats that can adapt, absorb and rapidly recover from expected and unexpected disruptions.”

[ Harvard SEAS ]

Researchers from Huazhong University of Science and Technology (HUST), in a recent T-RO paper, describe and construct a novel variable-stiffness spherical joint motor that enables dexterous motion and joint compliance in all directions.

[ Paper ]

Thanks, Ram!

We are told that this new robot from HEBI is called “Mark Suckerberg” and that they’ve got a pretty cool application in mind for it, to be revealed later this year.

[ HEBI Robotics ]

Thanks, Dave!

Dive into the first edition of our new Real-World-Robotics class at ETH Zürich! Our students embarked on an incredible journey, creating their human-like robotic hands from scratch. In just three months, the teams designed, built, and programmed their tendon-driven robotic hands, mastering dexterous manipulation with reinforcement learning! The result? A spectacular display of innovation and skill during our grand final.

[ SRL ETHZ ]

Carnegie Mellon researchers have built a system with a robotic arm atop a RangerMini 2.0 robotic cart from AgileX robotics to make what they’re calling a platform for “intelligent movement and processing.”

[ CMU ] via [ AgileX ]

Picassnake is our custom-made robot that paints pictures from music. Picassnake consists of an arm and a head, embedded in a plush snake doll. The robot is connected to a laptop for control and music processing, which can be fed through a microphone or an MP3 file. To open the media source, an operator can use the graphical user interface or place a text QR code in front of a webcam. Once the media source is opened, Picassnake generates unique strokes based on the music and translates the strokes to physical movement to paint them on canvas.

[ Picassnake ]

In April 2021, NASA’s Ingenuity Mars Helicopter became the first spacecraft to achieve powered, controlled flight on another world. With 72 successful flights, Ingenuity has far surpassed its originally planned technology demonstration of up to five flights. On Jan. 18, Ingenuity flew for the final time on the Red Planet. Join Tiffany Morgan, NASA’s Mars Exploration Program Deputy Director, and Teddy Tzanetos, Ingenuity Project Manager, as they discuss these historic flights and what they could mean for future extraterrestrial aerial exploration.

[ NASA ]




Online questionnaires that use crowdsourcing platforms to recruit participants have become commonplace, due to their ease of use and low costs. Artificial intelligence (AI)-based large language models (LLMs) have made it easy for bad actors to automatically fill in online forms, including generating meaningful text for open-ended tasks. These technological advances threaten the data quality for studies that use online questionnaires. This study tested whether text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems. While humans were able to correctly identify the authorship of such text above chance level (76% accuracy), their performance was still below what would be required to ensure satisfactory data quality. Researchers currently have to rely on a lack of interest among bad actors to successfully use open-ended responses as a useful tool for ensuring data quality. Automatic AI detection systems are currently completely unusable. If AI submissions of responses become too prevalent, then the costs associated with detecting fraudulent submissions will outweigh the benefits of online questionnaires. Individual attention checks will no longer be a sufficient tool to ensure good data quality. This problem can only be systematically addressed by crowdsourcing platforms. They cannot rely on automatic AI detection systems and it is unclear how they can ensure data quality for their paying clients.

Introduction: Navigation satellite systems can fail to work or work incorrectly in a number of conditions: signal shadowing, electromagnetic interference, atmospheric conditions, and technical problems. All of these factors can significantly affect the localization accuracy of autonomous driving systems. This emphasizes the need for other localization technologies, such as Lidar.

Methods: The use of the Kalman filter in combination with Lidar can be very effective in various applications due to the synergy of their capabilities. The Kalman filter can improve the accuracy of lidar measurements by taking into account the noise and inaccuracies present in the measurements.

Results: In this paper, we propose a parallel Kalman algorithm in three-dimensional space to speed up the computational speed of Lidar localization. At the same time, the initial localization accuracy of the latter is preserved. A distinctive feature of the proposed approach is that the Kalman localization algorithm itself is parallelized, rather than the process of building a map for navigation. The proposed algorithm allows us to obtain the result 3.8 times faster without compromising the localization accuracy, which was 3% for both cases, making it effective for real-time decision-making.

Discussion: The reliability of this result is confirmed by a preliminary theoretical estimate of the acceleration rate based on Amdahl’s law. Accelerating the Kalman filter with CUDA for Lidar localization can be of significant practical value, especially in real time and in conditions where large amounts of data from Lidar sensors need to be processed.
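
For readers who want the shape of the underlying filter, here is a minimal (serial, NumPy) constant-velocity Kalman predict/update step fused with lidar position fixes. The state layout, noise values, and motion model are illustrative assumptions; the paper's contribution is parallelizing this computation with CUDA, not the filter equations themselves.

import numpy as np

# State: [x, y, z, vx, vy, vz]; the lidar pipeline supplies 3D position fixes.
def make_filter(dt, q=0.01, r=0.05):
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position += velocity * dt
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measurement observes position only
    Q = q * np.eye(6)                             # process noise (assumed)
    R = r * np.eye(3)                             # lidar measurement noise (assumed)
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with a lidar-derived position measurement z (3-vector).
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P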



Just because an object is around a corner doesn’t mean it has to be hidden. Non-line-of-sight imaging can peek around corners and spot those objects, but it has so far been limited to a narrow band of frequencies. Now, a new sensor can help extend this technique from working with visible light to infrared. This advance could help make autonomous vehicles safer, among other potential applications.

Non-line-of-sight imaging relies on the faint signals of light beams that have reflected off surfaces in order to reconstruct images. The ability to see around corners may prove useful for machine vision—for instance, helping autonomous vehicles foresee hidden dangers to better predict how to respond to them, says Xiaolong Hu, the senior author of the study and a professor at Tianjin University in Tianjin, China. It may also improve endoscopes that help doctors peer inside the body.

The light that non-line-of-sight imaging depends on is typically very dim, and until now, the detectors that were efficient and sensitive enough for non-line-of-sight imaging could only detect either visible or near-infrared light. Moving to longer wavelengths might have several advantages, such as dealing with less interference from sunshine, and the possibility of using lasers that are safe around eyes, Hu says.

Now Hu and his colleagues have for the first time performed non-line-of-sight imaging using 1,560- and 1,997-nanometer infrared wavelengths. “This extension in spectrum paves the way for more practical applications,” Hu says.

The researchers imaged several objects with a non-line-of-sight infrared camera, both without [middle column] and with [right column] de-noising algorithms. Tianjin University

In the new study, the researchers experimented with superconducting nanowire single-photon detectors. In each device, a 40-nanometer-wide niobium titanium nitride wire was cooled to about 2 kelvins (about –271 °C), rendering the wire superconductive. A single photon could disrupt this fragile state, generating electrical pulses that enabled the efficient detection of individual photons.

The scientists contorted the nanowire in each device into a fractal pattern that took on similar shapes at various magnifications. This let the sensor detect photons of all polarizations, boosting its efficiency.

The new detector was up to nearly three times as efficient as other single-photon detectors at sensing near- and mid-infrared light. This let the researchers perform non-line-of-sight imaging, achieving a spatial resolution of roughly 1.3 to 1.5 centimeters.

In addition to an algorithm that reconstructed non-line-of-sight images based on multiple scattered light rays, the scientists developed a new algorithm that helped remove noise from their data. When each pixel during the scanning process was given 5 milliseconds to collect photons, the new de-noising algorithm reduced the root mean square error—a measure of its deviation from a perfect image—of reconstructed images by about eightfold.
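
For reference, the root mean square error here is the standard pixel-wise metric: for a reconstruction $I$ and an ideal image $I^{*}$ over $N$ pixels,

\mathrm{RMSE} = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \left( I_i - I_i^{*} \right)^2 },

so an eightfold reduction means the typical per-pixel deviation fell to roughly one-eighth of its previous value.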

The researchers now plan to arrange multiple sensors into larger arrays to boost efficiency, reduce scanning time, and extend the distance over which imaging can take place, Hu says. They would also like to test their device in daylight conditions, he adds.

The scientists detailed their findings 30 November in the journal Optics Express.




Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots are becoming more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful human control over robots by satisfying accountability, responsibility, and transparency. To verify whether variable autonomy approaches truly ensure meaningful human control, the concept should be operationalized to allow its measurement. So far, designers of variable autonomy approaches lack metrics to systematically address meaningful human control.

Methods: Therefore, this qualitative focus group (n = 5 experts) explored quantitative operationalizations of meaningful human control during dynamic task allocation using variable autonomy in human-robot teams for firefighting. This variable autonomy approach requires dynamic allocation of moral decisions to humans and non-moral decisions to robots, using robot identification of moral sensitivity. We analyzed the data of the focus group using reflexive thematic analysis.

Results: Results highlight the usefulness of quantifying the traceability requirement of meaningful human control, and how situation awareness and performance can be used to objectively measure aspects of the traceability requirement. Moreover, results emphasize that team and robot outcomes can be used to verify meaningful human control but that identifying reasons underlying these outcomes determines the level of meaningful human control.

Discussion: Based on our results, we propose an evaluation method that can verify if dynamic task allocation using variable autonomy in human-robot teams for firefighting ensures meaningful human control over the robot. This method involves subjectively and objectively quantifying traceability using human responses during and after simulations of the collaboration. In addition, the method involves semi-structured interviews after the simulation to identify reasons underlying outcomes and suggestions to improve the variable autonomy approach.

To effectively control a robot’s motion, it is common to employ a simplified model that approximates the robot’s dynamics. Nevertheless, discrepancies between the actual mechanical properties of the robot and the simplified model can result in motion failures. To address this issue, this study introduces a pneumatic-driven bipedal musculoskeletal robot designed to closely match the mechanical characteristics of a simplified spring-loaded inverted pendulum (SLIP) model. The SLIP model is widely utilized in robotics due to its passive stability and dynamic properties resembling human walking patterns. A musculoskeletal bipedal robot was designed and manufactured to concentrate its center of mass within a compact body around the hip joint, featuring low leg inertia in accordance with SLIP model principles. Furthermore, we validated that the robot exhibits similar dynamic characteristics to the SLIP model through a sequential jumping experiment and by comparing its performance to SLIP model simulation.
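
For context, here is the standard textbook form of the SLIP stance-phase dynamics (a general formulation, not this paper's specific equations): a point mass $m$ rides on a massless spring leg of rest length $l_0$ and stiffness $k$ planted at foot position $\mathbf{r}_f$, so that

m \, \ddot{\mathbf{r}} = k \left( l_0 - \lVert \mathbf{r} - \mathbf{r}_f \rVert \right) \frac{\mathbf{r} - \mathbf{r}_f}{\lVert \mathbf{r} - \mathbf{r}_f \rVert} - m g \, \hat{\mathbf{z}},

with purely ballistic flight phases between stances. Matching a physical robot to this model means concentrating mass near the hip and keeping leg inertia low, which is exactly the design goal the abstract describes.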

Deep generative models (DGM) are increasingly employed in emergent communication systems. However, their application in multimodal data contexts is limited. This study proposes a novel model that combines multimodal DGM with the Metropolis-Hastings (MH) naming game, enabling two agents to focus jointly on a shared subject and develop common vocabularies. The model proves that it can handle multimodal data, even in cases of missing modalities. Integrating the MH naming game with multimodal variational autoencoders (VAE) allows agents to form perceptual categories and exchange signs within multimodal contexts. Moreover, fine-tuning the weight ratio to favor a modality that the model could learn and categorize more readily improved communication. Our evaluation of three multimodal approaches, mixture-of-experts (MoE), product-of-experts (PoE), and mixture-of-product-of-experts (MoPoE), suggests an impact on the creation of latent spaces, the internal representations of agents. Our results from experiments with the MNIST + SVHN and Multimodal165 datasets indicate that combining the Gaussian mixture model (GMM), PoE multimodal VAE, and MH naming game substantially improved information sharing, knowledge formation, and data reconstruction.

Exoskeletons that assist in ankle plantarflexion can improve energy economy in locomotion. Characterizing the joint-level mechanisms behind these reductions in energy cost can lead to a better understanding of how people interact with these devices, as well as to improved device design and training protocols. We examined the biomechanical responses to exoskeleton assistance in exoskeleton users trained with a lengthened protocol. Kinematics at unassisted joints were generally unchanged by assistance, which has been observed in other ankle exoskeleton studies. Peak plantarflexion angle increased with plantarflexion assistance, which led to increased total and biological mechanical power despite decreases in biological joint torque and whole-body net metabolic energy cost. Ankle plantarflexor activity also decreased with assistance. Muscles that act about unassisted joints also increased activity for large levels of assistance, and this response should be investigated over long-term use to prevent overuse injuries.

Introduction: Patients who are hospitalized may be at a higher risk for falling, which can result in additional injuries, longer hospitalizations, and extra cost for healthcare organizations. A frequent context for these falls is when a hospitalized patient needs to use the bathroom. While it is possible that “high-tech” tools like robots and AI applications can help, adopting a human-centered approach and engaging users and other affected stakeholders in the design process can help to maximize benefits and avoid unintended consequences.

Methods: Here, we detail our findings from a human-centered design research effort to investigate how the process of toileting a patient can be ameliorated through the application of advanced tools like robots and AI. We engaged healthcare professionals in interviews, focus groups, and a co-creation session in order to recognize common barriers in the toileting process and find opportunities for improvement.

Results: In our conversations with participants, who were primarily nurses, we learned that toileting is more than a nuisance for technology to remove through automation. Nurses seem keenly aware and responsive to the physical and emotional pains experienced by patients during the toileting process, and did not see technology as a feasible or welcomed substitute. Instead, nurses wanted tools which supported them in providing this care to their patients. Participants envisioned tools which helped them anticipate and understand patient toileting assistance needs so they could plan to assist at convenient times during their existing workflows. Participants also expressed favorability towards mechanical assistive features which were incorporated into existing equipment to ensure ubiquitous availability when needed without adding additional mass to an already cramped and awkward environment.

Discussion: We discovered that the act of toileting served more than one function, and can be viewed as a valuable touchpoint in which nurses can assess, support, and encourage their patients to engage in their own recovery process as they perform a necessary and normal function of life. While we found opportunities for technology to make the process safer and less burdensome for patients and clinical staff alike, we believe that designers should preserve and enhance the therapeutic elements of the nurse-patient interaction rather than eliminate it through automation.
