Feed aggregator

Can we conceive of machines that can formulate autonomous intentions and make conscious decisions? If so, how would this ability affect their ethical behavior? A few case studies help us understand how advances in the study of artificial consciousness can contribute to creating ethical AI systems.

Successful conversational interaction with a social robot requires not only an assessment of a user’s contribution to an interaction, but also awareness of their emotional and attitudinal states as the interaction unfolds. To this end, our research aims to systematically trigger, and then interpret, human behaviors in order to track different states of potential user confusion, so that systems can be primed to adjust their policies when users enter confusion states. In this paper, we present a detailed human-robot interaction study to prompt, investigate, and eventually detect confusion states in users. The study employs a Wizard-of-Oz (WoZ) design with a Pepper robot to prompt confusion states in task-oriented dialogues in a well-defined manner. The data collected from 81 participants includes audio and visual data, from both the robot’s perspective and the environment, as well as participant survey data. From these data, we evaluated the correlations of induced confusion conditions with multimodal data, including eye gaze estimation, head pose estimation, facial emotion detection, silence duration, and user speech analysis, including emotion and pitch analysis. Analysis shows significant differences in participants’ behaviors across states of confusion based on these signals, as well as a strong correlation between confusion conditions and participants’ own self-reported confusion scores. The paper establishes strong correlations between confusion levels and these observable features, and lays the groundwork for a more complete social- and affect-oriented strategy for task-oriented human-robot interaction. The contributions of this paper include the methodology applied, the dataset, and our systematic analysis.
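
As a rough illustration of the analysis the abstract describes, the sketch below correlates an induced confusion condition with one multimodal feature and with self-reported scores. It is a minimal sketch with synthetic placeholder data; the feature choice, effect sizes, and 7-point score range are assumptions, not the study's actual dataset.

# Minimal sketch (not the authors' pipeline): correlating an induced
# confusion condition with a multimodal feature and with self-reports.
# All numbers below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
condition = rng.integers(0, 2, size=81)                     # 0 = baseline, 1 = confusion-inducing
silence_s = 1.0 + 0.8 * condition + rng.normal(0, 0.3, 81)  # silence duration, seconds
self_report = np.clip(2 + 3 * condition + rng.integers(-1, 2, 81), 1, 7)  # 7-point scale

# Point-biserial correlation: binary induced condition vs. continuous feature.
r, p = stats.pointbiserialr(condition, silence_s)
print(f"silence vs. condition: r={r:.2f}, p={p:.3g}")

# Spearman rank correlation: induced condition vs. ordinal self-report.
rho, p = stats.spearmanr(condition, self_report)
print(f"self-report vs. condition: rho={rho:.2f}, p={p:.3g}")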



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 2 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE

Enjoy today’s videos!

Fourier Intelligence has just announced the mass production of their GR-1 humanoid, and they’ve got at least a dozen of them.

[ Fourier Intelligence ]

Thanks, Ni Tao!

This collaborative work between researchers from the University of Southern Denmark and VISTEC introduces a biomorphic soft robotic skin for a hexapod robot platform, featuring a central pattern generator–based neural controller for generating respiratory-like motions on the skin. The design enables visuo-haptic nonverbal communication between humans and robots and improves the robot’s aesthetics by enhancing its biomorphic qualities.

[ Paper ]

Thanks, Mads!

According to data from 2010, around 1.8 million people in the United States can’t eat on their own. Yet training a robot to feed people presents an array of challenges for researchers. A team led by researchers at the University of Washington created a set of 11 actions a robotic arm can make to pick up nearly any food attainable by fork. In tests with this set of actions, the robot picked up the foods more than 80 percent of the time, which is the user-specified benchmark for in-home use. The small set of actions allows the system to learn to pick up new foods during one meal.

[ UW ]

Thanks, Stefan!

If you watch enough robot videos, you get to know when a robot is being pushed in a way that’s easy to recover from, and when it’s actually being challenged. The end of this video shows IHMC’s Nadia getting pushed sideways against its planted foot, which necessitates a crossover step recovery.

[ Paper ] via [ IHMC ]

Thanks, Robert!

Ayato Kanada, an assistant professor at Kyushu University, wants to build woodpecker-inspired Doc Ock tentacles. And when you’re a professor, you can just do that.

Also, woodpeckers are weird.

[ Ayato Kanada ]

Thanks, Ayato!

Explore Tevel’s joint robotic fruit-harvesting pilot program with Kubota in this video, filmed during the 2023 apple harvest season in the Mazzoni Group’s orchards in Ferrara, Italy. Watch as our autonomous fruit-picking systems operate with precision, skillfully harvesting various apples in the idyllic Italian orchards.

[ Tevel ]

Understanding what’s an obstacle and what’s only obstacle-ish has always been tricky for robots, but Spot is making some progress here.

[ EVORA ]

We tried to play Street Fighter 6 by teleoperating Reachy! Well, it didn’t go as planned, as Antoine won. But it was a pretty epic fight!

[ Pollen Robotics ]

The key assets of a data center are the servers. While most of them are active in the server room, idle and new assets are stored in the IT warehouse. Focusing mainly on this IT warehouse, SeRo automates the inbound and outbound management of the data center’s assets.

[ Naver Labs ]

Humans can be so mean.

[ Flexiv ]

Interesting HRI with the flashing light on Spot here.

[ Boston Dynamics ]

Flying in circles with a big tank of gas really seems like a better job for a robot pilot than for a human one.

[ Boeing ]

On 2 November 2023, at an event hosted by the Swiss Association of Aeronautical Sciences at ETH, Professor Davide Scaramuzza presented a comprehensive overview of our latest advancements in autonomous drone technology aimed at achieving human-level performance.

[ UZH RPG ]




Gait is an important basic function of human beings and an integral part of life. Many mental and physical abnormalities can cause noticeable differences in a person’s gait. Abnormal gait can lead to serious consequences such as falls, limited mobility, and reduced life satisfaction. Gait analysis, which includes joint kinematics, kinetics, and dynamic electromyography (EMG) data, is now recognized as a clinically useful tool that can provide both quantifiable and qualitative information on performance to aid in treatment planning and in evaluating its outcome. With the assistance of new artificial intelligence (AI) technology, the traditional medical environment has undergone great changes. AI has the potential to reshape medicine, making gait analysis more accurate, efficient, and accessible. In this study, we analyzed basic information about gait analysis and AI articles that met the inclusion criteria in the WoS Core Collection database from 1992 to 2022, and the VOSviewer software was used for network visualization and keyword analysis. Through bibliometric and visual analysis, this article systematically introduces the research status of gait analysis and AI. We introduce the application of artificial intelligence in clinical gait analysis, which affects the identification and management of gait abnormalities found in various diseases. Machine learning (ML) and artificial neural networks (ANNs) are the most often utilized AI methods in gait analysis. By comparing the predictive capability of different AI algorithms in published studies, we evaluate their potential for gait analysis in different situations. Furthermore, the current challenges and future directions of gait analysis and AI research are discussed, which will also provide valuable reference information for investigators in this field.
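
To make concrete the kind of ML pipeline the review surveys, here is a minimal sketch of a supervised gait classifier: summary features (e.g., joint-angle and EMG statistics) feeding a standard scikit-learn model. The data are random placeholders, so the printed accuracy is meaningless; only the shape of the pipeline is the point.

# Illustrative only: the shape of a typical ML gait-analysis pipeline.
# Features and labels are random placeholders, not a real gait dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # e.g., 12 joint-kinematic/EMG summary features
y = rng.integers(0, 2, size=200)  # 0 = typical gait, 1 = abnormal gait (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))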

Introduction: Effective control of rehabilitation robots requires considering the distributed and multi-contact point physical human–robot interaction and users’ biomechanical variation. This paper presents a quasi-static model for the motion of a soft robotic exo-digit while physically interacting with an anthropomorphic finger model for physical therapy.

Methods: Quasi-static analytical models were developed to describe the motion of the soft robot, the anthropomorphic finger, and their coupled physical interaction. The intertwining of kinematics and quasi-static motion was studied to model the distributed (multiple-contact-point) interaction between the robot and a human finger model. The anthropomorphic finger was modeled as an articulated multi-rigid-body structure with multi-contact-point interaction. The soft robot was modeled as an articulated hybrid soft-and-rigid model with a constant bending curvature and a constant length for each soft segment. A hyperelastic constitutive model based on Yeoh’s 3rd-order material model was used for the soft elastomer. The developed models were experimentally evaluated for 1) free motion of individual soft actuators and 2) constrained motion of the soft robotic exo-digit and anthropomorphic finger model.

Results and Discussion: Simulation and experimental results were compared for performance evaluations. The theoretical and experimental results were in agreement for free motion, and the deviation from the constrained motion was in the range of the experimental errors. The outcomes also provided an insight into the importance of considering lengthening for the soft actuators.
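
For readers unfamiliar with the hyperelastic model named in the Methods, Yeoh's 3rd-order strain-energy function has the form W = C1*(I1 - 3) + C2*(I1 - 3)^2 + C3*(I1 - 3)^3, where I1 is the first strain invariant. A minimal sketch follows; the material constants are placeholders, not the paper's fitted values.

# Yeoh 3rd-order strain-energy function, as named in the Methods above.
# C1..C3 are placeholder constants (MPa), not the paper's fitted values.
def yeoh_strain_energy(I1, C1=0.11, C2=0.02, C3=0.001):
    """Strain-energy density W as a function of the first invariant I1."""
    x = I1 - 3.0
    return C1 * x + C2 * x**2 + C3 * x**3

def I1_uniaxial(stretch):
    """First invariant for an incompressible uniaxial stretch ratio."""
    return stretch**2 + 2.0 / stretch

for lam in (1.0, 1.2, 1.5):
    print(f"stretch {lam}: W = {yeoh_strain_energy(I1_uniaxial(lam)):.4f}")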



A skeletal robotic hand with working ligaments and tendons can now be 3D-printed in one run. The creepy accomplishment was made possible by a new approach to additive manufacturing that can print both rigid and elastic materials at the same time in high resolution.

The new work is the result of a collaboration between researchers at ETH Zurich in Switzerland and a Massachusetts Institute of Technology spin-out called Inkbit, based in Medford, Mass. The group has devised a new 3D inkjet-printing technique capable of using a wider range of materials than previous devices.

In a new paper in Nature, the group has shown for the first time that the technology can be used to print complex moving devices made of multiple materials in a single print job. These include a bio-inspired robotic hand, a six-legged robot with a grabber, and a pump modeled on the heart.

“What was really exciting for us is that this technology, for the first time, allowed us to print complete functional systems that work right off the print bed,” says Thomas Buchner, a Ph.D. student at ETH Zurich and first author of the paper describing the work.

The new technique operates on principles similar to those of the kind of inkjet printer you might find in an office. Instead of colored inks, though, the printer sprays out resins that harden when exposed to ultraviolet (UV) light, and rather than just printing a single sheet, it builds up 3D objects layer by layer. It’s also capable of printing at extremely high resolution, with voxels—the 3D equivalent of pixels—just a few micrometers across.

Video: “3D Printed Robot Hand Has Working Tendons” [ youtu.be ]

3D inkjet printers aren’t new, but the palette of materials they can use has typically been limited. That’s because each layer inevitably has imperfections, and the standard approach to dealing with this has been to scrape them off or roll them flat. This means that soft or slow-curing materials cannot be used as they will get smeared or squashed.

Inkbit has been working on a workaround to this problem for a number of years. The company has built a printer featuring a platform that moves up and down beneath multiple inkjet units, a UV-curing unit, and a scanning unit. After a layer has been deposited and cured, the scanner creates a depth map of the print surface, which is then compared against the 3D model to work out how to adjust the rate of deposition from the inkjet units to even out any irregularities. Areas that received too much resin on the previous layer receive less on the next, and vice versa.
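
The feedback loop described above reduces to a simple per-location control law: deposit less resin where the last layer came out high, more where it came out low. Below is a toy sketch of that idea; the proportional gain, clipping bounds, and height values are illustrative assumptions, not Inkbit's actual controller.

# Toy version of the depth-map feedback described above. Gains, bounds,
# and heights are illustrative assumptions, not Inkbit's controller.
import numpy as np

def next_layer_deposition(scanned, target, nominal=1.0, gain=0.8):
    """Deposit less where the surface is proud, more where it is low."""
    error = target - scanned                      # positive where under-filled
    deposition = nominal + gain * error           # proportional correction
    return np.clip(deposition, 0.0, 2.0 * nominal)

scanned = np.array([[1.02, 0.98], [1.00, 1.05]])  # measured layer height (a.u.)
target = np.ones((2, 2))                          # height the 3D model expects
print(next_layer_deposition(scanned, target))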

This means the printer doesn’t require any contact with the materials once they’ve been deposited, says Robert Katzschmann, a robotics professor at ETH Zurich who led the research. “That leads to all kinds of benefits, because now you can use chemistries that take longer to polymerize, that take longer to harden out, and that opens up a whole new space of much more useful materials.”

“We can actually now create a structure or a robot in one shot. It might require maybe adding a motor here or there, but the actual complexity of the structure is all there.”
—Robert Katzschmann, ETH Zurich

Previously, Inkbit had been using a scanning approach that could capture images of areas only 2 centimeters across at a time. This process had to be repeated multiple times before all the images were stitched together and analyzed, which significantly slowed down fabrication times. The new technique uses a much faster laser scanning system—the device can now print 660 times as fast as before. In addition, the team has now demonstrated that they can print with elastic polymers called thiol-enes. These materials cure slowly, but they’re much springier and more durable than acrylates, the rubberlike materials that are normally used in commercial 3D inkjet printers.

To demonstrate the potential of the new 3D printing process, the researchers printed a robotic hand. The device features rigid bones modeled on MRI scans of human hands and elastic tendons that can be connected to servos to curl the fingers in toward the palm. Each fingertip also features a thin membrane with a small cavity behind, which is connected to a long tube printed into the structure of the finger. When the finger touches something, the cavity is compressed, causing the pressure inside the tube to rise. This is picked up by a pressure sensor at the end of the tube, and this signal is used to tell the fingers to stop curling once a certain pressure has been reached.
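
The stop-on-contact behavior in that paragraph amounts to a threshold loop: keep commanding small curl increments until the tube pressure crosses a limit. Here is a minimal sketch with a faked sensor; the threshold, step size, and contact angle are hypothetical.

# Sketch of the fingertip's stop condition described above. The threshold,
# step size, and the faked sensor's contact angle are hypothetical.
def curl_finger(read_pressure_kpa, step_deg=2.0, threshold_kpa=12.0,
                max_angle_deg=90.0):
    angle = 0.0
    while angle < max_angle_deg and read_pressure_kpa(angle) < threshold_kpa:
        angle += step_deg            # command the servo one small step further
    return angle                     # angle at which contact stopped the curl

# Fake sensor: pressure starts rising once the fingertip contacts at ~40 deg.
fake_sensor = lambda a: 0.0 if a < 40.0 else (a - 40.0) * 1.5
print(curl_finger(fake_sensor))      # stops shortly after contact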

The researchers used the hand to grip a variety of objects, including a pen and a water bottle, and to touch its thumb to each of its fingertips. Critically, all of the functional parts of the robotic hand, apart from the servos and the pressure sensors, were produced in a single printing job. “What we see as novel about our work is that we can actually now create a structure or a robot in one shot,” says Katzschmann. “It might require maybe adding a motor here or there, but the actual complexity of the structure is all there.”

The researchers also created a pneumatically powered six-legged robot with a gripper that was able to walk back and forth and pick up a box of Tic-Tacs, and a pump modeled on the human heart, featuring one-way valves and internal pressure sensors, that was capable of pumping 2.3 liters of fluid a minute.

Future work will look to further expand the number of materials that the printer can use, says Katzschmann. They are restricted to materials that can be cured using UV light and that aren’t too viscous to work in an inkjet printer. But these could include things like hard epoxies, hydrogels suitable for tissue engineering, or even conductive polymers that could make it possible to print electronic circuits into devices.




This article provides a comprehensive narrative review of physical task-based assessments used to evaluate the multi-grasp dexterity and functional impact of varying control systems in pediatric and adult upper-limb prostheses. Our search returned 1,442 research articles from online databases, of which 25 tests—selected for their scientific rigor, evaluation metrics, and psychometric properties—met our review criteria. We observed that despite significant advancements in the mechatronics of upper-limb prostheses, these 25 assessments are the only validated evaluation methods that have emerged since the first measure in 1948. This not only underscores the lack of a consistently updated, standardized assessment protocol for new innovations, but also reveals an unsettling trend: as technology outpaces standardized evaluation measures, developers will often support their novel devices through custom, study-specific tests. These boutique assessments can potentially introduce bias and jeopardize validity. Furthermore, our analysis revealed that current validated evaluation methods often overlook the influence of competing interests on test success. Clinical settings and research laboratories differ in their time constraints, access to specialized equipment, and testing objectives, all of which significantly influence assessment selection and consistent use. Therefore, we propose a dual testing approach to address the varied demands of these distinct environments. Additionally, we found that almost all existing task-based assessments lack an integrated mechanism for collecting patient feedback, which we assert is essential for a holistic evaluation of upper-limb prostheses. Our review underscores the pressing need for a standardized evaluation protocol capable of objectively assessing the rapidly advancing prosthetic technologies across all testing domains.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE SSRR 2023: 13–15 November 2023, FUKUSHIMA, JAPAN
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA
Cybathlon Challenges: 2 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE

Enjoy today’s videos!

Unitree B2: beyond the limit. Maximum speed of 6 m/s, sustained load of 40 kg, and sustained walking endurance of 5 hours. The comprehensive performance is two to three times that of existing quadruped robots worldwide! Adaptable to all terrains, large load, long-lasting endurance, and super athletic performance! Evolve, evolve, and evolve again!

[ Unitree ]

This shape-changing robot just got a lot smaller. In a new study, engineers at the University of Colorado Boulder debuted mCLARI, a 2-centimeter-long modular robot that can passively change its shape to squeeze through narrow gaps in multiple directions. It weighs less than a gram but can support over three times its body weight as an additional payload.

[ CU Boulder ]

Researchers at CMU used fossil evidence to engineer a soft robotic replica of pleurocystitids, a marine organism that existed nearly 450 million years ago and is believed to be one of the first echinoderms capable of movement using a muscular stem.

[ CMU ]

Stretch has moved over a million customer boxes in under a year, improving predictability and preventing injuries. But how did we get there? Discover how we put our expertise in robotics research to use designing, testing, and deploying a warehouse robot. Starting from the technological building blocks of Atlas, Stretch has the mobility, power, and intelligence to automate the industry’s toughest challenges.

[ Boston Dynamics ]

What do the robots do on Halloween after everyone leaves? Join the Ingenuity Labs robots on their trick-or-treating adventure. Happy Halloween!

[ Queen’s University ]

Thanks, Josh!

FreeLander is a versatile, modular legged-robot hardware platform with adaptive bio-inspired neural control. The platform can be used to construct different bio-inspired legged robots. Each module consists of two legs designed to function as a two-legged robot that can walk on a metal pipe using electromagnetic feet. Multiple modules can be combined into six-legged and eight-legged robots that walk on difficult terrain, such as rough ground, slopes, random stepfields, gravel, and grass, and even inside pipes.

[ VISTEC ]

Thanks, Poramate!

Energy Robotics hopes you had a Happy Halloween!

[ Energy Robotics ]

This work presents a camera model for refractive media such as water and its application in underwater visual-inertial odometry. The model is self-calibrating in real time and requires neither known correspondences nor calibration targets.

[ ARL ]
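
As background to the ARL work above (this is not the paper's actual model): the core effect any refractive camera model must account for is Snell's-law bending where rays cross the air-water interface. A minimal sketch with standard refractive indices:

# Snell's law at an air-water interface, the physical effect a refractive
# camera model must capture. Not the ARL paper's actual model.
import numpy as np

def refract(theta_air_deg, n_air=1.000, n_water=1.333):
    """Ray angle in water for a given angle of incidence in air."""
    s = n_air / n_water * np.sin(np.radians(theta_air_deg))
    return np.degrees(np.arcsin(s))

for t in (10.0, 30.0, 50.0):
    print(f"{t:.0f} deg in air -> {refract(t):.1f} deg in water")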

Humans naturally exploit haptic feedback during contact-rich tasks like loading a dishwasher or stocking a bookshelf. Current robotic systems focus on avoiding unexpected contact, often relying on strategically placed environment sensors. In this paper we train a contact-exploiting manipulation policy in simulation for the contact-rich household task of loading plates into a slotted holder, which transfers without any fine-tuning to the real robot.

[ Paper ]

Thanks, Samarth!

Presented herewith is another PAPRAS (Plug-And-Play Robotic Arm System) add-on engineered to augment the capabilities of Boston Dynamics’ quadrupedal robot, Spot. The system integrates two PAPRAS units onto Spot, drawing inspiration from Orthrus, the two-headed dog of Greek mythology.

[ KIMLAB ]

Marwa Eldiwiny is a PhD student and Early Stage Researcher (ESR) at the Vrije Universiteit Brussel whose current research focuses on modelling and simulating self-healing soft materials for industrial applications. Her master’s thesis was ‘UAV anti-stealth technology for safe operation’. She has worked as a research engineer at Inria Lille-Nord Europe, a research scholar at the Tartu Institute of Technology, and a lecturer in the Mechatronics and Industrial Robotics Programme at Minia University, Egypt. Eldiwiny hosts the IEEE RAS Soft Robotics Podcast, where researchers from academia and industry discuss recent developments in soft robotics.

[ SMART ITN ]

3 labs. Different robotic solutions of the future. Meet CSAIL’s machine friends.

[ MIT CSAIL ]

This UPenn GRASP SFI Seminar is by E Farrell Helbling at Cornell, on Autonomy for Insect Scale Robots.

Countless science fiction works have set our expectations for small, mobile, autonomous robots for use in a broad range of applications. The ability to move through highly dynamic and complex environments can expand capabilities in search-and-rescue operations and safety-inspection tasks. These robots can also form a diverse collective to provide more flexibility than a single multifunctional robot. I will present my work on the analysis of control and power requirements for this vehicle, as well as results on the integration of onboard sensors. I will also discuss recent results that culminate nearly two decades of effort to create a power-autonomous insect-scale vehicle. Lastly, I will outline how this design strategy can be readily applied to other micro and bioinspired autonomous robots.

[ UPenn ]




Colorectal cancer (CRC) is the third most common cancer worldwide and responsible for approximately 1 million deaths annually. Early screening is essential to increase the chances of survival, and it can also reduce the cost of treatment for healthcare centres. Colonoscopy is the gold standard for CRC screening and treatment, but it has several drawbacks, including difficulty in manoeuvring the device, patient discomfort, and high cost. Soft endorobots, small and compliant devices that can reduce the force exerted on the colonic wall, offer a potential solution to these issues. However, controlling these soft robots is challenging due to their deformable materials and the limitations of mathematical models. In this Review, we discuss model-free and model-based approaches for controlling soft robots that can potentially be applied to endorobots for colonoscopy. We highlight the importance of selecting appropriate control methods based on various parameters, such as sensor and actuator solutions. This Review aims to contribute to the development of smart control strategies for soft endorobots that can enhance the effectiveness and safety of robotics in colonoscopy. These strategies can be defined based on the available information about the robot and its surrounding environment, control demands, the impact of the mechanical design, and characterization data based on calibration.
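
As a concrete reference point for “model-free” control in the Review’s sense, the simplest instance is a PID loop, which needs no analytic model of the soft body, only an error signal. A minimal sketch; the gains, time step, and setpoint values are made up.

# The simplest "model-free" controller: a PID loop driven only by error.
# Gains, time step, and setpoint values are made up for illustration.
def pid_step(error, state, kp=1.2, ki=0.1, kd=0.05, dt=0.02):
    state["i"] += error * dt                          # accumulate integral term
    deriv = (error - state["e"]) / dt                 # finite-difference derivative
    state["e"] = error
    return kp * error + ki * state["i"] + kd * deriv  # actuator command

state = {"i": 0.0, "e": 0.0}
for measured in (0.0, 12.0, 24.0):                    # tip bending angle, degrees
    print(f"command: {pid_step(30.0 - measured, state):.2f}")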

Foldable wings serve as an effective solution for reducing the size of micro air vehicles (MAVs) during non-flight phases, without compromising the gliding capacity provided by the wing area. Among insects, earwigs exhibit the highest folding ratio in their wings. Inspired by the intricate folding mechanism in earwig hindwings, we aimed to develop artificial wings with similarly high folding ratios. By leveraging an origami hinge, which is a compliant mechanism, we successfully designed and prototyped wings capable of opening and folding in the wind, which helps reduce the surface area by a factor of seven. The experimental evaluation involved measuring the lift force generated by the wings under Reynolds numbers less than 2.2 × 10^4. When in the open position, our foldable wings demonstrated increased lift force proportional to higher wind speeds. Properties such as wind responsiveness, efficient folding ratios, and practical feasibility highlight the potential of these wings for diverse applications in MAVs.
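
Some back-of-envelope numbers for the regime the abstract quotes (Re < 2.2 × 10^4), using the standard definitions Re = rho*v*c/mu and L = 0.5*rho*v^2*S*CL. Every wing value below is an assumption for illustration, not a parameter from the paper.

# Back-of-envelope check of the quoted regime (Re < 2.2e4). All wing
# values are assumptions for illustration, not the paper's parameters.
rho = 1.225       # air density, kg/m^3
mu = 1.81e-5      # dynamic viscosity of air, Pa*s
v = 5.0           # wind speed, m/s (assumed)
chord = 0.06      # wing chord, m (assumed)
S = 0.01          # open wing area, m^2 (assumed); folded area ~ S / 7
CL = 0.8          # lift coefficient (assumed)

Re = rho * v * chord / mu             # ~2.0e4, inside the tested regime
lift = 0.5 * rho * v**2 * S * CL      # grows with v^2, as the tests showed
print(f"Re = {Re:.3g}, lift = {lift * 1000:.1f} mN")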



Although robots are already in warehouses, shuffling small items between bins for shipping or storage, they have yet to take over the job of lugging big, heavy things. And that’s just where they could be of the most use, because lugging is hard for people to do.

Several companies are working on the problem, and there’s likely to be plenty of room for all of them, because the opportunity is enormous. There are a lot of trailers out there that need to be unloaded. Arguably the most interesting approach comes from Dextrous Robotics, which has a robot that moves boxes around with a giant pair of chopsticks.

We first wrote about Dextrous Robotics in 2021, when they were working on a proof of concept using Franka Panda robotic arms. Since then, the concept has been proved successfully, and Dextrous has scaled up to a much larger robot that can handle hundreds of heavy boxes per hour with its chopstick manipulators.

“The chopstick type of approach is very robust,” Dextrous CEO Evan Drumwright tells us. “We can carry heavy payloads and small items with very precise manipulation. Independently posable chopsticks permit grasping a nearly limitless variety of objects with a straightforward mechanical design. It’s a real simplification of the grasping problem.”

The video above shows the robot moving about 150 boxes per hour in a scenario that simulates unloading a packed trailer, but the system is capable of operating much faster. The demonstration was done without any path optimization. In an uncluttered environment, Dextrous has been able to operate the system at 900 boxes per hour, about twice as fast as the 300 to 500 boxes per hour that a person can handle.

Of course, the heavier the box, the harder it is for a person to maintain that pace. And once a box gets heavier than about 20 kilograms, it takes two people to move it. At that point, labor becomes far less efficient. On paper, the hardware of Dextrous’s robot is capable of handling 40 kg boxes at an acceleration of up to 3 g, and up to 65 kg at a lower acceleration. That would equate to 2,000 boxes per hour. True, this is just a theoretical maximum, but it’s what Dextrous is working toward.
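
For a sense of scale, here is the rough arithmetic behind those figures; the force calculation and the cycle-time framing are my own back-of-envelope assumptions, not Dextrous’s published analysis.

# Back-of-envelope arithmetic for the spec quoted above. The framing is
# an assumption, not Dextrous Robotics' published analysis.
g = 9.81                       # m/s^2
m = 40.0                       # box mass, kg
a = 3.0 * g                    # quoted peak acceleration
peak_force = m * (a + g)       # worst case: accelerating the box straight up
print(f"peak force ~ {peak_force / 1000:.1f} kN")

boxes_per_hour = 2000          # quoted theoretical maximum
print(f"cycle time ~ {3600 / boxes_per_hour:.1f} s per box")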

If the only problem were moving heavy boxes quickly, robots would have solved it long ago. However, before you can move the box you first have to pick it up, and that complicates matters. Other robotics companies use suction to pick things up. Dextrous alone favors giant chopsticks.

Suction does have the advantage of being somewhat easier to handle on the perception and planning side: Find a flat surface, stick to it, and there you go. That approach assumes you can find a flat surface, but the well-ordered stacks of boxes seen in most demo videos aren’t necessarily what you’ll get in a warehouse. Suction has other problems: It typically has a payload limit of 20 kg or so, it doesn’t work very well with odd-size boxes, and it has trouble operating in temperatures below 10 °C. Suction systems also pull in a lot of dirt, which can cause mechanical problems.

A suction system typically attaches to just one surface, and that limits how fast it can move without losing its grip or tearing open a box. The Dextrous chopsticks can support a box on two sides. But making full use of this capability adds difficulty to the perception and planning side.

“Just getting to this point has been hardcore,” Drumwright says. “We’ve had to get to a level of precision in the perception system and the manipulation to be able to understand what we’re picking with high confidence. Our initial engineering hurdle has been very, very high.”

Manipulating rigid objects with rigid manipulators like chopsticks has taken Dextrous several years to perfect. “Figuring out how to get a robot to perceive and understand its environment, figure out the best item to pick, and then manipulating that item and doing all that in a reasonable length of time—that is really, really hard,” Drumwright tells us. “I’m not going to say we’ve solved that 100 percent, but it’s working very well. We still have plenty of stuff left to do, but the proof of concept of actually getting a robot that does contact-based manipulation to pick variably sized objects out of an unconstrained environment in a reasonable time period? We’ve solved that.”

Here’s another video showing a sustained box-handling sequence; if you watch carefully, you’ll notice all kinds of precise little motions as the robot uses its manipulators to slightly reposition boxes to give it the best grasp:

All of those motions make the robot look almost like it’s being teleoperated, but Drumwright assures me that it’s completely autonomous. It turns out that teleoperation doesn’t work very well in this context. “We looked at doing teleop, and we actually could not do it. We found that our controllers are so precise that we could not actually make the system behave better through teleop than it did autonomously.” As to how the robot decides to do what it does, “I can’t tell you exactly where these behaviors came from,” Drumwright says. “Let’s just call it AI. But these are all autonomous manipulation behaviors, and the robot is able to utilize this diverse set of skills to figure out how to pick every single box.”

You may have noticed that the boxes in the videos are pretty beat up. That’s because the robot has been practicing with those boxes for months, but Dextrous is mindful of the fact that care is necessary, says Drumwright. “One of the things that we were worried about from the very beginning was, how do we do this in a gentle way? But our newest version of the robot has the sensitivity to be very gentle with the boxes.”

I asked Drumwright what would be the most difficult object for his robot to pick up. I suggested a bowling ball (heavy, slippery, spherical). “Challenging, but by no means impossible,” was his response, citing research from Siddhartha Srinivasa at the University of Washington showing that a robot with chopsticks can learn to do dynamic fine manipulation of spherical objects. Dextrous isn’t above cheating slightly, though, by adding a thin coating of hard rubber to the chopsticks’ end effectors to add just a tiny bit of compliance—not enough to mess with planning or control, but enough to make grasping some tricky objects a little easier.

By a year ago, Dextrous had shown that it could move boxes at high speeds under limited scenarios. For the past year, it has been making sure that the system can handle the full range of scenarios that it’s likely to encounter in warehouses. Up next is combining those two things—cranking the speed back up while still working reliably and autonomously.

“On the manipulation side, the system is fully autonomous,” Drumwright says. “We currently have humans involved in driving the robot into the container and then joysticking it forward once it’s picked all that it can reach, but we’re making that fully autonomous, too.” And the robot has so far been quite reliable, requiring little more than lubrication.

According to Drumwright, the biggest challenge on the business side at this point is simply manufacturing enough robots, since the company builds the hardware in-house. The remaining question is how long it will take to make the transition from experiment to product. The company is starting a few commercial pilots, and Drumwright says the thing that’s slowing them down the most is building enough robots to keep up with demand.

“We’ve solved all of the hardest technical problems,” he says. “And now, it’s the business part.”




The incessant progress of robotic technology and the rationalization of human labor raise high expectations in society, but also resentment and even fear. In this paper, we present a quantitative normalized comparison of performance, to shine a light on the pressing question, “How close is the current state of humanoid robotics to outperforming humans in their typical functions (e.g., locomotion, manipulation) and their underlying structures (e.g., actuators/muscles) in human-centered domains?” This is the most comprehensive comparison in the literature so far. Most state-of-the-art robotic structures required for visual, tactile, or vestibular perception outperform their human counterparts at the cost of slightly higher mass and volume. Electromagnetic and fluidic actuation outperform human muscles with respect to speed, endurance, force density, and power density, excluding components for energy storage and conversion. Artificial joints and links can compete with the human skeleton. In contrast, the comparison of locomotion functions shows that robots trail behind in energy efficiency, operational time, and transportation costs. Robots are capable of obstacle negotiation, object manipulation, swimming, playing soccer, and vehicle operation. Despite the impressive advances of humanoid robots in the last two decades, current robots do not yet reach the dexterity and versatility needed to cope with more complex manipulation and locomotion tasks (e.g., in confined spaces). We conclude that state-of-the-art humanoid robotics is far from matching the dexterity and versatility of human beings. Despite their outperforming technical structures, robots’ functions remain inferior to human ones, even for tethered robots that could place heavy auxiliary components off-board. The persistent advances in robotics suggest that this gap will continue to narrow.

Multi-robot cooperative control has been extensively studied using model-based distributed control methods. However, such control methods rely on sensing and perception modules in a sequential pipeline design, and the separation of perception and controls may cause processing latencies and compounding errors that affect control performance. End-to-end learning overcomes this limitation by implementing direct learning from onboard sensing data, with control commands output to the robots. Challenges exist in end-to-end learning for multi-robot cooperative control, and previous results are not scalable. We propose in this article a novel decentralized cooperative control method for multi-robot formations using deep neural networks, in which inter-robot communication is modeled by a graph neural network (GNN). Our method takes LiDAR sensor data as input, and the control policy is learned from demonstrations that are provided by an expert controller for decentralized formation control. Although it is trained with a fixed number of robots, the learned control policy is scalable. Evaluation in a robot simulator demonstrates the triangular formation behavior of multi-robot teams of different sizes under the learned control policy.
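
Below is a bare-bones sketch of the graph-message-passing step such an architecture relies on: each robot mixes its own features with an aggregate of its neighbors’, using only the local communication graph, which is why the learned layer transfers to teams of different sizes. The dimensions, weights, and mean aggregation are illustrative assumptions, not the article’s trained network.

# Bare-bones GNN message-passing step of the kind the article describes.
# Dimensions, weights, and mean aggregation are illustrative assumptions.
import numpy as np

def gnn_layer(X, A, W_self, W_nbr):
    """X: (n, d) robot features; A: (n, n) adjacency, 1 = within comms range."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    nbr_mean = (A @ X) / deg                             # average neighbor features
    return np.tanh(X @ W_self + nbr_mean @ W_nbr)        # updated embeddings

rng = np.random.default_rng(0)
n, d = 4, 8                                   # 4 robots, 8-dim LiDAR embeddings
X = rng.normal(size=(n, d))
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = gnn_layer(X, A, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(H.shape)  # (4, 8): one embedding per robot, from local information only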



When IEEE Spectrum editors are putting together an issue of the magazine, a story on the website, or an episode of a podcast, we try to facilitate dialogue about technologies, their development, and their implications for society and the planet. We feature expert voices to articulate technical challenges and describe the engineering solutions they’ve devised to meet them.

So when Senior Editor Evan Ackerman cooked up a concept for a robotics podcast, he leaned hard into that idea. Ackerman, the world’s premier robotics journalist, talks with roboticists every day, and turning those conversations into a podcast is usually a relatively straightforward process. But Ackerman wanted to try something a little bit different: bringing two roboticists together and just getting out of the way.

“The way the Chatbot podcast works is that we invite a couple of robotics experts to talk with each other about a topic they have in common,” Ackerman explains. “They come up with the questions, not us, which results in the kinds of robotics conversations you won’t hear anywhere else—uniquely informative but also surprising and fun.”

Each episode focuses on a general topic the roboticists have in common, but once they get to chatting, the guests are free to ask each other about whatever interests them. Ackerman is there to make sure they don’t wander too far into the weeds, because we want everyone to be able to enjoy these conversations. “But otherwise, I’ll mostly just be listening,” Ackerman says, “because I’ll be as excited as you are to see how each episode unfolds.”

We think this unique format gives the listener the inside scoop on aspects of robotics that only the roboticists themselves could get each other to reveal. Our first few episodes are already live. They include Skydio CEO Adam Bry and the University of Zurich professor Davide Scaramuzza talking about autonomous drones, Labrador Systems CEO Mike Dooley and iRobot chief technology officer Chris Jones on the challenges domestic robots face in unpredictable dwellings, and choreographer Monica Thomas and Amy LaViers of the Robotics, Automation, and Dance (RAD) Lab discussing how to make Boston Dynamics’ robot dance.

We have plenty more Chatbot episodes in the works, so please subscribe on whatever podcast service you like, listen and read the transcript on our website, or watch the video versions on the Spectrum YouTube channel. While you’re at it, subscribe to our other biweekly podcast, Fixing the Future, where we talk with experts and Spectrum editors about sustainable solutions to climate change and other topics of interest. And we’d love to hear what you think about our podcasts: what you like, what you don’t like, and especially who you’d like to hear on future episodes.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE SSRR 2023: 13–15 November 2023, FUKUSHIMA, JAPAN
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 2 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE

Enjoy today’s videos!

An overview of ongoing work by Hello Robot, UIUC, UW, and Robots for Humanity to empower Henry Evans’ independence through the use of the mobile manipulator Stretch.

And of course, you can read more about this project in this month’s issue of Spectrum magazine.

[ Hello Robot ]

At KIMLAB, we have a unique way of carving Halloween pumpkins! Our MOMO (Mobile Object Manipulation Operator) is equipped with PAPRAS arms featuring prosthetic hands, allowing it to use human tools.

[ KIMLAB ]

This new haptic system from CMU seems actually amazing, although watching the haptic arrays pulse is wigging me out a little bit for some reason.

[ Fluid Reality Group ]

We are excited to introduce you to the Dingo 1.5, the next generation of our popular Dingo platform! With enhanced hardware and software updates, the Dingo 1.5 is ready to tackle even more challenging tasks with ease.

[ Clearpath ]

A little bit of a jump scare here from ANYbotics.

[ ANYbotics ]

Happy haunting from Boston Dynamics!

[ Boston Dynamics ]

I’m guessing this is some sort of testing setup, but it’s low-key terrifying.

[ Flexiv ]

KUKA has teamed up with Augsburger Puppenkiste to build a mobile show cell in which two robots do the work of the puppeteers.

[ KUKA ]

In this video, we showcase the Advanced Grasping premium software package’s capabilities. We demonstrate how TIAGo collects objects and places them, how the gripper adapts to different shapes, and the TIAGo robot’s perception and manipulation capabilities.

[ PAL Robotics ]

HEBI Robotics produces a platform for robot development. Our long-term vision is to make it easy and practical for workers, technicians, farmers, and others to create robots as needed. Today the platform is used by researchers around the world, and HEBI is using it to solve challenging automation tasks related to inspection and maintenance.

[ HEBI Robotics ]

Folded robotics is a rapidly growing field that is revolutionizing how we think about robots. Taking inspiration from the ancient art of origami results in thinner, lighter, more flexible autonomous robots.

[ NSF ]

“Can I Have a Pet T-Rex?” is a short interdisciplinary portrait documentary featuring paleontologist and Kod*lab postdoc Aja Mia Carter, along with Kod*lab robotics researchers Wei-Hsi Chen, a postdoc, and J. Diego Caporale, a Ph.D. student. Dr. Chen applies the art of origami to make a hopping robot, while Mr. Caporale adds a degree of freedom to the spine of a quadruped robot to interrogate ideas about twisting and locomotion. An expert in the evolution of tetrapod spines from 380 million years ago, Dr. Carter is still motivated by her childhood dream of a pet T-Rex, but how can these robotics researchers get her closer to her vision?

[ Kodlab ]
