Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

USC, UPenn, Texas A&M, Oregon State, Georgia Tech, Temple University, and NASA Johnson Space Center are teaching dog-like robots to navigate craters of the moon and other challenging planetary surfaces in research funded by NASA.

[ USC ]

AMBIDEX is a revolutionary robot that is fast, lightweight, and capable of human-like manipulation. We have added a sensor head, a torso, and a waist to greatly expand its range of movement. Compared with the previous arm-centered version, the overall impression and balance have completely changed.

[ Naver Labs ]

It still needs a lot of work, but the six-armed pollinator Stickbug can now autonomously navigate and pollinate flowers in a greenhouse.

I think “needs a lot of work” really means “needs a couple more arms.”

[ Paper ]

Experience the future of robotics as UBTECH’s humanoid robot integrates with Baidu’s ERNIE through AppBuilder! Witness robots [that] understand language and autonomously perform tasks like folding clothes and object sorting.

[ UBTECH ]

I know the fins on this robot are for walking underwater rather than on land, but watching it move, I feel like it’s destined to evolve into something a little more terrestrial.

[ Paper ] via [ HERO Lab ]

iRobot has a new Roomba that vacuums and mops—and at $275, it’s a pretty good deal.

Also, if you are a robot vacuum owner, please, please remember to clean the poor thing out from time to time. Here’s how to do it with a Roomba:

[ iRobot ]

The video demonstrates the wave-basin testing of a 43-kilogram (95-pound) amphibious cycloidal-propeller unmanned underwater vehicle (Cyclo-UUV) developed at the Advanced Vertical Flight Laboratory, Texas A&M University. The use of cyclo-propellers allows 360-degree thrust vectoring, for more robust dynamic controllability than UUVs with conventional screw propellers.

[ AVFL ]
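
Here is what that thrust-vectoring claim means in practice. A minimal numpy sketch, assuming (simplistically) that each cyclo-propeller can be commanded to produce a thrust of a given magnitude at an arbitrary azimuth in the vehicle's body plane; the rotor count and numbers are illustrative, not taken from the AVFL vehicle:

```python
import numpy as np

def net_thrust(magnitudes, azimuths_rad):
    """Sum planar thrust vectors from several cyclo-propellers.

    Each cycloidal rotor is modeled as producing a thrust vector of
    commanded magnitude at a commanded azimuth -- the per-rotor direction
    freedom that a fixed screw propeller lacks.
    """
    vecs = [m * np.array([np.cos(a), np.sin(a)])
            for m, a in zip(magnitudes, azimuths_rad)]
    return np.sum(vecs, axis=0)

# Two 10-newton rotors: the resultant can be aimed anywhere in 360 degrees
# by choosing the two azimuths, without reorienting the vehicle.
print(net_thrust([10.0, 10.0], [np.deg2rad(45.0), np.deg2rad(135.0)]))
```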

Sony is still upgrading Aibo with new features, like the ability to listen to your terrible music and dance along.

[ Aibo ]

Operating robots precisely and at high speeds has been a long-standing goal of robotics research. To enable precise and safe dynamic motions, we introduce a four degree-of-freedom (DoF) tendon-driven robot arm. Tendons allow placing the actuation at the base to reduce the robot’s inertia, which we show significantly reduces peak collision forces compared to conventional motor-driven systems. Pairing our robot with pneumatic muscles allows generating high forces and highly accelerated motions, while benefiting from impact resilience through passive compliance.

[ Max Planck Institute ]
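
The link between base-mounted actuation, lower inertia, and lower peak collision force is easy to see with a textbook impact model. A minimal sketch, assuming an idealized elastic impact in which a moving effective mass hits a linear contact stiffness; the numbers are made up for illustration and are not from the paper:

```python
import numpy as np

def peak_impact_force(v, k_contact, m_effective):
    """Peak force of an idealized elastic impact.

    Energy balance 0.5*m*v**2 = 0.5*k*x**2 gives peak deflection
    x = v*sqrt(m/k), hence peak force F = k*x = v*sqrt(k*m): halving
    the moving mass cuts the peak force by about 30 percent.
    """
    return v * np.sqrt(k_contact * m_effective)

v = 2.0    # link speed at impact, m/s (assumed)
k = 2.0e4  # contact stiffness, N/m (assumed)
print(peak_impact_force(v, k, m_effective=2.0))  # motors carried in the link
print(peak_impact_force(v, k, m_effective=0.4))  # tendon-driven, motors at base
```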

Rovers on Mars have previously been caught in loose soils, and turning the wheels dug them deeper, just like a car stuck in sand. To avoid this, Rosalind Franklin has a unique wheel-walking locomotion mode to overcome difficult terrain, as well as autonomous navigation software.

[ ESA ]

Cassie is able to walk on sand, gravel, and rocks inside the Robot Playground at the University of Michigan.

Aww, they stopped before they got to the fun rocks.

[ Paper ] via [ Michigan Robotics ]

Not bad for 2016, right?

[ Namiki Lab ]

MOMO has learned the Bam Yang Gang dance moves with its hand dexterity. :) By analyzing 2D dance videos, we extract detailed hand skeleton data, allowing us to recreate the moves in 3D using a hand model. With this information, MOMO replicates the dance motions with its arm and hand joints.

[ RILAB ] via [ KIMLAB ]
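
The retargeting step in that pipeline boils down to turning lifted 3D keypoints into joint angles the robot can track. A minimal sketch with hypothetical keypoints; the actual MOMO pipeline and hand model are more involved:

```python
import numpy as np

def interior_angle(p_parent, p_joint, p_child):
    """Interior angle at a joint defined by three 3D keypoints (radians).

    For a finger, flexion is pi minus this angle: a straight finger gives
    an interior angle of pi, i.e. zero flexion.
    """
    u = p_parent - p_joint
    v = p_child - p_joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Hypothetical lifted keypoints for one finger: wrist -> MCP -> PIP.
wrist = np.array([0.00, -0.08, 0.00])
mcp = np.array([0.00, 0.00, 0.00])
pip_ = np.array([0.00, 0.04, -0.02])
flexion = np.pi - interior_angle(wrist, mcp, pip_)
print(np.degrees(flexion))  # angle commanded to the corresponding hand joint
```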

This UPenn GRASP SFI Seminar is from Eric Jang at 1X Technologies, on “Data Engines for Humanoid Robots.”

1X’s mission is to create an abundant supply of physical labor through androids that work alongside humans. I will share some of the progress 1X has been making towards general-purpose mobile manipulation. We have scaled up the number of tasks our androids can do by combining an end-to-end learning strategy with a no-code system to add new robotic capabilities. Our Android Operations team trains their own models on the data they gather themselves, producing an extremely high-quality “farm-to-table” dataset that can be used to learn extremely capable behaviors. I’ll also share an early preview of the progress we’ve been making towards a generalist “World Model” for humanoid robots.

[ UPenn ]

This Microsoft Future Leaders in Robotics and AI Seminar is from Chahat Deep Singh at the University of Maryland, on “Minimal Perception: Enabling Autonomy in Palm-Sized Robots.”

The solution to robot autonomy lies at the intersection of AI, computer vision, computational imaging, and robotics—resulting in minimal robots. This talk explores the challenge of developing a minimal perception framework for tiny robots (less than 6 inches) used in field operations such as space inspections in confined spaces and robot pollination. Furthermore, we will delve into the realm of selective perception, embodied AI, and the future of robot autonomy in the palm of your hands.

[ UMD ]



When we think about robotic manipulation, the default is usually to think about grippers—about robots using manipulators (like fingers or other end effectors) to interact with objects. For most humans, though, interacting with objects can be a lot more complicated, and we use whatever body parts are convenient to help us deal with objects that are large or heavy or awkward.

This somewhat constrained definition of robotic manipulation isn’t robotics’ fault, really. The word “manipulation” itself comes from the Latin for getting handsy with stuff, so there’s a millennium or two’s worth of hand-related inertia behind the term. The Los Altos, Calif.-based Toyota Research Institute (TRI) is taking a more expansive view with its new humanoid, Punyo, which uses its soft body to help it manipulate objects that would otherwise be pretty much impossible to manage with grippers alone.

“An anthropomorphic embodiment allows us to explore the complexities of social interactions like physical assistance, non-verbal communication, intent, predictability, and trust, to name just a few.” —Alex Alspach, Toyota Research Institute (TRI)

Punyo started off as just a squishy gripper at TRI, but the idea was always to scale up to a big squishy humanoid, hence this concept art of a squishified T-HR3:

This concept image shows what Toyota’s T-HR3 humanoid might look like when bubble-ized. Credit: TRI

“We use the term ‘bubble-ized,’” says Alex Alspach, Tech Lead for Punyo at TRI. Alspach tells us that the concept art above doesn’t necessarily reflect what the Punyo humanoid will eventually look like, but “it gave us some physical constraints and a design language. It also reinforced the idea that we are after general hardware and software solutions that can augment and enable both future and existing robots to take full advantage of their whole bodies for manipulation.”

This version of Punyo isn’t quite at “whole” body manipulation, but it can get a lot done using its arms and chest, which are covered with air bladders that provide both sensing and compliance:

Many of those motions look very human-like, because this is how humans manipulate things. Not to throw too much shade at all those humanoid warehouse robots, but as the video above points out, lifting things with just our hands outstretched in front of us is not how humans do it, because using other parts of our bodies for extra support makes lifting easier. This is not a trivial problem for robots, though: interactions through rigid point contacts (the way most robotic manipulators handle the world) are fairly well understood, but once you throw big squishy surfaces into the mix, along with big squishy objects, it’s just not something most robots are ready for.

“A soft robot does not interact with the world at a single point.” —Russ Tedrake, TRI

“Current robot manipulation evolved from big, strong industrial robots moving car parts and big tools with their end effectors,” Alspach says. “I think it’s wise to take inspiration from the human form—we are strong enough to perform most everyday tasks with our hands, but when a big, heavy object comes around, we need to get creative with how we wrap our arms around it and position our body to lift it.”

Robots are notorious for lifting big and heavy objects, primarily by manipulating them with robot-y form factors in robot-y ways. So what’s so great about the human form factor, anyway? This question goes way beyond Punyo, of course, but we wanted to get the Punyo team’s take on humanoids, and we tossed a couple more questions at them just for fun.

IEEE Spectrum: So why humanoids?

Alspach: The humanoid robot checks a few important boxes. First of all, the environments we intend to work in were built for humans, so the humanoid form helps a robot make use of the spaces and tools around it. Independently, multiple teams at TRI have converged on bi-manual systems for tasks like grocery shopping and food preparation. A chest between these arms is a simple addition that gives us useful contact surfaces for manipulating big objects, too. Furthermore, our Human-Robot Interaction (HRI) team has done, and continues to do, extensive research with older adults, the people we look forward to helping the most. An anthropomorphic embodiment allows us to explore the complexities of social interactions like physical assistance, non-verbal communication, intent, predictability, and trust, to name just a few.

“We focus not on highly precise tasks but on gross, whole-body manipulation, where robust strategies help stabilize and control objects, and a bit of sloppiness can be an asset.” —Alex Alspach, TRI

Does having a bubble-ized robot make anything more difficult for you?

Russ Tedrake, VP of Robotics Research: If you think of your robot as interacting with the world at a point—the standard view from e.g. impedance control—then putting a soft, passive spring in series between your robot and the world does limit performance. It reduces your control bandwidth. But that view misses the more important point. A soft robot does not interact with the world at a single point. Soft materials fundamentally change the dynamics of contact by deforming around the material—generating patch contacts that allow contact forces and moments not achievable by a rigid interaction.

Alspach: Punyo’s softness is extreme compared to other manipulation platforms that may, say, just have rubber pads on their arms or fingers. This compliance means that when we grab an object, it may not settle exactly where we planned for it to, or, for example, if we bump that object up against the edge of a table, it may move within our grasp. For these reasons, tactile sensing is an important part of our solution as we dig into how to measure and control the state of the objects we manipulate. We focus not on highly precise tasks but on gross, whole-body manipulation, where robust strategies help stabilize and control objects, and a bit of sloppiness can be an asset.
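
Tedrake’s point about patch contacts has a compact textbook illustration: a rigid point contact can transmit no moment about the contact normal, while a compliant patch can. A minimal sketch for a circular patch under uniform pressure and Coulomb friction (a standard friction model, not TRI’s controller):

```python
def max_spin_moment(mu, normal_force, patch_radius):
    """Largest torsional moment a circular patch contact can resist about
    its normal, assuming uniform pressure and Coulomb friction:
    M = (2/3) * mu * N * r. A rigid point contact is the r -> 0 limit
    and resists no moment at all.
    """
    return (2.0 / 3.0) * mu * normal_force * patch_radius

print(max_spin_moment(0.8, 50.0, 0.00))  # point contact: 0 N*m
print(max_spin_moment(0.8, 50.0, 0.05))  # 5 cm squishy patch: ~1.3 N*m
```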

Compliance can be accomplished in different ways, including just in software. What’s the importance of having a robot that’s physically squishy rather than just one that acts squishily?

Andrew Beaulieu, Punyo Tech Lead: We do not believe that passive and active compliance should be considered mutually exclusive, and there are several advantages to having a physically squishy robot, especially when we consider having a robot operate near people and in their spaces. Having a robot that can safely make contact with the world opens up avenues of interaction and exploration. Using compliant materials on the robot also allows it to conform to complicated shapes passively in a way that would otherwise involve more complicated articulated or actuated mechanisms. Conforming to the objects allows us to increase the contact patch with the object and distribute the forces, usually creating a more robust grasp. These compliant surfaces allow us to research planning and control methods that might be less precise, rely less on accurate object localization, or use hardware with less precise control or sensing.

What’s it like to be hugged by Punyo?

Kate Tsui, Punyo HRI Tech Lead: Although Punyo isn’t a social robot, a surprising amount of emotion comes through its hug, and it feels quite comforting. A hug from Punyo feels like a long, sustained, snug squeeze from a close friend you haven’t seen for a long time and don’t want to let go.


A series of concept images shows situations in which whole-body manipulation might be useful in the home. Credit: TRI

(Interview transcript ends.)

Softness seems like it could be a necessary condition for bipedal humanoids working in close proximity to humans, especially in commercial or home environments where interactions are less structured and predictable. “I think more robots using their whole body to manipulate is coming soon, especially with the recent explosion of humanoids outside of academic labs,” Alspach says. “Capable, general-purpose robotic manipulation is a competitive field, and using the whole body unlocks the ability to efficiently manipulate large, heavy, and unwieldy objects.”

This paper presents a novel webcam-based approach for gaze estimation on computer screens. Utilizing appearance-based gaze estimation models, the system provides a method for mapping the gaze vector from the user’s perspective onto the computer screen. Notably, it determines the user’s 3D position in front of the screen, using only a 2D webcam without the need for additional markers or equipment. The study presents a comprehensive comparative analysis, assessing the performance of the proposed method against established eye-tracking solutions. This includes a direct comparison with the purpose-built Tobii Eye Tracker 5, a high-end hardware solution, and the webcam-based GazeRecorder software. In experiments replicating head movements, especially those imitating yaw rotations, the study brings to light the inherent difficulties associated with tracking such motions using 2D webcams. This research introduces a solution by integrating Structure from Motion (SfM) into the Convolutional Neural Network (CNN) model. The study’s accomplishments include showcasing the potential for accurate screen gaze tracking with a simple webcam, presenting a novel approach for physical distance computation, and proposing compensation for head movements, laying the groundwork for advancements in real-world gaze estimation scenarios.
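
The screen-mapping step the abstract describes is, at its core, a ray-plane intersection: given the user’s 3D eye position and a gaze direction, find where the gaze ray hits the display plane. A minimal sketch under assumed geometry; the paper’s calibration, CNN, and SfM components are not shown:

```python
import numpy as np

def gaze_to_screen(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray with the screen plane; all in one 3D frame.

    Returns the 3D hit point, or None when the gaze is parallel to the
    screen or pointing away from it.
    """
    d = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(d, plane_normal)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_point - eye_pos, plane_normal) / denom
    return eye_pos + t * d if t > 0 else None

# Assumed geometry: screen is the z=0 plane, eye 60 cm in front of it.
hit = gaze_to_screen(np.array([0.05, 0.00, 0.60]),
                     np.array([-0.10, 0.02, -1.00]),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]))
print(hit)  # convert to pixels using the screen's physical size/resolution
```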



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 17–21 April 2024, KASSEL, GERMANY
AUVSI XPONENTIAL 2024: 22–25 April 2024, SAN DIEGO, CA
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

Columbia engineers build Emo, a silicone-clad robotic face that makes eye contact and uses two AI models to anticipate and replicate a person’s smile before the person actually smiles—a major advance in robots predicting human facial expressions accurately, improving interactions, and building trust between humans and robots.

[ Columbia ]

Researchers at Stanford University have invented a way to augment electric motors to make them much more efficient at performing dynamic movements through a new type of actuator, a device that uses energy to make things move. Their actuator, published 20 March in Science Robotics, uses springs and clutches to accomplish a variety of tasks with a fraction of the energy usage of a typical electric motor.

[ Stanford ]

I’m sorry, but the world does not need more drummers.

[ Fourier Intelligence ]

Always good to see NASA’s Valkyrie doing research.

[ NASA ]

In challenging terrain, constructing structures such as antennas and cable-car masts often requires the use of helicopters to transport loads via ropes. Challenging this paradigm, we present Geranos: a specialized multirotor unmanned aerial vehicle (UAV) designed to enhance aerial transportation and assembly. Our experimental demonstration mimicking antenna/cable-car mast installations showcases Geranos’ ability to stack poles (3 kilograms, 2 meters long) with remarkable sub-5-centimeter placement accuracy, without the need for manual human intervention.

[ Paper ]

Flyability’s Elios 2 in November 2020 helped researchers inspect Reactor 5 at the Chernobyl nuclear disaster site to determine whether any uranium was present in the area. Prior to this, Reactor 5 had not been investigated since the disaster in 1986.

[ Flyability ]

Various musculoskeletal humanoids have been developed so far. While these humanoids have the advantage of their flexible and redundant bodies that mimic the human body, they are still far from being applied to real-world tasks. One of the reasons for this is the difficulty of bipedal walking in a flexible body. Thus, we developed a musculoskeletal wheeled robot, Musashi-W, by combining a wheeled base and musculoskeletal upper limbs for real-world applications.

[ Paper ]

Thanks, Kento!

A recent trend in industrial robotics is to have robotic manipulators working side-by-side with human operators. A challenging aspect of this coexistence is that the robot is required to reliably solve complex path-planning problems in a dynamically changing environment. To ensure the safety of the human operator while simultaneously achieving efficient task realization, this paper introduces... a scheme [that] can steer the robot arm to the desired end-effector pose in the presence of actuator saturation, limited joint ranges, speed limits, a cluttered static obstacle environment, and moving human collaborators.

[ Paper ]

Thanks, Kelly!
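
The abstract’s combination of pose steering with actuator saturation and speed limits can be sketched with a standard damped-least-squares step plus clamping. This shows only the generic idea, not the paper’s scheme, and the limits below are made-up numbers:

```python
import numpy as np

def saturated_dls_step(J, pose_error, q, q_min, q_max, dq_max, damping=0.1):
    """One damped-least-squares velocity step toward a desired end-effector
    pose, clamped to joint speed limits and then to joint range limits."""
    JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
    dq = J.T @ np.linalg.solve(JJt, pose_error)   # least-squares joint step
    dq = np.clip(dq, -dq_max, dq_max)             # speed/actuator saturation
    return np.clip(q + dq, q_min, q_max)          # stay inside joint ranges

# Hypothetical 6-DoF arm with a 3-DoF position error:
J = np.random.default_rng(0).normal(size=(3, 6))
q = np.zeros(6)
q_next = saturated_dls_step(J, pose_error=np.array([0.05, 0.0, -0.02]),
                            q=q, q_min=-np.pi, q_max=np.pi, dq_max=0.1)
print(q_next)
```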

Our mobile manipulator Digit worked continuously for 26 hours split over the 3.5 days of Modex 2024, in Atlanta. Everything was tracked and coordinated by our newest product, Agility Arc, a cloud automation platform.

[ Agility ]

We’re building robots that can keep people out of harm’s way: Spot enables operators to remotely investigate and de-escalate hazardous situations. Robots have been used in government and public safety applications for decades, but Spot’s unmatched mobility and intuitive interface are changing incident response for departments in the field today.

[ Boston Dynamics ]

This paper presents a Bistable Aerial Transformer (BAT) robot, a novel morphing hybrid aerial vehicle (HAV) that switches between quadrotor and fixed-wing modes via rapid acceleration and without any additional actuation beyond those required for normal flight.

[ Paper ]

Disney’s Baymax frequently takes the spotlight in many research presentations dedicated to soft and secure physical human-robot interaction (pHRI). KIMLAB’s recent paper in TRO showcases a step towards realizing the Baymax concept by enveloping the skeletons of PAPRAS (Plug And Play Robotic Arm System) with soft skins and utilizing them for sensory functions.

[ Paper ]

Catch me if you can!

[ CVUT ]

Deep Reinforcement Learning (RL) has demonstrated impressive results in solving complex robotic tasks such as quadruped locomotion. Yet, current solvers fail to produce efficient policies respecting hard constraints. In this work, we advocate for integrating constraints into robot learning and present Constraints as Terminations (CaT), a novel constrained RL algorithm.

[ CaT ]
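
The name describes the mechanism: constraint violations are converted into (stochastic) episode terminations, so violating a constraint costs the policy its future rewards. A minimal sketch of that idea; the published CaT formulation differs in its exact scaling and in how it handles multiple constraints:

```python
import numpy as np

def termination_prob(violations, v_max, p_max=0.5):
    """Map per-constraint violations (<= 0 means satisfied) to a
    termination probability that grows with the worst violation."""
    worst = np.max(np.clip(violations, 0.0, v_max) / v_max)
    return p_max * worst

# Inside a rollout (sketch): e.g. base-height and joint-torque constraints.
violations = np.array([0.02, -0.10])          # illustrative units
p = termination_prob(violations, v_max=0.05)
done = np.random.default_rng(1).random() < p  # early termination on violation
print(p, done)
```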

Why hasn’t the dream of having a robot at home to do your chores become a reality yet? With three decades of research expertise in the field, roboticist Ken Goldberg sheds light on the clumsy truth about robots—and what it will take to build more dexterous machines to work in a warehouse or help out at home.

[ TED ]

Designed as a technology demonstration that would perform up to five experimental test flights over a span of 30 days, the Mars helicopter surpassed expectations—repeatedly—only recently completing its mission after having logged an incredible 72 flights over nearly three years. Join us for a live talk to learn how Ingenuity’s team used resourcefulness and creativity to transform the rotorcraft from a successful tech demo into a helpful scout for the Perseverance rover, ultimately proving the value of aerial exploration for future interplanetary missions.

[ JPL ]

Please join us for a lively panel discussion featuring GRASP Faculty members Dr. Pratik Chaudhari, Dr. Dinesh Jayaraman, and Dr. Michael Posa. This panel will be moderated by Dr. Kostas Daniilidis around the current hot topic of AI Embodied in Robotics.

[ Penn Engineering ]



At NVIDIA GTC last week, Boston Dynamics CTO Aaron Saunders gave a talk about deploying AI in real-world robots—namely, how Spot is leveraging reinforcement learning to get better at locomotion. (We spoke with Saunders last year about robots falling over.) And Spot has gotten a lot better: a Spot robot takes a tumble on average once every 50 kilometers, even as the Spot fleet collectively walks enough to circle the Earth every three months.

That fleet consists of a lot of commercial deployments, which is impressive for any mobile robot, but part of the reason for that is that the current version of Spot is really not intended for robotics research, even though over 100 universities are home to at least one Spot. Boston Dynamics has not provided developer access to Spot’s joints, meaning that anyone who has wanted to explore quadrupedal mobility has had to find some other platform that’s a bit more open and allows for some experimentation.

Boston Dynamics is now announcing a new variant of Spot that includes a low-level application programming interface (API) that gives joint-level control of the robot. This will give (nearly) full control over how Spot moves its legs, which is a huge opportunity for the robotics community, since we’ll now be able to find out exactly what Spot is capable of. For example, we’ve already heard from a credible source that Spot is capable of running much, much faster than Boston Dynamics has publicly shown, and it’s safe to assume that a speedier Spot is just the start.

An example of a new Spot capability when a custom locomotion controller can be used on the robot. Credit: Boston Dynamics

When you buy a Spot robot from Boston Dynamics, it arrives already knowing how to walk. It’s very, very good at walking. Boston Dynamics is so confident in Spot’s walking ability that you’re only allowed high-level control of the robot: You tell it where to go, it decides how to get there. If you want to do robotics research using Spot as a mobility platform, that’s totally fine, but if you want to do research on quadrupedal locomotion, it hasn’t been possible with Spot. But that’s changing.

The Spot RL Researcher Kit is a collaboration between Boston Dynamics, Nvidia, and the AI Institute. It includes a joint-level control API, an Nvidia Jetson AGX Orin payload, and a simulation environment for Spot based on Nvidia Isaac Lab. The kit will be officially released later this year, but Boston Dynamics is starting a slow rollout through an early adopter beta program.
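
What joint-level access enables is the classic high-rate control loop that RL locomotion policies need. Every robot-facing name below is a hypothetical placeholder: Boston Dynamics hasn’t published the kit’s API details, so this only sketches the shape of such a loop, not the actual Spot SDK:

```python
# All robot-facing calls here are hypothetical placeholders, standing in
# for whatever the joint-level API exposes -- not the real Spot SDK.
import time
import numpy as np

def run_joint_policy(robot, policy, hz=500.0, steps=5000):
    """Run a learned policy in a fixed-rate joint-space control loop."""
    dt = 1.0 / hz
    for _ in range(steps):
        t0 = time.monotonic()
        q, dq = robot.get_joint_states()     # hypothetical: positions, vels
        action = policy(np.concatenate([q, dq]))
        robot.send_joint_targets(action)     # hypothetical: joint commands
        # Sleep off the remainder of the period; real loops use a scheduler.
        time.sleep(max(0.0, dt - (time.monotonic() - t0)))
```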

From a certain perspective, Boston Dynamics did this whole thing with Spot backwards by first creating a commercial product and only then making it into a research platform. “At the beginning, we felt like it would be great to include that research capability, but that it wasn’t going to drive the adoption of this technology,” Saunders told us after his GTC session. Instead, Boston Dynamics first focused on getting lots of Spots out into the world in a useful way, and only now, when the company feels like they’ve gotten there, is the time right to unleash a fully-featured research version of Spot. “It was really just getting comfortable with our current product that enabled us to go back and say, ‘how can we now provide people with the kind of access that they’re itching for?’”

Getting to this point has taken a huge amount of work for Boston Dynamics. Predictably, Spot started out as a novelty for most early adopters, becoming a project for different flavors of innovation groups within businesses rather than an industrial asset. “I think there’s been a change there,” Saunders says. “We’re working with operational customers a lot more, and the composure of our sales is shifting away from being dominated by early adopters and we’re starting to see repeat sales and interest in larger fleets of robots.”

Deploying and supporting a large fleet of Spots is one of the things that allowed Boston Dynamics to feel comfortable offering a research version. Researchers are not particularly friendly to their robots, because the goal of research is often to push the envelope of what’s possible. And part of that process includes getting very well acquainted with what turns out to be not possible, resulting in robots that end up on the floor, sometimes in pieces. The research version of Spot will include a mandatory Spot Care Service Plan, which exists to serve commercial customers but will almost certainly provide more value to the research community who want to see what kinds of crazy things they can get Spot to do.

Exactly how crazy those crazy things will be remains to be seen. Boston Dynamics is starting out with a beta program for the research Spots partially because they’re not quite sure yet how many safeguards to put in place within the API. “We need to see where the problems are,” Saunders says. “We still have a little work to do to really hone in how our customers are going to use it.” Deciding how much Spot should be able to put itself at risk in the name of research may be a difficult question to answer, but I’m pretty sure that the beta program participants are going to do their best to find out how much tolerance Boston Dynamics has for Spot shenanigans. I just hope that whatever happens, they share as much video of it as possible.

The Spot Early Adopter Program for the new RL Researcher Kit is open for applications here.

In recent years, virtual idols have garnered considerable attention because they can perform activities similar to real idols. However, as they are fictitious idols with no physical presence, they cannot perform physical interactions such as a handshake. Combining a robotic hand with a display showing a virtual idol is one method of solving this problem. Even when a physical handshake is possible, though, the form of handshake that can effectively induce the desired behavior is unclear. In this study, we adopted a robotic hand as an interface and aimed to imitate the behavior of real idols. To test the effects of this behavior, we conducted stepwise experiments. The series of experiments revealed that the handshake by the robotic hand increased the feeling of intimacy toward the virtual idol, and made it more enjoyable to respond to a request from the virtual idol. In addition, viewing the virtual idol during the handshake increased the feeling of intimacy with the virtual idol. Moreover, the handshake style peculiar to idols, in which the hand keeps holding the user’s hand after the conversation, further increased the feeling of intimacy toward the virtual idol.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN
RoboCup 2024: 17–22 July 2024, EINDHOVEN, NETHERLANDS

Enjoy today’s videos!

See NVIDIA’s journey from pioneering advanced autonomous vehicle hardware and simulation tools to accelerated perception and manipulation for autonomous mobile robots and industrial arms, culminating in the next wave of cutting-edge AI for humanoid robots.

[ NVIDIA ]

In release 4.0, we advanced Spot’s locomotion abilities thanks to the power of reinforcement learning. Paul Domanico, robotics engineer at Boston Dynamics, talks through how Spot’s hybrid approach of combining reinforcement learning with model predictive control creates an even more stable robot in the most antagonistic environments.

[ Boston Dynamics ]

We’re excited to share our latest progress on teaching EVEs general-purpose skills. Everything in the video is all autonomous, all 1X speed, all controlled with a single set of neural network weights.

[ 1X ]

What I find interesting about the Unitree H1 doing a standing flip is where it decides to put its legs.

[ Unitree ]

At the MODEX exposition in March 2024, Pickle Robot demonstrated picking freight from a random pile similar to what you see in a messy truck trailer after it has bounced across many miles of highway. The piles of boxes were never the same, and the demonstration was run live in front of crowds of onlookers 25 times over four days. No other robotic trailer/container unloading system has yet demonstrated this ability to pick from unstructured piles.

[ Pickle ]

RunRu is a car-like robot, a robot-like car, with autonomy, sociability, and operability. It is a new type of personal vehicle that aims to create a “Jinba-Ittai” (rider-and-horse-as-one) relationship with its passengers: it is not only assertive at times, but also sometimes whines.

[ ICD-LAB ]

Verdie went to GTC this year and won the hearts of people but maybe not the other robots.

[ Electric Sheep ]

The “DEEPRobotics AI+” merges AI capabilities with robotic software systems to continuously boost embodied intelligence. The showcased achievement is a result of training a new AI and software system.

[ DEEP Robotics ]

If you want to collect data for robot grasping, using Stretch and a pair of tongs is about as affordable as it gets.

[ Hello Robot ]

The real reason why Digit’s legs look backwards is so that it doesn’t bang its shins taking GPUs out of the oven.

Meanwhile, some of us can bake our GPUs without even needing an oven.

[ Agility ]

P1 is LimX Dynamics’ innovative point-foot biped robot, serving as an important platform for the systematic development and modular testing of reinforcement learning. It is utilized to advance the research and iteration of basic biped locomotion abilities. The success of P1 in conquering forest terrain is a testament to LimX Dynamics’ systematic R&D in reinforcement learning.

[ LimX ]

And now, this.

[ Suzumori Endo Lab ]

Cooking in kitchens is fun. BUT doing it collaboratively with two robots is even more satisfying! We introduce MOSAIC, a modular framework that coordinates multiple robots to closely collaborate and cook with humans via natural language interaction and a repository of skills.

[ Cornell ]

neoDavid is a Robust Humanoid with Dexterous Manipulation Skills, developed at DLR. The main focus in the development of neoDavid is to get as close to human capabilities as possible—especially in terms of dynamics, dexterity and robustness.

[ DLR ]

Welcome to our customer spotlight video series where we showcase some of the remarkable robots that our customers have been working on. In this episode we showcase three Clearpath Robotics UGVs that our customers are using to create robotic assistants for three different applications.

[ Clearpath ]

This video presents KIMLAB’s new three-fingered robotic hand, featuring soft tactile sensors for enhanced grasping capabilities. Leveraging cost-effective 3D printing materials, it ensures robustness and operational efficiency.

[ KIMLAB ]

Various perception-aware planning approaches have attempted to enhance state estimation accuracy during maneuvers, while the feature matchability among frames, a crucial factor influencing estimation accuracy, has often been overlooked. In this paper, we present APACE, an Agile and Perception-Aware trajeCtory gEneration framework for aggressive quadrotor flight that takes feature matchability into account during trajectory planning.
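The blurb doesn’t spell out APACE’s optimization, but the core idea, penalizing trajectories along which tracked features become hard to match between frames, can be illustrated with a toy cost term. The function names, weights, and the cost form below are assumptions for illustration, not the paper’s formulation:

# Toy illustration of a perception-aware trajectory cost: score
# candidate yaw profiles by how well a known landmark direction stays
# inside the camera's field of view, then add that penalty to an
# ordinary smoothness cost. Not APACE's actual math.
import numpy as np

FOV_HALF_ANGLE = np.deg2rad(45.0)  # assumed camera half field of view

def matchability_penalty(yaws: np.ndarray, landmark_bearing: float) -> float:
    """Penalize viewing angles that push the landmark out of view,
    where feature matching between frames would degrade."""
    offsets = np.abs(np.angle(np.exp(1j * (yaws - landmark_bearing))))
    return float(np.sum(np.maximum(0.0, offsets - FOV_HALF_ANGLE) ** 2))

def smoothness_cost(yaws: np.ndarray) -> float:
    return float(np.sum(np.diff(yaws) ** 2))

def trajectory_cost(yaws, landmark_bearing, w_perception=5.0):
    return smoothness_cost(yaws) + w_perception * matchability_penalty(
        yaws, landmark_bearing)

# An aggressive yaw sweep loses the landmark; a gentler one keeps it
# in view, so the perception-aware cost prefers the latter.
aggressive = np.linspace(0.0, np.pi, 20)
gentle = np.linspace(0.0, np.pi / 6, 20)
print(trajectory_cost(aggressive, landmark_bearing=0.0))
print(trajectory_cost(gentle, landmark_bearing=0.0))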

[ Paper ] via [ HKUST ]

In this video, we see Samuel Kunz, the pilot of the RSL Assistance Robot Race team from ETH Zurich, as he participates in the CYBATHLON Challenges 2024. Samuel completed all four designated tasks—retrieving a parcel from a mailbox, using a toothbrush, hanging a scarf on a clothesline, and emptying a dishwasher—with the help of an assistance robot. He achieved a perfect score of 40 out of 40 points and secured first place in the race, completing the tasks in 6.34 minutes.

[ CYBATHLON ]

Florian Ledoux is a wildlife photographer with a deep love for the Arctic and its wildlife. Using the Mavic 3 Pro, he steps onto the ice ready to capture the raw beauty and the stories of this chilly, remote place.

[ DJI ]



Automated disassembly is increasingly in focus for Recycling, Re-use, and Remanufacturing (Re-X) activities. Trends in digitalization, in particular digital twin (DT) technologies and the digital product passport, as well as recently proposed European legislation such as the Net-Zero Industry and Critical Raw Materials Acts, will accelerate digitalization of product documentation and factory processes. In this contribution we look beyond these activities by discussing digital information for stakeholders in the Re-X segment of the value chain. Furthermore, we present an approach to automated product disassembly based on different levels of available product information. The challenges for automated disassembly and the subsequent requirements on modeling of disassembly processes and product states for electronic waste are examined. The authors use a top-down (e.g., review of existing standards and process definitions) methodology to define an initial data model for disassembly processes. An additional bottom-up approach, whereby five exemplary electronics products were manually disassembled, was employed to analyze the efficacy of the initial data model and to offer improvements. This paper reports on our suggested informal data models for automatic electronics disassembly and the associated robotic skills.
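The abstract doesn’t reproduce the data model itself. As a rough sketch of the kind of record such a model might tie together, product information levels on one side and required robot skills on the other, one could imagine something like the following, where every class, field, and skill name is a hypothetical illustration rather than the paper’s actual schema:

# Hypothetical sketch of a disassembly data model in the spirit the
# abstract describes: a product carries a digital-product-passport-style
# record, and each disassembly step names the robotic skill it needs.
from dataclasses import dataclass, field
from enum import Enum

class Skill(Enum):
    UNSCREW = "unscrew"
    PRY = "pry"
    CUT = "cut"
    GRASP_AND_LIFT = "grasp_and_lift"

@dataclass
class DisassemblyStep:
    target_component: str
    required_skill: Skill
    preconditions: list[str] = field(default_factory=list)

@dataclass
class ProductRecord:
    product_id: str
    info_level: str  # e.g. "full digital twin", "passport only", "none"
    steps: list[DisassemblyStep] = field(default_factory=list)

laptop = ProductRecord(
    product_id="EOL-0042",
    info_level="passport only",
    steps=[
        DisassemblyStep("back_cover", Skill.UNSCREW),
        DisassemblyStep("battery", Skill.PRY, ["back_cover removed"]),
    ],
)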

The targeted use of social robots for the family demands a better understanding of multiple stakeholders’ privacy concerns, including those of parents and children. Through a co-learning workshop that introduced families to the functions and hypothetical use of social robots in the home, we present preliminary evidence from six families showing how parents and children have different comfort levels with robots collecting and sharing information across different use contexts. Conversations and booklet answers reveal that parents adopted their child’s decision in scenarios where they expect children to have more agency, such as homework completion or cleaning up toys, and when children proposed reasoning their parents found acceptable. Families expressed relief when they shared the same reasoning in coming to conclusive decisions, signifying an agreement on boundary management between the robot and the family. In cases where parents and children did not agree, they rejected a binary, either-or decision and opted for a third type of response, reflecting skepticism, uncertainty, and/or compromise. Our work highlights the benefits of involving parents and children in child- and family-centered research, including parents’ ability to provide cognitive scaffolding and to personalize hypothetical scenarios for their children.

Introduction: The teaching process plays a crucial role in the training of professionals. Traditional classroom-based teaching methods, while foundational, often struggle to effectively motivate students. The integration of interactive learning experiences, such as visuo-haptic simulators, presents an opportunity to enhance both student engagement and comprehension.

Methods: In this study, three simulators were developed to explore the impact of visuo-haptic simulations on engineering students’ engagement and their perceptions of learning basic physics concepts. The study used an adapted end-user computing satisfaction questionnaire to assess students’ experiences and perceptions of the simulators’ usability and their utility in learning.

Results: Feedback from participants suggests a positive reception towards the use of visuo-haptic simulators, highlighting their usefulness in improving the understanding of complex physics principles.

Discussion: Results suggest that incorporating visuo-haptic simulations into educational contexts may offer significant benefits, particularly in STEM courses, where traditional methods may be limited. The positive responses from participants underscore the potential of computer simulations to innovate pedagogical strategies. Future research will focus on assessing the effectiveness of these simulators in enhancing students’ learning and understanding of these concepts in higher-education physics courses.



Applying electricity for a few seconds to a soft material, such as a slice of raw tomato or chicken, can strongly bond it to a hard object, such as a graphite slab, without any tape or glue, a new study finds. This unexpected effect is also reversible—switching the direction of the electric current often easily separates the materials, scientists at the University of Maryland say. Potential applications for such “electroadhesion,” which can even work underwater, may include improved biomedical implants and biologically inspired robots.

“It is surprising that this effect was not discovered earlier,” says Srinivasa Raghavan, a professor of chemical and biomolecular engineering at the University of Maryland. “This is a discovery that could have been made pretty much since we’ve had batteries.”

In nature, soft materials such as living tissues are often bonded to hard objects such as bones. Previous research explored chemical ways to accomplish this feat, such as with glues that mimic how mussels stick to rocks and boats. However, these bonds are usually irreversible.

Previously, Raghavan and his colleagues discovered that electricity could make gels stick to biological tissue, a discovery that might one day lead to gel patches that can help repair wounds. In the new study, instead of bonding two soft materials together, they explored whether electricity could make a soft material stick to a hard object.

The scientists began with a pair of graphite electrodes (consisting of an anode and a cathode) and an acrylamide gel. They applied five volts across the gel for three minutes. Surprisingly, they found the gel strongly bonded onto the graphite anode. Attempts to wrench the gel and electrode apart would typically break the gel, leaving pieces of it on the electrode. The bond could apparently last indefinitely after the voltage was removed, with the researchers keeping samples of gel and electrode stuck together for months.

However, when the researchers switched the polarity of the current, the acrylamide gel detached from the anode. Instead, it adhered to the other electrode.

Raghavan and his colleagues experimented with this newfound electroadhesion effect in a number of different ways. They tried a number of different soft materials, such as tomato, apple, beef, chicken, pork, and gelatin, as well as different electrodes, such as copper, lead, tin, nickel, iron, zinc, and titanium. They also varied the strength of the voltage and the amount of time it was applied.

The researchers found the amount of salt in the soft material played a strong role in the electroadhesion effect. The salt makes the soft material conductive, and high concentrations of salt could lead gels to adhere to electrodes within seconds.

The scientists also discovered that metals that are better at giving up their electrons, such as copper, lead and tin, are better at electroadhesion. Conversely, metals that hold onto their electrons strongly, such as nickel, iron, zinc and titanium, fared poorly.

These findings suggest that electroadhesion arises from chemical bonds between the electrode and soft material after they exchange electrons. Depending on the nature of the hard and soft materials, adhesion happened at the anode, cathode, both electrodes, or neither. Boosting the strength of the voltage and the amount of time it was applied typically increased adhesion strength.

“It’s surprising how simple this effect is, and how widespread it might be,” Raghavan says.

Potential applications for electroadhesion may include improving biomedical implants—the ability to bond tissue to steel or titanium could help reinforce implants, the researchers say. Electroadhesion may also help create biologically inspired robots with stiff bone-like skeletons and soft muscle-like elements, they add. They also suggest electroadhesion could lead to new kinds of batteries where soft electrolytes are bonded to hard electrodes, although it’s not clear if such adhesion would make much of a difference to a battery’s performance, Raghavan says.

The researchers also discovered that electroadhesion can occur underwater, which they suggest could open up an even wider range of possible applications. Typical adhesives do not work underwater: many cannot spread onto solid surfaces submerged in liquids, and even those that can usually form only weak adhesive bonds because of interference from the liquid.

“It’s hard for me to pinpoint one real application for this discovery,” Raghavan says. “It reminds me of the researchers who made the discoveries behind Velcro or Post-it notes—the applications were not obvious to them when the discoveries were made, but the applications did arise over time.”

The scientists detailed their findings online 13 March in the journal ACS Central Science.



Nvidia’s ongoing GTC developer conference in San Jose is, unsurprisingly, almost entirely about AI this year. But in between the AI developments, Nvidia has also made a couple of significant robotics announcements.

First, there’s Project GR00T (with each letter and number pronounced individually so as not to invoke the wrath of Disney), a foundation model for humanoid robots. And second, Nvidia has committed to being a founding platinum member of the Open Source Robotics Alliance, a new initiative from the Open Source Robotics Foundation intended to make sure that the Robot Operating System (ROS), a collection of open-source software libraries and tools, has the support that it needs to flourish.

GR00T

First, let’s talk about GR00T (short for “Generalist Robot 00 Technology”). The way that Nvidia presenters enunciated it letter-by-letter during their talks strongly suggests that in private they just say “Groot.” So the rest of us can also just say “Groot” as far as I’m concerned.

As a “general-purpose foundation model for humanoid robots,” GR00T is intended to provide a starting point for specific humanoid robots to do specific tasks. As you might expect from something being presented for the first time at an Nvidia keynote, it’s awfully vague at the moment, and we’ll have to get into it more later on. Here’s pretty much everything useful that Nvidia has told us so far:

“Building foundation models for general humanoid robots is one of the most exciting problems to solve in AI today,” said Jensen Huang, founder and CEO of NVIDIA. “The enabling technologies are coming together for leading roboticists around the world to take giant leaps towards artificial general robotics.”

Robots powered by GR00T... will be designed to understand natural language and emulate movements by observing human actions—quickly learning coordination, dexterity and other skills in order to navigate, adapt and interact with the real world.

This sounds good, but that “will be” is doing a lot of heavy lifting. Like, there’s a very significant “how” missing here. More specifically, we’ll need a better understanding of what’s underlying this foundation model—is there real robot data under there somewhere, or is it based on a massive amount of simulation? Are the humanoid robotics companies involved contributing data to improve GR00T, or instead training their own models based on it? It’s certainly notable that Nvidia is name-dropping most of the heavy hitters in commercial humanoids, including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Figure AI, Fourier Intelligence, Sanctuary AI, Unitree Robotics, and XPENG Robotics. We’ll be able to check in with some of those folks directly this week to hopefully learn more.

On the hardware side, Nvidia is also announcing a new computing platform called Jetson Thor:

Jetson Thor was created as a new computing platform capable of performing complex tasks and interacting safely and naturally with people and machines. It has a modular architecture optimized for performance, power, and size. The SoC includes a next-generation GPU based on NVIDIA Blackwell architecture with a transformer engine delivering 800 teraflops of 8-bit floating-point AI performance to run multimodal generative AI models like GR00T. With an integrated functional safety processor, a high-performance CPU cluster, and 100-gigabit Ethernet bandwidth, it significantly simplifies design and integration efforts.

Speaking of Nvidia’s Blackwell architecture—today the company also unveiled its B200 Blackwell GPU. And to round out the announcements, the chip foundry TSMC and Synopsys, an electronic design automation company, each said they will be moving Nvidia’s inverse lithography tool, cuLitho, into production.

The Open Source Robotics Alliance

The other big announcement is actually from the Open Source Robotics Foundation, which is launching the Open Source Robotics Alliance (OSRA), a “new initiative to strengthen the governance of our open-source robotics software projects and ensure the health of the Robot Operating System (ROS) Suite community for many years to come.” Nvidia is an inaugural platinum member of the OSRA, but they’re not alone—other platinum members include Intrinsic and Qualcomm. Other significant members include Apex, Clearpath Robotics, Ekumen, eProsima, PickNik, Silicon Valley Robotics, and Zettascale.

“The [Open Source Robotics Foundation] had planned to restructure its operations by broadening community participation and expanding its impact in the larger ROS ecosystem,” explains Vanessa Yamzon Orsi, CEO of the Open Source Robotics Foundation. “The sale of [Open Source Robotics Corporation] was the first step towards that vision, and the launch of the OSRA is the next big step towards that change.”

We had time for a brief Q&A with Orsi to better understand how this will affect the ROS community going forward.

You structured the OSRA to have a mixed membership and meritocratic model like the Linux Foundation—what does that mean, exactly?

Vanessa Yamzon Orsi: We have modeled the OSRA to allow for paths to participation in its activities through both paid memberships (for organizations and their representatives) and the community members who support the projects through their contributions. The mixed model enables participation in the way most appropriate for each organization or individual: contributing funding as a paying member, contributing directly to project development, or both.

What are some benefits for the ROS ecosystem that we can look forward to through OSRA?

Orsi: We expect the OSRA to benefit the OSRF’s projects in three significant ways.

  • By providing a stable stream of funding to cover the maintenance and development of the ROS ecosystem.
  • By encouraging greater community involvement in development through open processes and open, meritocratic status achievement.
  • By bringing greater community involvement in governance and ensuring that all stakeholders have a voice in decision-making.

Why will this be a good thing for ROS users?

Orsi: The OSRA will ensure that ROS and the suite of open source projects under the stewardship of Open Robotics will continue to be supported and strengthened for years to come. By providing organized governance and oversight, clearer paths to community participation, and financial support, it will provide stability and structure to the projects while enabling continued development.



Along with the development of speech and language technologies, the market for speech-enabled human-robot interactions (HRI) has grown in recent years. However, people find their conversational interactions with such robots far from satisfactory. One of the reasons is the habitability gap, where the usability of a speech-enabled agent drops as its flexibility increases. For social robots, such flexibility is reflected in the diverse choice of robots’ appearances, sounds, and behaviours, which shape a robot’s ‘affordance’. Whilst designers or users have enjoyed the freedom of constructing a social robot by integrating off-the-shelf technologies, such freedom comes at a potential cost: the users’ perceptions and satisfaction. Designing appropriate affordances is essential for the quality of HRI. It is hypothesised that a social robot with aligned affordances could create an appropriate perception of the robot and increase users’ satisfaction when speaking with it. Given that previous studies of affordance alignment mainly focus on a single interface’s characteristics and face-voice match, we aim to deepen our understanding of affordance alignment with a robot’s behaviours and use cases. In particular, we investigate how a robot’s affordances affect users’ perceptions in different types of use cases. For this purpose, we conducted an exploratory experiment that included three affordance settings (adult-like, child-like, and robot-like) and three use cases (informative, emotional, and hybrid). Participants were invited to talk to social robots in person. A mixed-methods approach was employed for quantitative and qualitative analysis of 156 interaction samples. The results show that static affordance (face and voice) has a statistically significant effect on the perceived warmth of the first impression, and that use cases affect people’s perceptions of competence and warmth more, both before and after interactions. The results also show the importance of aligning static affordance with behavioural affordance. General design principles for behavioural affordances are proposed. We anticipate that our empirical evidence will provide a clearer guideline for speech-enabled social robots’ affordance design, and that it will be a starting point for more sophisticated design guidelines, such as personalised affordance design for individual or group users in different contexts.
