Feed aggregator



Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 2 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

This magnetically actuated soft robot is perhaps barely a robot by most definitions, but I can’t stop watching it flop around.

In this work, Ahmad Rafsanjani, Ahmet F. Demirörs, and co‐workers from SDU (DK) and ETH (CH) introduce kirigami into a soft magnetic sheet to achieve bidirectional crawling under rotating magnetic fields. Experimentally characterized crawling and deformation profiles, combined with numerical simulations, reveal programmable motion through changes in cut shape, magnet orientation, and translational motion. This work offers a simple approach toward untethered soft robots.

[ Paper ] via [ SDU ]

Thanks, Ahmad!

Winner of the earliest holiday video is the LARSEN team at Inria!

[ Inria ]

Thanks, Serena!

Even though this is just a rendering, I really appreciate Apptronik being like, “we’re into the humanoid thing, but sometimes you just don’t need legs.”

[ Apptronik ]

We’re not allowed to discuss unmentionables here at IEEE Spectrum, so I can only tell you that Digit has started working in a warehouse handling, uh, things.

[ Agility ]

Unitree’s sub-$90,000 H1 humanoid suffering some abuse in a non-PR video.

[ Impress ]

Unlike me, ANYmal can perform 24/7 in all weather.

[ ANYbotics ]

Most of the world will need to turn on subtitles for this, but it’s cool to see how industrial robots can be used to make art.

[ Kuka ]

I was only 12 when this episode of Scientific American Frontiers aired, but I totally remember Alan Alda meeting Flakey!

And here’s the segment; it’s pretty great.

[ SRI ]

Agility CEO Damion Shelton talks about the hierarchy of robot control and draws similarities to the process of riding a horse.

[ Agility ]

Seeking to instill students with real-life workforce skills through hands-on learning, teachers at Central High School in Louisville, Ky., incorporated Spot into their curriculum. For students at CHS, a magnet school in the Jefferson County Public Schools district, getting experience with an industrial robot has sparked a passion for engineering and robotics, kickstarted advancement into university engineering programs, and built lifelong career skills. See how students learn to operate Spot, program new behaviors for the robot, and inspire their peers with the school’s “emotional support robot” and unofficial mascot.

[ Boston Dynamics ]





This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Thanks to eons of evolution, vines have the ability to seek out light sources, growing in the direction that will optimize their chances of absorbing sunlight and thriving. Now, researchers have succeeded in creating a vine-inspired crawling bot that can achieve similar feats, seeking out and moving towards light and heat sources. It’s described in a study published last month in IEEE Robotics and Automation Letters.

Shivani Deglurkar, a Ph.D. candidate in the department of Mechanical and Aerospace Engineering at the University of California, San Diego, helped co-design these automated “vines.” Because of its light- and heat-seeking abilities, the system doesn’t require a complex centralized controller. Instead, the “vines” automatically move towards a desired target. “[Also], if some of the vines or roots are damaged or removed, the others remain fully functional,” she notes.

While the tech is still in its infancy, Deglurkar says she envisions it helping in different applications related to solar tracking, or perhaps even in detecting and fighting smoldering fires.

It uses a novel actuator that contracts in the presence of light, causing it to gravitate towards the source. Shivani Deglurkar et al.

To help the device automatically gravitate towards heat and light, Deglurkar’s team developed a novel actuator. It uses a photo absorber in low-boiling-point fluid, which is contained in many small, individual pouches along the sides of the vine’s body. They called this novel actuator a Photothermal Phase-change Series Actuator (PPSA).

When exposed to light, the PPSAs absorb the light, heat up, inflate with vapor, and contract. Meanwhile, as the robot’s body is pressurized, it elongates by unfurling material from inside its tip. “At the same time, the PPSAs on the side exposed to light contract, shortening that portion of the robot, and steering it toward the [light or heat] source,” explains Deglurkar.

Her team then tested the system, placing it at different distances from an infrared light source, and confirmed that it will gravitate towards the source at short distances. Its ability to do so depends on the light intensity, whereby stronger light sources allow the device to bend more towards the heat source.

Full turning of the vine by the PPSAs takes about 90 seconds. Strikingly, the device was even able to navigate around obstacles thanks to its inherent need to seek out light and heat sources.

Charles Xiao, a Ph.D. candidate in the department of Mechanical Engineering at the University of California, Santa Barbara, helped co-design the vine. He says he was surprised to see its responsiveness in even very low lighting. “Sunlight is about 1,000 W/m², and our robot has been shown to work at a fraction of solar intensity,” he explains, noting that many comparable systems require illumination greater than that of one sun.

Xiao says that the main strengths of the automated vine are its simplicity and its low cost. But more work is needed before it can hit the market—or make its debut fighting fires. “It is slow to respond to light and heat signals and not yet designed for high-temperature applications,” explains Xiao.

Future prototypes would therefore need better performance at high temperatures and the ability to sense fires before they could be deployed in real-world environments. Moving forward, Deglurkar says her team’s next steps include designing the actuators to be more selective to the wavelengths emitted by a fire and developing actuators with a faster response time.






Every minute counts when someone suffers a cardiac arrest. New research suggests that drones equipped with equipment to automatically restart someone’s heart could help get life-saving care to people much faster.

If your heart stops beating outside of a hospital, your chance of survival is typically less than 10 percent. One thing that can boost the prospect of pulling through is an automated external defibrillator (AED)—a device that can automatically diagnose dangerous heart rhythms and deliver an electric shock to get the heart pumping properly again.

AEDs are designed to be easy to use and provide step-by-step voice instructions, making it possible for untrained bystanders to deliver treatment before an ambulance arrives. But even though AEDs are often installed in public spaces such as shopping malls and airports, the majority of cardiac arrests outside of hospitals actually occur in homes.

A team of Swedish researchers decided to use drones to deliver AEDs directly to patients. Over the course of an 11-month trial in the suburbs of Gothenburg, the team showed they could get the devices to the scene of a medical emergency before an ambulance 67 percent of the time. Generally the AED arrived more than three minutes earlier, giving bystanders time to attach the device before paramedics reached the patient. In one case, this saved a patient’s life.

“The results are really promising because we show that it’s possible to beat the ambulance services by several minutes in a majority of cases,” says Andreas Claesson, an associate professor at the Karolinska Institute in Solna who led the research. “If you look at cardiac arrest, each minute that passes without treatment survival decreases by about 10 percent. So a time benefit of three minutes, as in this study, could potentially increase survival.”
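Claesson’s arithmetic is easy to check with a toy model. Assuming, as he describes, roughly a 10 percent relative drop in survival for each untreated minute, an AED that arrives three minutes sooner preserves a meaningful share of potential survivors. The arrival times below are hypothetical, chosen only to illustrate a three-minute lead:

```python
# Toy model, not from the study: assume survival falls by ~10 percent
# (relative) for each minute a cardiac arrest goes untreated.

def survival(minutes_untreated: float, decay: float = 0.10) -> float:
    """Relative chance of survival after a given untreated delay."""
    return (1.0 - decay) ** minutes_untreated

# Hypothetical arrival times: ambulance at 8 minutes, drone-delivered AED at 5.
ambulance = survival(8)
drone = survival(5)
print(f"ambulance at 8 min: {ambulance:.2f}")  # 0.43
print(f"AED at 5 min:       {drone:.2f}")      # 0.59
```

Under this simple compounding model, a three-minute head start raises relative survival by roughly a third, which is why the lead times in the trial matter so much.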

The project was a collaboration with Gothenburg-based drone operator Everdrone and covered 194.3 square kilometers of semi-urban areas around the city, with a total population of roughly 200,000. Throughout the study period, the company operated five DJI drones that could be dispatched from hangars at five different locations around the city. The drones could autonomously fly to the scene of an emergency under the watch of a single safety supervisor. Each drone carried an AED in a basket that could be winched down from an altitude of 30 meters.

When the local emergency response center received a call about a suspected cardiac arrest or ongoing CPR, one of the drones was dispatched immediately. Once the drone reached the location, it lowered the AED to the ground. If the emergency dispatcher deemed it appropriate and safe, the person who had called in the cardiac arrest was directed to retrieve the device.

Everdrone

Drones weren’t dispatched for every emergency call, because they weren’t allowed to operate in rain and strong winds, in no-fly zones, or when calls came from high-rise buildings. But in a paper in the December edition of The Lancet Digital Health, the research team reported that of the 55 cases where both a drone and an ambulance reached the scene of the emergency, the drone got there first 37 times, with a median lead time of 3 minutes and 14 seconds.
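As a quick sanity check on the reported numbers, 37 of 55 cases works out to the roughly 67 percent figure cited earlier:

```python
# Headline numbers from the Lancet Digital Health paper: of 55 cases where
# both drone and ambulance arrived, the drone got there first 37 times.
drone_first = 37
both_arrived = 55
share = drone_first / both_arrived
print(f"drone first in {share:.1%} of cases")  # drone first in 67.3% of cases
```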

Only 18 of those emergency calls actually turned out to be cardiac arrests, but in six of those cases the caller managed to apply the AED. In two cases the device recommended applying a shock, with one of the patients surviving thanks to the intervention. The number of cases is too few to make any claims about the clinical effectiveness of the approach, says Claesson, but he says the results clearly show that drones are an effective way to improve emergency response times.

“Three minutes is quite substantial,” says Timothy Chan, a professor of mechanical and industrial engineering at the University of Toronto, who has investigated the effectiveness of drone-delivered AEDs. “Given that in most parts of the world emergency response times are fairly static over time, it would be a huge win if we could achieve and sustain a big reduction like this in widespread practice.”

The approach won’t work everywhere, admits Claesson. In rural areas, the technology would likely lead to even bigger reductions in response time, but lower population density means the cases would be too few to justify the investment. And in big cities, ambulance response times are already relatively rapid, and high-rise buildings would make drone operation challenging.

But in the kind of semi-urban areas where the trial was conducted, Claesson thinks the technology is very promising. Each drone system costs roughly US $125,000 a year to run and can cover an area with roughly 30,000 to 40,000 inhabitants, which he says is already fairly cost-effective. But what will make the idea even more compelling is when the drones are able to respond to a wider range of emergencies.

That could involve delivering medical supplies for other time-sensitive medical emergencies like drug overdoses, allergic reactions or severe bleeding, he says. Drones equipped with cameras could also rapidly relay video of car accidents or fires to dispatchers, enabling them to tailor the emergency response based on the nature and severity of the incident.

The biggest challenge when it comes to delivering medical support such as AEDs by drone, says Claesson, is the reliance on untrained bystanders. “It’s a really stressful event for them,” he says. “Most often it’s a relative, and most often they don’t know CPR and they might not know how an AED works.”

One promising future direction could be to combine drone-delivered AEDs with existing smartphone apps that are used to quickly alert volunteers trained in first aid to nearby medical emergencies. “In Sweden, in 40 percent of cases they arrive before an ambulance,” says Claesson. “We could just send a push notification to the app saying a drone will deliver an AED in two minutes, make your way to the site.”




In this article, we present RISE—a Robotics Integration and Scenario-Management Extensible-Architecture—for designing human–robot dialogs and conducting Human–Robot Interaction (HRI) studies. In current HRI research, interdisciplinarity in the creation and implementation of interaction studies is becoming increasingly important. In addition, there is a lack of reproducibility of the research results. With the presented open-source architecture, we aim to address these two topics. Therefore, we discuss the advantages and disadvantages of various existing tools from different sub-fields within robotics. Requirements for an architecture can be derived from this overview of the literature, which 1) supports interdisciplinary research, 2) allows reproducibility of the research, and 3) is accessible to other researchers in the field of HRI. With our architecture, we tackle these requirements by providing a Graphical User Interface which explains the robot behavior and allows introspection into the current state of the dialog. Additionally, it offers controlling possibilities to easily conduct Wizard of Oz studies. To achieve transparency, the dialog is modeled explicitly, and the robot behavior can be configured. Furthermore, the modular architecture offers an interface for external features and sensors and is expandable to new robots and modalities.
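The abstract’s core ideas (an explicitly modeled dialog that a GUI can introspect, plus a Wizard-of-Oz override for a human operator) can be illustrated with a minimal sketch. This is not RISE’s actual API; the class, state, and method names below are invented for illustration:

```python
# Minimal, hypothetical sketch of an explicitly modeled HRI dialog: the
# transitions are declared as plain data (so a GUI could introspect them and
# explain the robot's behavior), and a "wizard" can force a transition, as in
# Wizard-of-Oz studies. This is NOT RISE's real API.

class Dialog:
    def __init__(self, transitions: dict, start: str):
        self.transitions = transitions  # {state: {user_input: next_state}}
        self.state = start
        self.history = [start]          # introspection: full trace of the dialog

    def step(self, user_input: str) -> str:
        """Advance the dialog from a (simulated) user input; unknown input keeps the state."""
        self.state = self.transitions[self.state].get(user_input, self.state)
        self.history.append(self.state)
        return self.state

    def wizard_override(self, state: str) -> str:
        """Let a human operator force a state, Wizard-of-Oz style."""
        self.state = state
        self.history.append(state)
        return state

dialog = Dialog(
    transitions={
        "greeting": {"hello": "ask_task"},
        "ask_task": {"fetch": "confirm", "none": "goodbye"},
        "confirm": {"yes": "goodbye"},
        "goodbye": {},
    },
    start="greeting",
)
dialog.step("hello")
dialog.wizard_override("goodbye")
print(dialog.history)  # ['greeting', 'ask_task', 'goodbye']
```

Because the dialog is data rather than code, the same structure can drive a GUI that shows the current state, supports operator control, and logs a reproducible trace of each session.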

Introduction: Robotic exoskeletons are emerging technologies that have demonstrated their effectiveness in assisting with Activities of Daily Living. However, kinematic disparities between human and robotic joints can result in misalignment between humans and exoskeletons, leading to discomfort and potential user injuries.

Methods: In this paper, we present an ergonomic knee exoskeleton based on a dual four-bar linkage mechanism powered by hydraulic artificial muscles for stair ascent assistance. The device comprises two asymmetric four-bar linkage mechanisms on the medial and lateral sides to accommodate the internal rotation of the knee and address the kinematic discrepancies between these sides. A genetic algorithm was employed to optimize the parameters of the four-bar linkage mechanism to minimize misalignment between human and exoskeleton knee joints. The proposed device was evaluated through two experiments. The first experiment measured the reduction in undesired load due to misalignment, while the second experiment evaluated the device’s effectiveness in assisting stair ascent in a healthy subject.

Results: The experimental results indicate that the proposed device has a significantly reduced undesired load compared to the traditional revolute joint, decreasing from 14.15 N and 18.32 N to 1.88 N and 1.07 N on the medial and lateral sides, respectively. Moreover, a substantial reduction in muscle activities during stair ascent was observed, with a 55.94% reduction in surface electromyography signal.

Discussion: The reduced undesired load of the proposed dual four-bar linkage mechanism highlights the importance of the adopted asymmetrical design for reduced misalignment and increased comfort. Moreover, the proposed device was effective at reducing the effort required during stair ascent.
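The genetic-algorithm step described in the Methods can be sketched in miniature. The objective below is a hypothetical stand-in (the paper’s real objective measures misalignment between human and exoskeleton knee-joint trajectories), and all link lengths and GA settings are invented:

```python
# Hedged sketch of the kind of genetic algorithm the paper describes:
# minimize a misalignment objective over four-bar link lengths. The
# objective and all numbers here are made up for illustration.

import random

random.seed(0)

def misalignment(links):
    # Hypothetical stand-in: penalize deviation from a nominal geometry.
    nominal = [40.0, 55.0, 45.0, 60.0]  # mm, invented target link lengths
    return sum((a - b) ** 2 for a, b in zip(links, nominal))

def evolve(pop_size=30, generations=60, n_links=4, bounds=(20.0, 80.0)):
    pop = [[random.uniform(*bounds) for _ in range(n_links)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misalignment)
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]    # averaging crossover
            i = random.randrange(n_links)
            child[i] += random.gauss(0.0, 1.0)             # Gaussian mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=misalignment)

best = evolve()
print([round(x, 1) for x in best])
```

The real optimization would evaluate each candidate linkage against recorded knee kinematics rather than a fixed nominal geometry, but the select-crossover-mutate loop is the same.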

The present research is innovative as we followed a user-centered approach to implement and train two working memory architectures on an industrial RB-KAIROS + robot: GRU, a state-of-the-art architecture, and WorkMATe, a biologically-inspired alternative. Although user-centered approaches are essential to create a comfortable and safe HRI, they are still rare in industrial settings. Closing this research gap, we conducted two online user studies with large heterogeneous samples. The major aim of these studies was to evaluate the RB-KAIROS + robot’s appearance, movements, and perceived memory functions before (User Study 1) and after the implementation and training of robot working memory (User Study 2). In User Study 1, we furthermore explored participants’ ideas about robot memory and what aspects of the robot’s movements participants found positive and what aspects they would change. The effects of participants’ demographic background and attitudes were controlled for. In User Study 1, participants’ overall evaluations of the robot were moderate. Participant age and negative attitudes toward robots led to more negative robot evaluations. According to exploratory analyses, these effects were driven by perceived low experience with robots. Participants expressed clear ideas of robot memory and precise suggestions for a safe, efficient, and comfortable robot navigation which are valuable for further research and development. In User Study 2, the implementation of WorkMATe and GRU led to more positive evaluations of perceived robot memory, but not of robot appearance and movements. Participants’ robot evaluations were driven by their positive views of robots. Our results demonstrate that considering potential users’ views can greatly contribute to an efficient and positively perceived robot navigation, while users’ experience with robots is crucial for a positive HRI.



The tricked-out version of the ANYmal quadruped, as customized by Zürich-based Swiss-Mile, just keeps getting better and better. Starting with a commercial quadruped, adding powered wheels made the robot fast and efficient while still allowing it to handle curbs and stairs. A few years ago, the robot learned how to stand up, which is an efficient way of moving and made the robot much more pleasant to hug. More importantly, it unlocked the potential for the robot to start doing manipulation with its wheel-hand-leg-arms.

Doing any sort of practical manipulation with ANYmal is complicated, because its limbs were designed to be legs, not arms. But at the Robotic Systems Lab at ETH Zurich, they’ve managed to teach this robot to use its limbs to open doors, and even to grasp a package off of a table and toss it into a box.

When it makes a mistake in the real world, the robot has already learned the skills to recover.

The ETHZ researchers got the robot to reliably perform these complex behaviors using a kind of reinforcement learning called “curiosity-driven” learning. In simulation, the robot is given a goal that it needs to achieve—in this case, the robot is rewarded for passing through a doorway or for getting a package into a box. These are very high-level goals (also called “sparse rewards”), and the robot doesn’t get any encouragement along the way. Instead, it has to figure out how to complete the entire task from scratch.

The next step is to endow the robot with a sense of contact-based surprise.

Given an impractical amount of simulation time, the robot would likely figure out how to do these tasks on its own. But to give it a useful starting point, the researchers introduced the concept of curiosity, which encourages the robot to play with goal-related objects. “In the context of this work, ‘curiosity’ refers to a natural desire or motivation for our robot to explore and learn about its environment,” says author Marko Bjelonic, “allowing it to discover solutions for tasks without needing engineers to explicitly specify what to do.” For the door-opening task, the robot is instructed to be curious about the position of the door handle, while for the package-grasping task, the robot is told to be curious about the motion and location of the package. Leveraging this curiosity to find ways of playing around and changing those parameters helps the robot achieve its goals, without the researchers having to provide any other kind of input.
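The reward structure described here (a sparse task reward plus a curiosity term over goal-related quantities) can be sketched as follows. The function names and scales are assumptions for illustration, not the authors’ code:

```python
# Illustrative sketch, not the ETHZ implementation: a sparse task reward plus
# a "curiosity" bonus for changing a goal-related quantity, here the angle of
# a door handle. The 0.1 scale is an invented parameter.

def task_reward(door_open: bool) -> float:
    """Sparse reward: no signal at all until the high-level goal is achieved."""
    return 1.0 if door_open else 0.0

def curiosity_bonus(handle_angle_prev: float, handle_angle: float,
                    scale: float = 0.1) -> float:
    """Reward any change to the goal-related object, encouraging 'play'."""
    return scale * abs(handle_angle - handle_angle_prev)

# Before the robot ever opens the door, the only learning signal is curiosity:
r_early = task_reward(False) + curiosity_bonus(0.0, 0.3)
# Once the door opens, the sparse task reward dominates:
r_goal = task_reward(True) + curiosity_bonus(0.3, 0.3)
print(f"{r_early:.2f} {r_goal:.2f}")  # 0.03 1.00
```

The point of the bonus is exactly what Bjelonic describes: jiggling the handle earns a little reward, which eventually leads the robot to stumble onto the sparse jackpot of an open door.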

The behaviors that the robot comes up with through this process are reliable, and they’re also diverse, which is one of the benefits of using sparse rewards. “The learning process is sensitive to small changes in the training environment,” explains Bjelonic. “This sensitivity allows the agent to explore various solutions and trajectories, potentially leading to more innovative task completion in complex, dynamic scenarios.” For example, with the door-opening task, the robot discovered how to open the door with either one of its end-effectors, or with both at the same time, which makes it better at actually completing the task in the real world. The package manipulation is even more interesting, because the robot sometimes dropped the package in training, but it autonomously learned how to pick it up again. So when it makes a mistake in the real world, the robot has already learned the skills to recover.

There’s still a bit of research-y cheating going on here, since the robot relies on the visual AprilTag fiducial system to tell it where relevant things (like door handles) are in the real world. But that’s a fairly minor shortcut, since direct detection of things like doors and packages is a fairly well-understood problem. Bjelonic says that the next step is to endow the robot with a sense of contact-based surprise, in order to encourage exploration that is a little bit gentler than what we see here.

Remember, too, that while this is definitely a research paper, Swiss-Mile is a company that wants to get this robot out into the world doing useful stuff. So, unlike most pure research that we cover, there’s a slightly better chance here for this ANYmal to wheel-hand-leg-arm its way into some practical application.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS
Cybathlon Challenges: 2 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE

Enjoy today’s videos!

This is such an excellent use for autonomous robots: difficult, precise work that benefits from having access to lots of data. Push a button, stand back, and let the robot completely reshape your landscape.

[ Gravis Robotics ]

Universal Robots introduced the UR30 at IREX in Tokyo. It can lift 30 kilograms—not the 63.5 kg printed on the tire, which is the weight of the UR30 itself.

Available for preorder now.

[ Universal Robots ]

IREX is taking place in Japan right now, and here’s a demo of Kaleido, a humanoid robot from Kawasaki.

[ Kawasaki ] via [ YouTube ]

The Unitree H1 is a full-size humanoid for under US $90,000 (!).

[ Unitree ]

This is extremely impressive but freaks me out a little to watch, and I’m not entirely sure why.

[ MIT CSAIL ]

If you look in the background of this video, there’s a person wearing an exoskeleton controlling the robot in the foreground. This is an ideal system for imitation learning, and the robot is then able to perform a similar task autonomously.

[ GitHub ]

Thanks, Kento!

The video shows highlights from the RoboCup 2023 Humanoid AdultSize competition in Bordeaux, France. The winning team, NimbRo, is based in the Autonomous Intelligent Systems lab at the University of Bonn, Germany.

[ NimbRo ]

This video describes an approach to generate complex, multicontact motion trajectories using user guidance provided through virtual reality. User input is useful to reduce the search space through defined key frames. We show these results on the humanoid robot Valkyrie, from NASA Johnson Space Center, in both simulation and on hardware.

[ Paper ] via [ IHMC ]

For the foreseeable future, this is likely going to be necessary for most robots doing semi-structured tasks like trailer unloading: human-in-the-loop (or on-the-loop) supervision.

Of course, one human can supervise many robots at once, so as long as most of the robots are autonomous most of the time, it’s all good.

[ Contoro ]

The Danish medical technology start-up ROPCA ApS has launched its first medical product, the arthritis robot “ARTHUR,” which is already being used in its first hospitals. It is based on the lightweight robot LBR Med and supports the early diagnosis of rheumatoid arthritis using robot-assisted ultrasound. This ultrasound robot enables autonomous examination and can thus counteract the shortage of medical specialists, enabling earlier treatment, which is essential for a good therapeutic outcome.

[ ROPCA ]

Since 2020, KIMLAB has dedicated efforts to craft an affordable humanoid robot tailored for educational needs, boasting vital features like a ROS-enabled processor and multimodal sensory capabilities. By incorporating a commercially available product, we seamlessly integrated an SBC (Orange Pi Lite 2), a camera, and an IMU to create a cost-effective humanoid robot, priced at less than $700 in total.

[ KIMLAB ]

As the newest product launched by WEILAN, the 6th generation AlphaDog, namely BabyAlpha, is defined as a new family member of the artificial intelligence era. Designed for domestic scenarios, it was born for the purpose of providing joyful companionship. Not only do they possess autonomous emotions and distinct personalities, but they also excel in various skills such as singing and dancing, FaceTime calling, English communication, and sports.

[ Weilan ] via [ ModernExpress ]



Affective behaviors enable social robots to not only establish better connections with humans but also serve as a tool for the robots to express their internal states. It has been well established that emotions are important to signal understanding in Human-Robot Interaction (HRI). This work aims to harness the power of Large Language Models (LLM) and proposes an approach to control the affective behavior of robots. By interpreting emotion appraisal as an Emotion Recognition in Conversation (ERC) task, we used GPT-3.5 to predict the emotion of a robot’s turn in real-time, using the dialogue history of the ongoing conversation. The robot signaled the predicted emotion using facial expressions. The model was evaluated in a within-subjects user study (N = 47) where the model-driven emotion generation was compared against conditions where the robot did not display any emotions and where it displayed incongruent emotions. The participants interacted with the robot by playing a card sorting game that was specifically designed to evoke emotions. The results indicated that the emotions were reliably generated by the LLM and the participants were able to perceive the robot’s emotions. It was found that the robot expressing congruent model-driven facial emotion expressions was perceived to be significantly more human-like and emotionally appropriate, and elicited a more positive impression. Participants also scored significantly better in the card sorting game when the robot displayed congruent facial expressions. From a technical perspective, the study shows that LLMs can be used to control the affective behavior of robots reliably in real-time. Additionally, our results could be used in devising novel human-robot interactions, making robots more effective in roles where emotional interaction is important, such as therapy, companionship, or customer service.

This paper summarizes the structure and findings from the first Workshop on Troubles and Failures in Conversations between Humans and Robots. The workshop was organized to bring together a small, interdisciplinary group of researchers working on miscommunication from two complementary perspectives. One group of technology-oriented researchers was made up of roboticists, Human-Robot Interaction (HRI) researchers and dialogue system experts. The second group involved experts from conversation analysis, cognitive science, and linguistics. Uniting both groups of researchers is the belief that communication failures between humans and machines need to be taken seriously and that a systematic analysis of such failures may open fruitful avenues in research beyond current practices to improve such systems, including both speech-centric and multimodal interfaces. This workshop represents a starting point for this endeavour. The aim of the workshop was threefold: Firstly, to establish an interdisciplinary network of researchers that share a common interest in investigating communicative failures with a particular view towards robotic speech interfaces; secondly, to gain a partial overview of the “failure landscape” as experienced by roboticists and HRI researchers; and thirdly, to determine the potential for creating a robotic benchmark scenario for testing future speech interfaces with respect to the identified failures. The present article summarizes both the “failure landscape” surveyed during the workshop as well as the outcomes of the attempt to define a benchmark scenario.

Interaction with artificial social agents is often designed based on models of human interaction and dialogue. While this is certainly useful for basic interaction mechanisms, it has been argued that social communication strategies and social language use, a “particularly human” ability, may not be appropriate and transferable to interaction with artificial conversational agents. In this paper, we present qualitative research exploring whether users expect artificial agents to use politeness—a fundamental mechanism of social communication—in language-based human-robot interaction. Based on semi-structured interviews, we found that humans mostly ascribe a functional, rule-based use of polite language to humanoid robots and do not expect them to apply socially motivated politeness strategies that they expect in human interaction. This study 1) provides insights for interaction design for social robots’ politeness use from a user perspective, and 2) contributes to politeness research based on the analysis of our participants’ perspectives on politeness.

Identifying an accurate dynamics model remains challenging for humanoid robots. The difficulty is mainly due to the following two points. First, a good initial model is required to evaluate the feasibility of motions for data acquisition. Second, a highly nonlinear optimization problem needs to be solved to design movements to acquire the identification data. To cope with the first point, in this paper, we propose a curriculum of identification to gradually learn an accurate dynamics model from an unreliable initial model. For the second point, we propose using a large-scale human motion database to efficiently design the humanoid movements for the parameter identification. The contribution of our study is developing a humanoid identification method that does not require a good initial model and does not need to solve the highly nonlinear optimization problem. We showed that our curriculum-based approach was able to identify humanoid model parameters more efficiently than a method that just randomly picked reference motions for identification. We evaluated our proposed method in a simulation experiment and demonstrated that our curriculum led to obtaining a wide variety of motion data for efficient parameter estimation. Consequently, our approach successfully identified an accurate model of an 18-DoF, simulated upper-body humanoid robot.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 2 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE

Enjoy today’s videos!

Do you find yourself wondering why the world needs bipedal humanoid robots? Allow IHMC and Boardwalk Robotics to answer that question with this video.

[ IHMC ]

Thanks, Robert!

As NASA’s Ingenuity helicopter made its 59th flight on Mars, achieving its second-highest altitude while taking pictures, the Perseverance Mars rover was watching. See two perspectives of this 142-second flight, which reached an altitude of 20 meters (66 feet) and took place on 16 September 2023. In this side-by-side video, you’ll see the perspective from Perseverance on the left, captured by the rover’s Mastcam-Z imager from about 55 meters (180 feet) away. On the right is the perspective from Ingenuity, taken by its downward-pointing Navigation Camera (Navcam). During Flight 59, Ingenuity hovered at different altitudes to check Martian wind patterns. The highest altitude achieved in this flight was 20 meters, which at the time was a record for the helicopter.

[ JPL ]

Cassie Blue showcases its ability to navigate a moving walkway, a common yet challenging scenario in human environments. Cassie Blue can walk onto and off of a 1.2-meter-per-second moving treadmill and reject disturbances caused by a tugging gantry and a suboptimal approach angle caused by operator error. The key to Cassie Blue’s success is a new controller featuring a novel combination of virtual constraint-based control and a model predictive controller applied on the often-neglected ankle motor. This technology paves the way for robots to adapt and function in dynamic, real-world settings.

[ Paper ] via [ Michigan Robotics ]

Thanks, Wami!

In this study, we propose a parallel wire-driven leg structure, which has one DoF of linear motion and two DoFs of rotation and is controlled by six wires, as a structure that can achieve both continuous jumping and high jumping. The proposed structure can simultaneously achieve high controllability on each DoF, the long acceleration distance, and the high power required for jumping. In order to verify the jumping performance of the parallel wire-driven leg structure, we have developed a parallel wire-driven monopedal robot, RAMIEL. RAMIEL is equipped with quasi-direct-drive, high-power wire-winding mechanisms and a lightweight leg, and can achieve a maximum jumping height of 1.6 meters and a maximum of seven continuous jumps.

[ RAMIEL ]

Thanks, Temma!

PAL Robotics’ Kangaroo demonstrates classic “zero-moment point” or ZMP walking, with only one or two engineers tagging along, and neither of them looks all that nervous.

Eventually, PAL Robotics says that the robot will be able to “perform agile maneuvers like running, jumping, and withstanding impacts.”

[ PAL Robotics ]

Thanks, Lorna!

SLOT is a small soft-bodied crawling robot with electromagnetic legs and passive body adaptation. The robot, driven by neural central pattern generator (CPG)-based control, can successfully crawl on a variety of metal terrains, including a flat surface, step, slope, confined space, and an inner (concave) and outer (convex) pipe surface in both horizontal and vertical directions. It can also be steered to navigate through a cluttered environment with obstacles. This small soft robot has the potential to be employed as a robotic system for inner and outer pipe inspection and confined-space exploration in the oil and gas industry.

[ VISTEK ]

Thanks, Poramate!
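The CPG idea behind gaits like SLOT’s can be sketched with a ring of coupled phase oscillators (a generic toy model, not VISTEK’s actual controller; the frequency, coupling strength, and neighbour phase lag are all illustrative assumptions):

```python
import math

def cpg_phase_step(phases, dt=0.01, omega=2 * math.pi, coupling=1.0):
    """One Euler step of a ring of coupled phase oscillators.

    Each oscillator advances at base frequency omega and is pulled
    toward a fixed phase lag relative to its neighbour, producing a
    travelling wave -- the kind of rhythmic pattern a CPG feeds to
    the legs.
    """
    n = len(phases)
    lag = 2 * math.pi / n  # desired phase offset between neighbours
    new = []
    for i, p in enumerate(phases):
        neighbour = phases[(i - 1) % n]
        dp = omega + coupling * math.sin(neighbour - p + lag)
        new.append((p + dp * dt) % (2 * math.pi))
    return new

# Per-leg actuator commands would then be e.g. sin(phase) per oscillator.
phases = [0.0, 1.0, 2.0]
phases = cpg_phase_step(phases)
```

The appeal of CPG control is that steering and gait changes reduce to adjusting a few oscillator parameters rather than replanning trajectories.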

It isn’t easy for a robot to find its way out of a maze. Picture these machines trying to traverse a kid’s playroom to reach the kitchen, with miscellaneous toys scattered across the floor and furniture blocking some potential paths. This messy labyrinth requires the robot to calculate the optimal journey to its destination, without crashing into any obstacles. What is the bot to do? MIT CSAIL researchers’ “Graphs of Convex Sets (GCS) Trajectory Optimization” algorithm presents a scalable, collision-free motion planning system for these robotic navigational needs.

[ MIT CSAIL ]

As the field of human-robot collaboration continues to grow and autonomous general-purpose service robots become more prevalent, robots need to obtain situational awareness and handle tasks with a limited field of view and workspace. Addressing these challenges, KIMLAB and Prof. Yong Jae Lee at the University of Wisconsin-Madison utilize the game of chess as a testbed, employing a general-purpose robotic arm.

[ KIMLAB ]

Humanoid robots have the potential to become general-purpose robots augmenting the human workforce in industry. However, they must match the agility and versatility of humans. In this paper, we perform experimental investigations on the dynamic walking capabilities of a series-parallel hybrid humanoid named RH5. We demonstrate that it is possible to walk at speeds of up to 0.43 m/s with a position-controlled robot without full state feedback, which makes it one of the fastest walking humanoids of similar size and actuation modalities.

[ DFKI ]

Avocado drone. That is all.

[ Paper ]

Autonomous robots must navigate reliably in unknown environments even under compromised exteroceptive perception, or perception failures. Such failures often occur when harsh environments lead to degraded sensing, or when the perception algorithm misinterprets the scene due to limited generalization. In this paper, we model perception failures as invisible obstacles and pits, and train a reinforcement learning (RL) based local navigation policy to guide our legged robot.

[ Resilient Navigation ]
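The training trick of modeling perception failures can be sketched very simply (a hypothetical fragment, not the paper’s implementation; the real system works on the robot’s sensed terrain representation, and the function name, dropout probability, and list-based world model here are all assumptions):

```python
import random

def corrupt_perception(true_obstacles, dropout_p=0.3, rng=random):
    """Simulate perception failures by hiding some real obstacles.

    During RL training, the simulated physics still collides with every
    obstacle in `true_obstacles`, but the policy only observes the
    surviving subset -- so it learns to recover when the world disagrees
    with its map ('invisible obstacles').
    """
    return [obs for obs in true_obstacles if rng.random() > dropout_p]
```

Because the policy is never allowed to fully trust its exteroceptive input, it learns recovery behaviors that transfer to real sensing failures.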

X20 long-range remote hazard detection test. We teleoperated the robot dog from a straight-line distance of one kilometer, and it successfully measured gas concentrations. The purpose of the test is to provide a solution for firefighters to use the robot to detect harmful gases before putting themselves in danger.

[ Deep Robotics ]

This CMU RI Seminar is by Robert Ambrose from Texas A&M, on “Robots at the Johnson Space Center and Future Plans.”

The seminar will review a series of robotic systems built at the Johnson Space Center over the last 20 years. These will include wearable robots (exoskeletons, powered gloves and jetpacks), manipulation systems (ISS cranes down to human scale) and lunar mobility systems (human surface mobility and robotic rovers). As all robotics presentations should, this will include some fun videos.

[ CMU RI ]



The number of older adults living alone is rapidly increasing. Loneliness in older adults not only degrades their quality of life but also causes problems such as a heavy burden on medical staff, especially when cognitive decline is present. Social robots could be used in several ways to reduce such problems. As a first step towards this goal, we introduced conversation robots into the homes of older adults with cognitive decline to evaluate the robots’ availability and acceptance over several months. The study involved two steps: one for evaluating the robustness of the proposed robotic system, and a second to examine the long-term acceptance of social robots by older adults with cognitive decline living alone. Our data show that after several weeks of human-robot interaction, the participants continued to use the robots and successfully integrated them into their lives. These results open the possibility of further research into how sustained interaction can be achieved, as well as which factors contribute to the acceptance of the robot.
