Feed aggregator



Using the brain to directly control an object was long the stuff of science fiction, and in 1988 the vision became a reality.

IEEE Life Senior Member Stevo Bozinovski and Members Mihail Sestakov and Dr. Liljana Bozinovska used a student volunteer’s electroencephalogram (EEG) brain signals to move a robot along a closed-circuit track. Bozinovski and Sestakov were electrical engineering and computer science professors at Saints Cyril and Methodius University, in Skopje, North Macedonia. Bozinovska, a physician, taught in the university’s medical school. Their achievement has paved the way for EEG-controlled wheelchairs and exoskeletons.

IEEE commemorated their work with an IEEE Milestone during a ceremony at the university on 10 October.

“The accomplishment is not only meaningful locally,” Vladimir Atanasovski said at the dedication ceremony. “It exhibits benefits for the entire humanity.” Atanasovski is dean of the university’s electrical engineering and information technologies school.

“It was at this very school, 35 years ago, where a relationship between two previously distant areas [robotics and EEG signals] was formed,” he added. “This remarkable work showed that science fiction can become a reality.

“Controlling a robot using human brain signals for the first time advanced both electrical and computer engineering and science, led to worldwide research on brain-computer interfaces, and opened an explicit communication channel between robots and humans.”

Using engineering to demonstrate psychokinesis

Bozinovski, Sestakov, and Bozinovska built a system to send commands to a robot based on EEG signal processing. The method is noninvasive; all you have to do is place electrodes on a volunteer’s scalp.

The three researchers used a Movit Line Tracer II robot they built from a kit from Japanese appliance shop Elehobby, now called Elekit. The robot had two plastic discs that sat on top of each other and held the electronic components between them. Its two wheels were controlled with a start/stop mechanical switch.

“Engineers are the driving force in every country, contributing to the welfare and progress of societies.”—Stevo Pendarovski

The robot, powered by batteries, drove on a track drawn on a flat surface, according to the Milestone proposal entry on the Engineering Technology and History Wiki.

But the researchers still didn’t know how they were going to translate the brain signals into commands. Bozinovska suggested using the EEG’s prominent alpha-range activity of 8 to 12 hertz—known as the mu rhythm. It’s a synchronized pattern of electrical activity in the part of the brain that controls voluntary movement. The rhythm becomes more prominent when a person is relaxed and not moving.

Bozinovska’s theory was that the pattern would command the Line Tracer. People attempting to control the robot would achieve relaxation by closing their eyes. To stop the robot from moving, they would perform a voluntary movement such as opening their eyes.

Bozinovski, Sestakov, and Bozinovska designed an experiment to test her theory.

Moving a robot using brain signals

Two rooms were built to conduct the experiment. One was called the “robot arena,” and had a table on which sat a rectangular closed-circuit track with markings where the robot would be commanded to stop. Saints Cyril and Methodius University/IEEE

Bozinovski and Sestakov built two rooms in the school’s Laboratory of Intelligent Machines and Bioinformation Systems to conduct the experiment. One was called the “robot arena,” as Bozinovski described it in the Milestone entry. In the room was a table on which sat a rectangular closed-circuit track with markings where the robot would be commanded to stop.

The second room housed the technology, including analog-to-digital and digital-to-analog converters, an IBM XT personal computer, a transistor amplifier, and a differential biomedical amplifier, which measured the tiny voltage differences between pairs of electrodes and tracked how the EEG signal’s amplitude and frequency changed over time. Bozinovski and Sestakov replaced the robot’s mechanical switch with an EEG-emulated control switch connected to the transistor amplifier. The student volunteer sat in the second room.

A window connected the two rooms. Wires from the D/A converter attached to the robot hung from the ceiling, well out of the way of the robot’s movement.

Electrodes were placed on the top and center of the volunteer’s head, near the medial parietal cortex—the part of the brain that completes visual scene processing. More were placed behind the right ear, over the mastoid, a standard reference site for EEG recordings. Others were placed on the forehead to collect the baseline electrical signal, against which the differential biomedical amplifier could detect a rise in mu-rhythm activity.

To move the robot, the student relaxed with eyes closed. The differential biomedical amplifier recorded the EEG signals and fed them into the computer through the A/D converter, which sampled at 300 Hz. Recognition software written by Bozinovski and Sestakov translated the signals into a go command. The computer then sent a 5-volt logic pulse through the D/A converter, which the transistor amplifier boosted and sent to the robot. The volunteer stopped the Line Tracer by opening their eyes.
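
For readers curious what that recognition step looks like in modern terms, here is a minimal sketch in Python of the control principle described above: alpha/mu-band (8 to 12 Hz) power rises when the volunteer relaxes with closed eyes, and a simple threshold on that power is mapped to go and stop commands. The 300 Hz sampling rate comes from the article; the windowing, the threshold value, and the use of SciPy’s Welch estimator are illustrative choices of ours, not a reconstruction of the original 1988 software.

```python
# Illustrative sketch only -- not the original 1988 recognition software.
import numpy as np
from scipy.signal import welch

FS = 300                  # Hz, the A/D sampling rate cited in the article
ALPHA_BAND = (8.0, 12.0)  # the mu rhythm's alpha-range frequencies

def alpha_band_power(eeg_window: np.ndarray) -> float:
    """Mean power spectral density in the 8-12 Hz band."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=len(eeg_window))
    mask = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return psd[mask].mean()

def command(eeg_window: np.ndarray, threshold: float = 5.0) -> str:
    """Map band power to a robot command.

    The threshold here is a made-up value; in practice it would be
    calibrated per subject.
    """
    return "GO" if alpha_band_power(eeg_window) > threshold else "STOP"

# One second of synthetic "relaxed, eyes closed" EEG: a strong 10 Hz
# rhythm over background noise should produce a GO command.
t = np.arange(FS) / FS
relaxed = 30 * np.sin(2 * np.pi * 10 * t) + np.random.randn(FS)
print(command(relaxed))
```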

Bozinovski, Sestakov, and Bozinovska presented their findings at the 1988 IEEE International Engineering in Medicine and Biology Conference.

North Macedonia’s president on the importance of engineers

“Engineers are the driving force in every country, contributing to the welfare and progress of societies,” Stevo Pendarovski, president of North Macedonia, said at the dedication ceremony.

“Let this genuinely exceptional event of great importance for Macedonian engineering in particular, but also for Macedonian science and society as a whole, be an inspiration for all students, professors, and future engineers,” Pendarovski said, “to create and contribute to building a modern and technologically advanced world.”

IEEE President Saifur Rahman also attended the ceremony.

A plaque recognizing the technology is displayed outside the Saints Cyril and Methodius University electrical engineering faculty building, which houses the Laboratory of Intelligent Machines and Bioinformation Systems, where the Milestone was achieved. The plaque reads:

In 1988, in the Laboratory of Intelligent Machines and Bioinformation Systems, human brain signals controlled the movement of a physical object (a robot) for the first time worldwide. This linked electroencephalogram (EEG) signals collected from a brain with robotics research, opening a new channel for communication between humans and machines. EEG-controlled devices (wheelchairs, exoskeletons, etc.) have benefited numerous users and expanded technology’s role in modern society.

Administered by the IEEE History Center and supported by donors, the Milestone program recognizes outstanding technical developments around the world.

The IEEE North Macedonia Section sponsored the nomination.


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Cybathlon Challenges: 02 February 2024, ZURICH
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

We wrote about an earlier version of this absurdly simple walking robot a few years ago. That version had two motors, but this version walks fully controllably with just a single motor! We’re told that the robot’s name is Mugatu, because for a while, it wasn’t an ambiturner.

This was just presented at the IEEE Humanoids Conference in Austin. And here’s a second video with more technical detail on how the robot works:

[ CMU ]

Happy Holiday from Boston Dynamics!

Side note—has anyone built a robot that can flawlessly gift-wrap arbitrary objects yet?

[ Boston Dynamics ]

Is there a world where Digit can leverage a large language model (LLM) to expand its capabilities and better adapt to our world? We had the same question. Our innovation team developed this interactive demo to show how LLMs could make our robots more versatile and faster to deploy. The demo enables people to talk to Digit in natural language and ask it to do tasks, giving a glimpse at the future.

[ Agility Robotics ]

Thanks, Tim!

In 2028, ESA will launch its most ambitious exploration mission to search for past and present signs of life on Mars. ESA’s Rosalind Franklin rover has unique scientific potential to search for evidence of past life on Mars thanks to its drill and scientific instruments. It will be the first rover to reach a depth of up to two metres below the surface, acquiring samples that have been protected from surface radiation and extreme temperatures. The drill will retrieve soils from ancient parts of Mars and analyse them in situ with its onboard laboratory.

[ ESA ]

With ChatGPT celebrating the anniversary of its launch a year ago, we thought this would be a good time to sit down with roboticist Hod Lipson and ask him what he thinks about all the changes with the rapid evolution of AI, how they’ve enabled the creation of ChatGPT, and what all this may mean for our future.

[ Columbia Engineering ]

We propose a technique that simultaneously solves for optimal design and control parameters for a robotic character whose design is parameterized with configurable joints. At the technical core of our technique is an efficient solution strategy that uses dynamic programming to solve for optimal state, control, and design parameters, together with a strategy to remove redundant constraints that commonly exist in general robot assemblies with kinematic loops.

[ Disney Research ]

And now, this.

[ Baby Clappy ] via [ Kazumichi Moriyama ]

Humanoid robots that can autonomously operate in diverse environments have the potential to help address labor shortages in factories, assist the elderly at home, and colonize new planets. While classical controllers for humanoid robots have shown impressive results in a number of settings, they are challenging to generalize and adapt to new environments. Here, we present a fully learning-based approach for humanoid locomotion.

[ Hybrid Robotics Lab ]

At the University of Michigan, graduate students in robotics all take ROB 550: Robotic Systems Laboratory. For the Fall 2023 class, the final project asked students to create a robot capable of lifting and stacking small pallets. Students designed and built the lift mechanisms from scratch, with a wide variety of solutions being implemented.

[ Michigan Robotics ]

In-hand object reorientation is necessary for performing many dexterous manipulation tasks, such as tool use in less structured environments that remain beyond the reach of current robots. We present a general object reorientation controller that uses readings from a single commodity depth camera to dynamically reorient complex and new object shapes by any rotation in real-time, with the median reorientation time being close to seven seconds.

[ Visual Dexterity ]

If you weren’t at IEEE Humanoids this week, you missed out on meeting me in person, so shame on you. But you can see all the lightning talks from the Can We Build Baymax workshop right here.

[ CWBB ] via [ KIMLAB ]

The U.S. National Science Foundation’s Graduate Research Fellowship Program (GRFP) has helped ensure the quality, vitality and diversity of the scientific and engineering workforce by recognizing and supporting outstanding graduate students since 1952. Kyle Johnson, a doctoral student at the University of Washington, joins us to talk about his work with robotics, his GRFP experience and how he inspires the next generation.

[ NSF ]



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

The useful niche that quadrupedal robots seem to have found for themselves, at least for the moment, is infrastructure inspection. They’ve had a mild to moderate amount of success monitoring industrial sites, tracking construction progress, and things of that nature. Which is great! But when you look at what humans have historically relied on quadrupeds for, there’s a little bit of situational awareness (in the form of security), but the majority of what these animals have done for us is manual labor.

In a paper published last month in IEEE Robotics and Automation Letters, roboticists from the Robotic Systems Lab at ETH Zurich are aiming to address the fact that “legged robots are still too weak, slow, inefficient, or fragile to take over tasks that involve heavy payloads.” Their new robot, Barry, is none of those things: it can efficiently carry up to 90 kilograms so that you don’t have to.

If you go back far enough, a bunch of the initial funding for quadrupedal robots that enabled the commercial platforms that are available today was tied into the idea of robotic pack animals. Boston Dynamics’ BigDog and LS3 were explicitly designed to haul heavy loads (up to 200 kg) across rough terrain for the U.S. military. This kind of application may be obvious, but the hardware requirements are challenging. Boston Dynamics’ large quadrupeds were all driven by hydraulics, which depended on the power density of gasoline to function, and ultimately they were too complex and noisy for the military to adopt. The current generation of quadruped robots, like Spot and ANYmal, has a payload of between 10 and 15 kg.

Barry manages to carry 50 percent of the payload of LS3 in a much smaller, more efficient, and quieter form factor. It’s essentially a customized ANYmal, using unique high-efficiency electric actuators rather than hydraulics. The robot itself weighs 48 kg, and can handle unmodeled 90 kg payloads, meaning that Barry doesn’t have to know the size, weight, or mass distribution of what it’s carrying. It’s a key capability, because it makes Barry’s payload capacity useful in the real world, as the paper’s first author Giorgio Valsecchi explains: “When we use a wheelbarrow, we don’t have to change any settings on it, regardless of what we load it with—any manual adjustment is a bottleneck in usability. Why should a ‘smart’ robot be any different?” It also means that if you want to, you can even ride it.

Barry: A High-Payload and Agile Quadruped Robot youtu.be

Barry’s heroic payload is enabled by its custom actuators. While the standard approach for developing powered robotic joints involves choosing the smallest motor capable of producing the required peak power, Barry focuses on motor efficiency instead. “It turns out that the ideal solution is to have the biggest possible motor,” Valsecchi says. “It is a bit counterintuitive, but bigger motors are more efficient, they consume less energy when performing the same task. This results in a robot with more payload capabilities and a lower cost of transport.” Barry is actually quite efficient: with a cost of transport of just 0.7, it can operate with a payload for over two hours and travel nearly 10 km.
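
Those numbers are easy to sanity-check with the standard dimensionless cost of transport, CoT = E / (m g d). A back-of-the-envelope sketch follows; the masses, CoT, distance, and duration come from the article, while the implied energy budget and average power are our own inferences, not figures reported by the ETH Zurich team.

```python
# Back-of-the-envelope check of Barry's efficiency figures.
G = 9.81                 # gravity, m/s^2

robot_mass = 48.0        # kg, from the article
payload = 90.0           # kg, from the article
total_mass = robot_mass + payload

cot = 0.7                # dimensionless cost of transport, from the article
distance = 10_000.0      # m ("nearly 10 km")
duration_h = 2.0         # hours ("over two hours")

energy_j = cot * total_mass * G * distance  # E = CoT * m * g * d
speed = distance / (duration_h * 3600)      # average speed, m/s
power_w = cot * total_mass * G * speed      # implied locomotion power

print(f"implied energy budget: {energy_j / 3.6e6:.2f} kWh")      # ~2.63 kWh
print(f"speed: {speed:.2f} m/s, power: {power_w:.0f} W")          # ~1.39 m/s, ~1316 W
```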

The commercial potential for a robot like Barry is obvious, and Valsecchi is already thinking about several use cases: “carrying raw materials on construction sites to prevent injuries and increase productivity, carrying equipment in search and rescue operations to free up rescuers from excessive loads… The same technology could be used to design a walking wheelchair, and we actually got some requests for this specific use case. Once we started showing the robot with a big box on top, people realized a lot of things could be done.”

At the moment, Barry doesn’t yet have much in the way of perception, so giving the robot the ability to intelligently navigate around obstacles and over complex terrain is one of the things that the researchers will be working on next. They’re also starting to think about potential commercial applications, and it certainly seems like there’s a market for a robot like this—heck, I’d buy one.

The preserved 200-year-old body of the original Barry. Photo via Wikipedia by PraktikantinNMBE, reproduced under CC BY-SA 4.0.

Barry, by the way, is named after a legendary St. Bernard who saved the lives of more than 40 people in the Swiss Alps in the early 1800s, including by carrying them to safety on his back. “Being able to ride the robot was one of our ambitions,” Valsecchi tells us. “When we managed to accomplish that I thought we did well enough to tribute the original Barry by using his name, to convey our vision of what robots could become.” Barry the dog died in 1814 (apparently stabbed by someone he was trying to rescue who thought he was a wolf), but his preserved body is on display at the Natural History Museum in Bern.

Barry: A High-Payload and Agile Quadruped Robot, by Giorgio Valsecchi, Nikita Rudin, Lennart Nachtigall, Konrad Mayer, Fabian Tischhauser, and Marco Hutter from ETH Zurich, is published in IEEE Robotics and Automation Letters.



For robots to become integrated into our daily environment, they must be designed to gain the trust of both users and bystanders. This is particularly important for social robots, including those that assume the role of a mediator, working towards positively shaping relationships and interactions between individuals. One crucial factor influencing trust is the appropriate handling of personal information. Previous research on privacy has focused on data collection, secure storage, and abstract third-party disclosure risks. However, robot mediators may face situations where the disclosure of private information about one person to another specific person appears necessary. It is not clear if, how, and to what extent robots should share private information between people. This study presents an online investigation into appropriate robotic disclosure strategies. Using a vignette design, we presented participants with written descriptions of situations in which a social robot reveals personal information about its owner to support pro-social human-human interaction. Participants were asked to choose the most appropriate robot behaviors, which differed in the level of information disclosure. We aimed to explore the effects of disclosure context, such as the relationship to the other person and the information content. The findings indicate that both the information content and the relationship configuration significantly influence the perception of appropriate behavior but are not the sole determinants of disclosure-adequacy perception. The results also suggest that the expected benefits of disclosure and individuals’ general privacy attitudes serve as additional influential factors. These insights can inform the design of future mediating robots, enabling them to make more privacy-appropriate decisions, which could foster trust and acceptance.

Material handling vehicles (loaders, excavators, forklifts, harvesters, etc.) have seen a strong increase in automation efforts in recent years. The contexts in which such vehicles operate are frequently complex, and because industrial material handling scenarios are often highly specific, know-how is fragmented and the literature is not as extensive as, for example, that on passenger vehicle automation. In this paper, we present a contextual design space for automated material handling vehicles (AMHV) that is intended to inform context analysis and design activities across a wide spectrum of material handling use cases. It was developed on the basis of existing context and design spaces for vehicle and machine automation and extended with expert knowledge. The design space consists of separate context and interaction subspaces, which capture the situation and each individual point of interaction, respectively. Implications, opportunities, and limitations for the investigation and design of AMHV are discussed.

Introduction: As a result of Industry 5.0’s technological advancements, collaborative robots (cobots) have emerged as pivotal enablers for refining manufacturing processes while re-focusing on humans. However, the successful integration of these cutting-edge tools hinges on a better understanding of human factors in interactions with such new technologies, ultimately fostering workers’ trust and acceptance and promoting low-fatigue work. This study thus delves into the intricate dynamics of human-cobot interactions by adopting a human-centric view.

Methods: With this intent, we targeted senior workers, who often contend with diminishing work capabilities, and we explored the nexus between various human factors and task outcomes during a joint assembly operation with a cobot on an ergonomic workstation. Exploiting a dual-task manipulation to increase the task demand, we measured performance, subjective perceptions, eye-tracking indices and cardiac activity during the task. Firstly, we provided an overview of the senior workers’ perceptions regarding their shared work with the cobot, by measuring technology acceptance, perceived wellbeing, work experience, and the estimated social impact of this technology in the industrial sector. Secondly, we asked whether the considered human factors varied significantly under dual-tasking, thus responding to a higher mental load while working alongside the cobot. Finally, we explored the predictive power of the collected measurements over the number of errors committed at the work task and the participants’ perceived workload.

Results: The present findings demonstrated that senior workers exhibited strong acceptance of and positive experiences with our advanced workstation and the cobot, even under higher mental strain. However, their task performance suffered, with more errors and longer durations during dual-tasking, while eye behavior partially reflected the increased mental demand. We also gained interesting insights into the predictive power of some of the collected indices over the number of errors committed in the assembly task, even though the same did not apply to predicting perceived workload levels.

Discussion: Overall, the paper discusses possible applications of these results in the 5.0 manufacturing sector, emphasizing the importance of adopting a holistic human-centered approach to understand the human-cobot complex better.

Introduction: Electromagnetically controlled small-scale robots show great potential in precise diagnosis, targeted delivery, and minimally invasive surgery. Automatic navigation of such robots could reduce human intervention, as well as the risk and difficulty of surgery. However, it is challenging to build a precise kinematics model for automatic robotic control because the control process is affected by various delays and complex environments.

Method: Here, we propose a learning-based intelligent trajectory planning strategy for automatic navigation of magnetic robots without kinematics modeling. The Long Short-Term Memory (LSTM) neural network is employed to establish a global mapping relationship between the current sequence in the electromagnetic actuation system and the trajectory coordinates.

Result: We manually control the robot to move along a curved path 50 times to form the training database for the LSTM network. The trained LSTM network is validated by outputting current sequences that automatically drive the magnetic robot along the same curved path, as well as along new tortuous and branched paths, in simulated vascular tracks.

Discussion: The proposed trajectory planning strategy is expected to impact the clinical applications of robots.
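
To make the abstract’s mapping concrete, here is a minimal PyTorch sketch of the kind of model it describes: an LSTM trained on manually recorded demonstrations to map trajectory coordinates to coil-current sequences. The layer sizes, the number of coils, and the placeholder tensors are hypothetical; only the overall structure, 50 recorded runs and an LSTM from coordinates to currents, follows the abstract.

```python
# Hypothetical sketch of an LSTM mapping trajectory coordinates to
# actuation currents, in the spirit of the abstract above.
import torch
import torch.nn as nn

class CurrentPredictor(nn.Module):
    def __init__(self, n_coils: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_coils)  # one current per coil

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (batch, time, 2) trajectory coordinates
        h, _ = self.lstm(xy)
        return self.head(h)                     # (batch, time, n_coils)

model = CurrentPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder stand-ins for the 50 manually driven demonstrations:
# recorded (trajectory, current-sequence) pairs of 100 time steps each.
traj = torch.randn(50, 100, 2)
currents = torch.randn(50, 100, 8)

for _ in range(200):  # simple supervised training loop
    opt.zero_grad()
    loss = loss_fn(model(traj), currents)
    loss.backward()
    opt.step()
```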

The metaverse is a relatively amorphous concept of innovation referring to technological advancement. As a fusion of the real world and the virtual world, it has brought significant benefits and convenience to education, communication, the economy, and other fields. The COVID-19 outbreak has stimulated the growth of metaverse applications in medicine. The technology has broad applications, including online remote medical treatment, online conferences, medical education, and the preparation of surgical plans. However, technical, security, and financial challenges must be tackled before the metaverse can see widespread future use. The metaverse holds great promise, and it will shape future scientific and technological advancements in the medical industry. This review article primarily aims to summarize the applications of the metaverse in medicine and the challenges they pose for the future of medicine.



Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.
Cybathlon Challenges: 02 February 2024, ZURICH, SWITZERLAND
Eurobot Open 2024: 8–11 May 2024, LA ROCHE-SUR-YON, FRANCE
ICRA 2024: 13–17 May 2024, YOKOHAMA, JAPAN

Enjoy today’s videos!

This magnetically actuated soft robot is perhaps barely a robot by most definitions, but I can’t stop watching it flop around.

In this work, Ahmad Rafsanjani, Ahmet F. Demirörs, and co‐workers from SDU (DK) and ETH (CH) introduce kirigami into a soft magnetic sheet to achieve bidirectional crawling under rotating magnetic fields. Experimentally characterized crawling and deformation profiles, combined with numerical simulations, reveal programmable motion through changes in cut shape, magnet orientation, and translational motion. This work offers a simple approach toward untethered soft robots.

[ Paper ] via [ SDU ]

Thanks, Ahmad!

Winner of the earliest holiday video is the LARSEN team at Inria!

[ Inria ]

Thanks, Serena!

Even though this is just a rendering, I really appreciate Apptronik being like, “we’re into the humanoid thing, but sometimes you just don’t need legs.”

[ Apptronik ]

We’re not allowed to discuss unmentionables here at IEEE Spectrum, so I can only tell you that Digit has started working in a warehouse handling, uh, things.

[ Agility ]

Unitree’s sub-$90k H1 Humanoid suffering some abuse in a non-PR video.

[ Impress ]

Unlike me, ANYmal can perform 24/7 in all weather.

[ ANYbotics ]

Most of the world will need to turn on subtitles for this, but it’s cool to see how industrial robots can be used to make art.

[ Kuka ]

I was only 12 when this episode of Scientific American Frontiers aired, but I totally remember Alan Alda meeting Flakey!

And here’s the segment, it’s pretty great.

[ SRI ]

Agility CEO Damion Shelton talks about the hierarchy of robot control and draws similarities to the process of riding a horse.

[ Agility ]

Seeking to instill students with real-life workforce skills through hands-on learning, teachers at Central High School in Louisville, Ky., incorporated Spot into their curriculum. For students at CHS, a magnet school for Jefferson County Public Schools district, getting experience with an industrial robot has sparked a passion for engineering and robotics, kickstarted advancement into university engineering programs, and built lifelong career skills. See how students learn to operate Spot, program new behaviors for the robot, and inspire their peers with the school’s “emotional support robot” and unofficial mascot.

[ Boston Dynamics ]



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Thanks to eons of evolution, vines have the ability to seek out light sources, growing in the direction that will optimize their chances of absorbing sunlight and thriving. Now, researchers have succeeded in creating a vine-inspired crawling bot that can achieve similar feats, seeking out and moving towards light and heat sources. It’s described in a study published last month in IEEE Robotics and Automation Letters.

Shivani Deglurkar, a Ph.D. candidate in the department of Mechanical and Aerospace Engineering at the University of California, San Diego, helped co-design these automated “vines.” Because of its light- and heat-seeking abilities, the system doesn’t require a complex centralized controller. Instead, the “vines” automatically move towards a desired target. “[Also], if some of the vines or roots are damaged or removed, the others remain fully functional,” she notes.

While the tech is still in its infancy, Deglurkar says she envisions it helping in different applications related to solar tracking, or perhaps even in detecting and fighting smoldering fires.

It uses a novel actuator that contracts in the presence of light, causing it to gravitate towards the source. Shivani Deglurkar et al.

To help the device automatically gravitate towards heat and light, Deglurkar’s team developed a novel actuator. It uses a photo absorber suspended in a low-boiling-point fluid, contained in many small, individual pouches along the sides of the vine’s body. They call this actuator a Photothermal Phase-change Series Actuator (PPSA).

When exposed to light, the PPSAs absorb it, heat up, inflate with vapor, and contract. Meanwhile, as the robot’s body is pressurized, it elongates by unfurling material from inside its tip. “At the same time, the PPSAs on the side exposed to light contract, shortening that portion of the robot, and steering it toward the [light or heat] source,” explains Deglurkar.
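
As a toy model of that steering principle (our own illustration, not taken from the paper), the side of the vine receiving more light contracts more, so the difference in illumination sets the bending curvature. The gain, growth increment, and irradiance values below are made up.

```python
import math

def steering_curvature(light_left: float, light_right: float,
                       gain: float = 0.002) -> float:
    """Bending curvature (1/m) from asymmetric PPSA contraction.

    More light on one side means more contraction on that side,
    bending the vine toward the source; a positive result is a left
    turn. Units and gain are hypothetical.
    """
    return gain * (light_left - light_right)

# Integrate heading as the vine grows 0.1 m per step with a light
# source off to its left.
heading = 0.0
for step in range(5):
    heading += steering_curvature(400.0, 150.0) * 0.1
    print(f"step {step}: heading {math.degrees(heading):.1f} deg")
```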

Her team then tested the system, placing it at different distances from an infrared light source, and confirmed that it gravitates towards the source at short distances. Its ability to do so depends on the light intensity, with stronger light sources causing the device to bend more sharply toward the source.

Full turning of the vine by the PPSAs takes about 90 seconds. Strikingly, the device was even able to navigate around obstacles thanks to its inherent need to seek out light and heat sources.

Charles Xiao, a Ph.D. candidate in the department of Mechanical Engineering at the University of California, Santa Barbara, helped co-design the vine. He says he was surprised to see its responsiveness in even very low lighting. “Sunlight is about 1,000 W/m², and our robot has been shown to work at a fraction of solar intensity,” he explains, noting that many comparable systems require illumination greater than that of one sun.

Xiao says that the main strength of the automated vine is its simplicity and low cost to make. But more work is needed before it can hit the market—or makes its debut fighting fires. “It is slow to respond to light and heat signals and not yet designed for high temperature applications,” explains Xiao.

Future prototypes would therefore need better performance at high temperatures and the ability to sense fires before they can be deployed in real-world environments. Moving forward, Deglurkar says her team’s next steps include designing the actuators to be more selective to the wavelengths emitted by a fire and developing actuators with a faster response time.



Every minute counts when someone suffers a cardiac arrest. New research suggests that drones equipped with equipment to automatically restart someone’s heart could help get life-saving care to people much faster.

If your heart stops beating outside of a hospital, your chance of survival is typically less than 10 percent. One thing that can boost the prospect of pulling through is an automated external defibrillator (AED)—a device that can automatically diagnose dangerous heart rhythms and deliver an electric shock to get the heart pumping properly again.

AEDs are designed to be easy to use and provide step-by-step voice instructions, making it possible for untrained bystanders to deliver treatment before an ambulance arrives. But even though AEDs are often installed in public spaces such as shopping malls and airports, the majority of cardiac arrests outside of hospitals actually occur in homes.

A team of Swedish researchers decided to use drones to deliver AEDs directly to patients. Over the course of an 11-month trial in the suburbs of Gothenburg, the team showed they could get the devices to the scene of a medical emergency before an ambulance 67 percent of the time. Generally the AED arrived more than three minutes earlier, giving bystanders time to attach the device before paramedics reached the patient. In one case, this saved a patient’s life.

“The results are really promising because we show that it’s possible to beat the ambulance services by several minutes in a majority of cases,” says Andreas Claesson, an associate professor at the Karolinska Institute in Solna who led the research. “If you look at cardiac arrest, each minute that passes without treatment survival decreases by about 10 percent. So a time benefit of three minutes, as in this study, could potentially increase survival.”
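
To see why a few minutes matter so much, here is a one-line calculation (our own arithmetic, not the study’s) applying the roughly 10-percent-per-minute relative decline that Claesson describes:

```python
# Fraction of a patient's baseline survival chance remaining after a
# delay, assuming ~10 percent relative decline per minute untreated.
def relative_survival(minutes: float, decline: float = 0.10) -> float:
    return (1 - decline) ** minutes

# A defibrillator arriving 3 minutes sooner: 0.9**3 is about 0.73, so
# roughly a quarter more of the baseline survival chance is preserved.
print(f"{relative_survival(3):.0%} of the baseline chance remains")
```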

The project was a collaboration with Gothenburg-based drone operator Everdrone and covered 194.3 square kilometers of semi-urban areas around the city, with a total population of roughly 200,000. Throughout the study period, the company operated five DJI drones that could be dispatched from hangars at five different locations around the city. The drones could autonomously fly to the scene of an emergency under the watch of a single safety supervisor. Each drone carried an AED in a basket that could be winched down from an altitude of 30 meters.

When the local emergency response center received a call about a suspected cardiac arrest or ongoing CPR, one of the drones was dispatched immediately. Once the drone reached the location, it lowered the AED to the ground. If the emergency dispatcher deemed it appropriate and safe, the person who had called in the cardiac arrest was directed to retrieve the device.


Drones weren’t dispatched for every emergency call, because they weren’t allowed to operate in rain or strong winds, in no-fly zones, or when calls came from high-rise buildings. But in a paper in the December edition of The Lancet Digital Health, the research team reported that of the 55 cases in which both a drone and an ambulance reached the scene of the emergency, the drone got there first 37 times, with a median lead time of 3 minutes and 14 seconds.
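Those exclusions amount to a simple eligibility check run before dispatch. Here is a minimal sketch of such decision logic; the field names and the wind threshold are invented for illustration, since the paper does not publish Everdrone’s actual dispatch criteria in code form.

```python
from dataclasses import dataclass

@dataclass
class EmergencyCall:
    suspected_cardiac_arrest: bool
    raining: bool
    wind_speed_ms: float      # meters per second
    in_no_fly_zone: bool
    high_rise_building: bool

# Hypothetical wind limit: the trial treated strong wind as a constraint,
# but the exact operational threshold here is ours for illustration.
MAX_WIND_MS = 8.0

def drone_eligible(call: EmergencyCall) -> bool:
    """Mirror the trial's stated exclusions: weather, airspace, building type."""
    return (call.suspected_cardiac_arrest
            and not call.raining
            and call.wind_speed_ms <= MAX_WIND_MS
            and not call.in_no_fly_zone
            and not call.high_rise_building)

print(drone_eligible(EmergencyCall(True, False, 4.2, False, False)))  # True
```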

Only 18 of those emergency calls actually turned out to be cardiac arrests, but in six of those cases the caller managed to apply the AED. In two cases the device recommended a shock, and one of those patients survived thanks to the intervention. The number of cases is too small to support claims about the clinical effectiveness of the approach, says Claesson, but the results clearly show that drones are an effective way to improve emergency response times.

“Three minutes is quite substantial,” says Timothy Chan, a professor of mechanical and industrial engineering at the University of Toronto, who has investigated the effectiveness of drone-delivered AEDs. “Given that in most parts of the world emergency response times are fairly static over time, it would be a huge win if we could achieve and sustain a big reduction like this in widespread practice.”

The approach won’t work everywhere, Claesson admits. In rural areas the technology would likely yield even bigger reductions in response time, but the lower population density means there would be too few cases to justify the investment. And in big cities, ambulance response times are already relatively fast, and high-rise buildings would make drone operation challenging.

But in the kind of semi-urban areas where the trial was conducted, Claesson thinks the technology is very promising. Each drone system costs roughly US $125,000 a year to run and can cover an area of 30,000 to 40,000 inhabitants, which he says is already fairly cost-effective. The idea will become even more compelling when the drones can respond to a wider range of emergencies.
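A quick back-of-the-envelope check of that cost claim, using only the figures quoted above:

```python
# Annual cost per inhabitant, from the figures quoted in the article.
annual_cost_usd = 125_000
inhabitants_low, inhabitants_high = 30_000, 40_000

print(f"${annual_cost_usd / inhabitants_high:.2f} to "
      f"${annual_cost_usd / inhabitants_low:.2f} per inhabitant per year")
# -> roughly $3.13 to $4.17 per inhabitant per year
```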

That could involve delivering medical supplies for other time-sensitive medical emergencies like drug overdoses, allergic reactions or severe bleeding, he says. Drones equipped with cameras could also rapidly relay video of car accidents or fires to dispatchers, enabling them to tailor the emergency response based on the nature and severity of the incident.

The biggest challenge when it comes to delivering medical support such as AEDs by drone, says Claesson, is the reliance on untrained bystanders. “It’s a really stressful event for them,” he says. “Most often it’s a relative and most often they don’t know CPR and they might not know how an AED works.”

One promising future direction could be to combine drone-delivered AEDs with existing smartphone apps that are used to quickly alert volunteers trained in first aid to nearby medical emergencies. “In Sweden, in 40 percent of cases they arrive before an ambulance,” says Claesson. “We could just send a push notification to the app saying a drone will deliver an AED in two minutes, make your way to the site.”



In this article, we present RISE—a Robotics Integration and Scenario-Management Extensible-Architecture—for designing human–robot dialogs and conducting Human–Robot Interaction (HRI) studies. In current HRI research, interdisciplinarity in the creation and implementation of interaction studies is becoming increasingly important; in addition, research results often lack reproducibility. The presented open-source architecture aims to address both issues. We therefore discuss the advantages and disadvantages of various existing tools from different subfields of robotics, and from this overview of the literature we derive requirements for an architecture that 1) supports interdisciplinary research, 2) allows reproducible research, and 3) is accessible to other researchers in the field of HRI. Our architecture tackles these requirements by providing a graphical user interface that explains the robot’s behavior and allows introspection into the current state of the dialog. It also offers control options for easily conducting Wizard-of-Oz studies. To achieve transparency, the dialog is modeled explicitly and the robot’s behavior can be configured. Furthermore, the modular architecture offers an interface for external features and sensors, and it can be extended to new robots and modalities.
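To make the idea of an explicitly modeled, introspectable dialog concrete, here is a minimal sketch of a dialog as a state machine with a Wizard-of-Oz override. The class and method names are illustrative only; this is not the actual RISE API.

```python
# Minimal sketch of an explicitly modeled human-robot dialog: states,
# transitions, a history for GUI introspection, and a Wizard-of-Oz
# override. Illustrative only; not the RISE implementation.

class DialogModel:
    def __init__(self, transitions, initial):
        self.transitions = transitions   # {state: {user_input: next_state}}
        self.state = initial
        self.history = [initial]         # exposed for introspection in a GUI

    def step(self, user_input):
        """Advance the dialog according to the explicit model."""
        self.state = self.transitions[self.state].get(user_input, self.state)
        self.history.append(self.state)
        return self.state

    def wizard_override(self, state):
        """Let a human operator force a state (Wizard-of-Oz control)."""
        self.state = state
        self.history.append(state)

dialog = DialogModel(
    transitions={
        "greeting": {"hello": "ask_task"},
        "ask_task": {"task_given": "confirm", "unclear": "ask_task"},
        "confirm": {"yes": "execute", "no": "ask_task"},
    },
    initial="greeting",
)
dialog.step("hello")               # -> "ask_task"
dialog.wizard_override("confirm")  # operator intervenes mid-study
```

Modeling the dialog as plain data in this way is what makes the robot’s behavior both configurable and transparent: the GUI can render the transition table and the state history directly.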

Introduction: Robotic exoskeletons are emerging technologies that have demonstrated their effectiveness in assisting with Activities of Daily Living. However, kinematic disparities between human and robotic joints can result in misalignment between humans and exoskeletons, leading to discomfort and potential user injuries.

Methods: In this paper, we present an ergonomic knee exoskeleton based on a dual four-bar linkage mechanism powered by hydraulic artificial muscles for stair ascent assistance. The device comprises two asymmetric four-bar linkage mechanisms on the medial and lateral sides to accommodate the internal rotation of the knee and address the kinematic discrepancies between these sides. A genetic algorithm was employed to optimize the parameters of the four-bar linkage mechanism to minimize misalignment between human and exoskeleton knee joints. The proposed device was evaluated through two experiments. The first experiment measured the reduction in undesired load due to misalignment, while the second experiment evaluated the device’s effectiveness in assisting stair ascent in a healthy subject.
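As a rough sketch of how such an optimization might be set up, the toy genetic algorithm below searches over four link lengths to minimize a stand-in misalignment objective. The objective, bounds, and target geometry are simplified illustrations, not the authors’ actual kinematic model.

```python
import random

# Toy genetic algorithm over four-bar link lengths, minimizing a stand-in
# misalignment objective. Bounds and target values are assumptions for
# illustration; the paper's real objective compares human and exoskeleton
# knee-joint trajectories over the flexion range.

BOUNDS = [(0.02, 0.15)] * 4          # link lengths in meters (assumed range)
TARGET = [0.05, 0.10, 0.08, 0.06]    # hypothetical "ideal" geometry

def misalignment(links):
    return sum((l - t) ** 2 for l, t in zip(links, TARGET))

def mutate(links, rate=0.1):
    return [min(hi, max(lo, l + random.gauss(0, rate * (hi - lo))))
            for l, (lo, hi) in zip(links, BOUNDS)]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(50)]
for _ in range(200):
    pop.sort(key=misalignment)
    elite = pop[:10]                  # keep the best designs
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(40)]

print("best links:", [round(l, 3) for l in min(pop, key=misalignment)])
```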

Results: The experimental results indicate that the proposed device imposes a significantly reduced undesired load compared with a traditional revolute joint, decreasing from 14.15 N and 18.32 N to 1.88 N and 1.07 N on the medial and lateral sides, respectively. Moreover, a substantial reduction in muscle activity during stair ascent was observed, with a 55.94% reduction in the surface electromyography signal.

Discussion: The reduced undesired load of the proposed dual four-bar linkage mechanism highlights the importance of the adopted asymmetrical design for reduced misalignment and increased comfort. Moreover, the proposed device was effective at reducing the effort required during stair ascent.

The present research is novel in that we followed a user-centered approach to implement and train two working-memory architectures on an industrial RB-KAIROS+ robot: GRU, a state-of-the-art architecture, and WorkMATe, a biologically inspired alternative. Although user-centered approaches are essential for comfortable and safe HRI, they are still rare in industrial settings. To close this research gap, we conducted two online user studies with large heterogeneous samples. The major aim of these studies was to evaluate the RB-KAIROS+ robot’s appearance, movements, and perceived memory functions before (User Study 1) and after (User Study 2) the implementation and training of robot working memory. In User Study 1, we furthermore explored participants’ ideas about robot memory, as well as which aspects of the robot’s movements participants found positive and which they would change. The effects of participants’ demographic background and attitudes were controlled for. In User Study 1, participants’ overall evaluations of the robot were moderate; participant age and negative attitudes toward robots led to more negative robot evaluations, and exploratory analyses suggest these effects were driven by low perceived experience with robots. Participants expressed clear ideas about robot memory and precise suggestions for safe, efficient, and comfortable robot navigation, which are valuable for further research and development. In User Study 2, the implementation of WorkMATe and GRU led to more positive evaluations of perceived robot memory, but not of robot appearance or movements, and participants’ robot evaluations were driven by their positive views of robots. Our results demonstrate that considering potential users’ views can greatly contribute to efficient and positively perceived robot navigation, while users’ experience with robots is crucial for a positive HRI.
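For reference, the GRU half of that comparison is a standard gated recurrent unit; its textbook update equations fit in a few lines of NumPy. This is a generic cell with biases omitted, shown only to clarify the architecture, not the authors’ trained model.

```python
import numpy as np

# Textbook GRU cell update (biases omitted for brevity). Generic, for
# reference only; not the model trained on the RB-KAIROS+ robot.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1 - z) * h + z * h_tilde          # blend old and new state

rng = np.random.default_rng(0)
d_in, d_h = 4, 8
params = [rng.standard_normal((d_h, d)) for d in (d_in, d_h) * 3]
h = np.zeros(d_h)
h = gru_step(rng.standard_normal(d_in), h, *params)
print(h.shape)  # (8,)
```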
