Feed aggregator



Brain-machine interface (BMI) technology, for all its decades of development, still awaits widespread use. One reason is that the hardware and software behind noninvasive approaches, which use electroencephalogram (EEG) sensors placed on the scalp, are not yet up to the task; another is that approaches relying on brain implants require surgery.

Now, researchers at the University of Technology Sydney (UTS), in Australia, working in collaboration with the Australian Army, have developed portable prototype dry sensors that achieve 94 percent of the accuracy of benchmark wet sensors, but without the latter’s awkwardness, lengthy setup time, need for messy gels, and limited reliability outside the lab.

“Dry sensors have performed poorly compared to the gold standard silver on silver chloride wet sensors,” says Francesca Iacopi, from the UTS Faculty of Engineering and IT. “This is especially the case when monitoring EEG signals from hair-covered curved areas of the scalp. That’s why they are needle-shaped, bulky, and uncomfortable for users.”

“We’ve used [the new sensors] in a field test to demonstrate hands-free operations of a quadruped robot using only brain signals.”
—Francesca Iacopi, University of Technology Sydney

Iacopi, together with Chin-Teng Lin, a faculty colleague specializing in BMI algorithm research, has developed three-dimensional micropatterned sensors that use sub-nanometer-thick epitaxial graphene for the contact area. The sensors can be attached to the back of the head, the best location for detecting EEG signals from the visual cortex, the area of the brain that processes visual information.

“As long as the hair is short, the sensors provide enough skin contact and low impedance to compare well on a signal-to-noise basis with wet sensors,” says Iacopi. “And we’ve used them in a field test to demonstrate hands-free operations of a quadruped robot using only brain signals.”

The sensors are fabricated on a silicon substrate over which a layer of cubic silicon carbide (3C-SiC) is deposited and then patterned using photolithography and etching to form structures approximately 10 micrometers thick—the three-dimensional patterning being crucial to obtaining good contact with the curved, hair-covered parts of the scalp, according to the researchers. A catalytic alloy method is then used to grow epitaxial graphene over the surface of the patterned structure.

The researchers chose SiC on silicon because it’s easier to pattern and to integrate with silicon than SiC alone. And as for graphene, “it’s extremely conductive, it’s biocompatible, and it’s resilient and highly adhesive to its substrate,” says Iacopi. In addition, “it can be hydrated and act like a sponge to soak up the moisture and sweat on the skin, which increases its conductivity and lowers impedance.”

Brain Robotics Interface

Several patterns were tested, and a hexagonal structure that provided the best contact with the skin through the hair was chosen. With redundancy in mind, eight sensors were attached with pin buttons to a custom-made sensor pad, which was then mounted on an elastic headband wrapped around the operator’s head. All eight sensors recorded EEG signals to varying degrees depending on their location and the pressure from the headband, explains Lin. Results of the tests were published last month in Applied Nano Materials.

To test the sensors, an operator is also fitted with a head-mounted augmented-reality lens that displays six white flickering squares, each representing a different command. When the operator concentrates on a specific square, a particular collective biopotential is produced in the visual cortex and picked up by the sensors. The signal is sent via Bluetooth to a decoder in the head mount, which converts it into the intended command; the command is then wirelessly transmitted to a receiver in the robot.

“The system can issue up to nine commands at present, though only six commands have been tested and verified for use with the graphene sensors,” says Lin. “Each command corresponds to a specific action or function such as go forward, turn right, or stop. We will add more commands in the future.”
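To make the decoding step above more concrete, here is a minimal Python sketch of a frequency-tagged (SSVEP-style) classifier that maps EEG power at the flicker frequencies to robot commands. The sampling rate, channel count, flicker frequencies, and command names are illustrative assumptions and are not taken from the UTS/Army system.

```python
# Minimal SSVEP-style command decoder (illustrative sketch only).
# Assumed, not from the UTS system: 8 channels, 256 Hz sampling,
# six flicker frequencies, and the command names below.
import numpy as np

FS = 256                                   # sampling rate, Hz (assumed)
FLICKER_HZ = [8, 9, 10, 11, 12, 13]        # one flicker rate per on-screen square (assumed)
COMMANDS = ["forward", "back", "turn_left", "turn_right", "sit", "stop"]

def decode_command(eeg_window: np.ndarray) -> str:
    """eeg_window: (n_channels, n_samples) EEG recorded over the visual cortex."""
    n = eeg_window.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    power = (np.abs(np.fft.rfft(eeg_window, axis=1)) ** 2).mean(axis=0)  # channel-averaged power
    # Score each candidate flicker frequency by the power in the nearest FFT bin.
    scores = [power[np.argmin(np.abs(freqs - f))] for f in FLICKER_HZ]
    return COMMANDS[int(np.argmax(scores))]

# Example: a synthetic 2-second window dominated by a 10 Hz response.
t = np.arange(2 * FS) / FS
fake = 1e-6 * np.sin(2 * np.pi * 10 * t) + 1e-7 * np.random.randn(8, 2 * FS)
print(decode_command(fake))                # prints "turn_left" (the 10 Hz square)
```

A real decoder would also handle harmonics, artifact rejection, and a "no command" state, but the core idea—pick the flicker frequency whose power dominates—is what turns gaze-dependent visual-cortex activity into discrete robot commands.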

The Australian Army successfully carried out two field tests using a quadruped robot. In the first test, the soldier operator had the robot follow a series of visual guides set out over rough ground. The second test had the operator take on the role of a section commander. He provided directions to both the robot and soldiers in the team as they conducted a simulated clearance of several buildings in an urban war setting, with the robot preceding the soldiers in checking out the buildings.

The development of executive function (EF) in children, particularly with respect to self-regulation skills, has been linked to long-term benefits in terms of social and health outcomes. One such skill is the ability to deal with frustrations when waiting for a delayed, preferred reward. Although robots have increasingly been utilized in educational situations that involve teaching psychosocial skills to children, including various aspects related to self-control, the utility of robots in increasing the likelihood of self-imposed delay of gratification remains to be explored. Using a single-case experimental design, the present study exposed 24 preschoolers to three experimental conditions in which a choice was provided between an immediately available reward and a delayed but larger reward. The likelihood of waiting increased over sessions when children were simply asked to wait, but waiting times did not increase further during a condition in which teachers offered activities as a distraction. However, when children were exposed to robots and given the opportunity to interact with them, waiting times for the majority of children increased with medium to large effect sizes. Given the positive implications of strong executive function, how it might be increased in children in whom it is lacking, limited, or still developing is of considerable import. This study highlights the effectiveness of robots as a distractor during waiting times and outlines a potential new application of robots in educational contexts.



This sponsored article is brought to you by Robotics Summit & Expo.

The Robotics Summit & Expo, taking place May 10-11 at the Boston Convention Center, will bring together the brightest minds in robotics to share their commercial robotics development experiences.

Learn from industry-leading speakers, build new relationships by networking, and see demos from 150+ exhibitors showcasing enabling technologies to help build commercial robots.


Use code IEEE25 at checkout to save 25% off your full conference pass!

The conference programming will provide professionals the information they need to successfully develop the next generation of commercial robots. This year’s program has an exceptional lineup of speakers.

The Robotics Summit keynote speakers include the following:

  • Howie Choset, Professor of Robotics, Carnegie Mellon University: “Idea to Reality: Commercializing Robotics Technologies”
  • Laura Major, CTO, Motional: “Scalable AI Solutions for Driverless Vehicles”
  • Marc Raibert, Executive Director, AI Institute: “The Next Decade in Robotics”
  • Martin Buehler, Global Head of Robotics R&D, Johnson & Johnson MedTech: “The Future of Surgical Robotics”
  • Nicolaus Radford, CEO, Nauticus Robotics: “Developing Robots for Final Frontiers”

The expo hall at the Robotics Summit will have more than 150 exhibitors showcasing their latest enabling technologies, products and services that can help robotics engineers throughout their development journey.

The Robotics Summit also offers networking opportunities, a Career Fair, a robotics development challenge and much more.



Gain full access to the world’s leading event dedicated to commercial robotics development with our discounted rate.

Use code IEEE25 at checkout to save 25% off your full conference pass!

Discounts are also available for academia, associations, and corporate groups. Please e-mail events@wtwhmedia.com for more details about our discount programs.

Expo-only tickets are just $75. Attendees can purchase tickets for the event here.




As personalization technology increasingly orchestrates individualized shopping or marketing experiences in industries such as logistics, fast-moving consumer goods, and food delivery, these sectors require flexible solutions that can automate object grasping for unknown or unseen objects without much modification or downtime. Most solutions in the market are based on traditional object recognition and are, therefore, not suitable for grasping unknown objects with varying shapes and textures. Adequate learning policies enable robotic grasping to accommodate high-mix and low-volume manufacturing scenarios. In this paper, we review the recent development of learning-based robotic grasping techniques from a corpus of over 150 papers. In addition to addressing the current achievements from researchers all over the world, we also point out the gaps and challenges faced in AI-enabled grasping, which hinder robotization in the aforementioned industries. In addition to 3D object segmentation and learning-based grasping benchmarks, we have also performed a comprehensive market survey regarding tactile sensors and robot skin. Furthermore, we reviewed the latest literature on how sensor feedback can be trained by a learning model to provide valid inputs for grasping stability. Finally, learning-based soft gripping is evaluated as soft grippers can accommodate objects of various sizes and shapes and can even handle fragile objects. In general, robotic grasping can achieve higher flexibility and adaptability, when equipped with learning algorithms.



Interactive robotics is a relatively new field of study, but in a short time it has moved on from just performing preprogrammed repetitive tasks to more complex activities, including interactions with living creatures. Biocompatible and biomimetic robots, for example, are being increasingly used to study animals and plants.

Animal-in-the-loop robotic systems are especially effective for studying collective behaviors that are otherwise challenging to observe with traditional methods. Not only do these systems provide new insights for animal behavior and conservation studies; they also push innovation in robotics engineering. A recent collaboration between the Swiss École Polytechnique Fédérale de Lausanne (EPFL) and the University of Graz in Austria demonstrated how effective animal-in-the-loop systems can be: The researchers developed a robotic system camouflaged as a honeycomb sheet and integrated it into a honeybee colony.

With the robotic system, the researchers studied three colonies of European honeybee (Apis mellifera) nonintrusively during the winter months of 2020 and 2021. The researchers were able to study collective thermoregulation behaviors in the colony, influence bee movement within the hive by modulating temperatures, and notice new patterns of movement. The group published their findings in March in Science Robotics.

“Basically, our device is a strange robotic system that not only is biocompatible, but also, it has a bunch of sensors, electronics, [and] thermal actuators to interact with honeybee colonies,” says Rafael Barmak, a doctoral student at EPFL and one of the authors of the study. Honeybees are notoriously territorial and will either destroy or cover up any foreign body in a hive. Therefore, while designing their system, the engineers had to account for not just the robotic functionalities required, but also the social behaviors of honeybees.

EPFL and University of Graz/Science Robotics

Using robots to study animals, Barmak says, means that tedious, repetitive tasks can be automated, like measuring localized temperatures, which is very important for a healthy hive and the bee life cycle. “Measuring temperature is not simple inside a honeybee colony…. What is cool about this device is that once it is well integrated in the colony, the bees surround the sensors [embedded in it].” Otherwise, beekeepers and scientists mostly have to rely on external temperature data, which isn’t always accurate.

It took a few iterations for the EPFL–University of Graz team to get the design right. The final robotic device looks like a beekeeping frame with an electronics panel across the top. “In the very middle, there is a printed circuit board [PCB], where we have the thermal actuators, the sensors, all the supporting electronics to make all this work,” Barmak says. “We have a microcontroller, which is a processor to orchestrate all the [workings].” The bees didn’t take to the first design, which was just the PCB, coated with a resin and covered with wax. After that, the researchers decided to add a building template frame and, after trying a few different materials, eventually won the colony over with a 1-millimeter-thick laser-cut mesh.

The system’s sensor arrays were even able to detect the impending thermal collapse of a colony. This happens when the temperature falls dangerously low (below 10 ºC for the European honeybee, at which point the bees are unable to beat their wings to generate heat). “We saw that the bees had stopped moving, and then we looked at the thermal data…and realized they were in trouble,” Barmak says. The researchers decided to use the thermal actuators, so far being used to study collective behaviors in the colony, to turn up the heat, and thus, the bees were saved.
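As a rough illustration of the kind of closed loop this implies—read the hive’s temperature array, then drive the thermal actuators before the cluster reaches the collapse threshold—here is a small Python sketch. The thresholds, ramp, and heater interface are assumptions for illustration, not the EPFL/Graz firmware.

```python
# Illustrative thermal-rescue logic (assumed values, not the EPFL/Graz firmware).
COLLAPSE_THRESHOLD_C = 10.0   # below ~10 °C the European honeybee cluster risks collapse
SAFETY_MARGIN_C = 2.0         # start heating this many degrees above the threshold (assumed)

def heater_command(sensor_temps_c: list[float]) -> float:
    """Map the array of in-hive temperature readings to actuator power (0.0-1.0)."""
    coldest = min(sensor_temps_c)
    trigger = COLLAPSE_THRESHOLD_C + SAFETY_MARGIN_C
    if coldest >= trigger:
        return 0.0
    # Ramp heating linearly as the coldest spot approaches the collapse threshold.
    return min(1.0, (trigger - coldest) / SAFETY_MARGIN_C)

# Example: a cold corner of the hive triggers partial heating.
print(heater_command([14.2, 13.8, 11.5]))   # 0.25
```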

Barmak and his team also used thermal stimuli to move the bees around within the colony, something that has not been tried before with winter colonies. They noticed that the bees followed thermal stimuli very precisely, and observed previously unknown behavior patterns, which could be useful to develop other apiculture technologies. They are now preparing their robotic system to study summer colonies, which will be a little more difficult as the bees are far more active then.

The most important thing about this robotic system, Barmak says, is that it allows scientists to study these animals in newer ways and expand on available knowledge. Aside from scientific curiosity, he says, it shows the possibilities of using interactive robotic systems to observe animal colonies, and then use the data to create new e-agriculture devices, sensors, and more for the field.






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

  • Robotics Summit & Expo: 10–11 May 2023, BOSTON
  • ICRA 2023: 29 May–2 June 2023, LONDON
  • RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
  • RSS 2023: 10–14 July 2023, DAEGU, KOREA
  • IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
  • CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
  • Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

This is the Grain Weevil, and it’s designed to keep humans out of grain bins. I love this because it’s an excellent example of how to solve a real, valuable problem uniquely with a relatively simple, focused robot.

[ Grain Weevil ]

As the city of Paris sleeps, Spot is hard at work inspecting some of RATP Group’s 35,000 civil works components. The RATP Group (Autonomous Parisian Transportation Administration) is a French state-owned public transport operator and maintainer for the Greater Paris area. With thousands of civil works to inspect each year, the company has turned to mobile robotics to inspect hard-to-reach and hazardous areas in order to keep employees out of harm’s way.

[ Boston Dynamics ]

Thanks, Renee!

Looks like Agility Robotics and the new Digit had a productive (and popular!) time at ProMat.

[ Agility Robotics ]

I still cannot believe that this makes sense. But it does?

[ Tevel ]

Unitree sells a lidar now, and it’s $330.

[ Unitree ]

We recently had the privilege to host Madeline Gannon (robot whisperer and head of ATONATON) at our HQ in Portland. It’s no surprise that a week in our shop resulted in a game of industrial basketball with our ABB IRB 8700-turned basketball hoop.

[ Loupe ]

Thanks, Madeline!

We demonstrated Stretch, our autonomous case handling robot, automating trailer unloading at Promat 2023. From efficiency to ease of use, hear from our team to learn how Stretch works, what’s new, and what’s coming next for warehouse automation!

[ Boston Dynamics ]

KAIST has developed quadrupedal robot locomotion technology that can climb up and down stairs without the aid of visual or tactile sensors—for disaster situations in which smoke makes it impossible to see—and that moves without falling over bumpy terrain such as tree roots.

[ KAIST ]

Here’s how Pickle’s box-unloading robot has been doing.

[ Pickle Robot ]

Quite possibly the most destructive combat robot ever designed has been revamped and is heading to RoboGames 2023.

Use the code “HardCore” for a discount on your RoboGames tickets!

[ RoboGames ]

Is AI smarter than babies? Depends what you mean by smarter, of course.

[ NSF ]

Someone can do this through a telepresence robot and I can’t even do it in real life, sigh.

[ Sanctuary AI ]

Chen Li, from the Terradynamics Lab at JHU, gives a talk on “The Need for & Feasibility of Alternative Robots to Traverse Sandy & Rocky Extraterrestrial Terrain.”

[ JHU ]




Surveying active nuclear facilities for the spread of alpha and beta contamination is currently performed by human operators. However, a skills gap of qualified workers is emerging and is set to worsen in the near future due to under-recruitment, retirement, and increased demand. This paper presents an autonomous ground vehicle that can survey nuclear facilities for alpha, beta, and gamma radiation and generate radiation heatmaps. New methods for preventing the robot from spreading radioactive contamination using a state machine and radiation costmaps are introduced. This is the first robot that can detect alpha and beta contamination and autonomously re-plan around the contamination without the wheels passing over the contaminated area. Radiation-avoidance functionality is proven experimentally to reduce alpha and beta contamination spread as well as gamma radiation dose to the robot. The robot’s survey area is defined using a custom-designed, graphically controlled area coverage planner. It was concluded that the robot is highly suited to certain monotonous, room-scale radiation surveying tasks and therefore provides the opportunity for financial savings, mitigation of a future skills gap, and provision of radiation surveys that are more granular, accurate, and repeatable than those currently performed by human operators.

Space resource utilisation is opening a new space era. The scientific proof of the presence of water ice at the south pole of the Moon, the recent advances in oxygen extraction from lunar regolith, and its use as a material to build shelters are positioning the Moon, again, at the centre of important space programs. These worldwide programs, led by ARTEMIS, expect robotics to be the disrupting technology enabling humankind’s next giant leap. However, Moon robots require a high level of autonomy to perform lunar exploration tasks more efficiently without being constantly controlled from Earth. Furthermore, having more than one robotic system will increase the resilience and robustness of the global system, improving its success rate as well as providing additional redundancy. This paper introduces the Resilient Exploration and Lunar Mapping System (REALMS), developed with a scalable architecture for semi-autonomous lunar mapping. It leverages Visual Simultaneous Localization and Mapping techniques on multiple rovers to map large lunar environments. Several resilience mechanisms are implemented, such as two-agent redundancy, delay-invariant communications, a multi-master architecture, and different control modes. This study presents the experimental results of REALMS with two robots and its potential to be scaled to a larger number of robots, increasing the map coverage and system redundancy. The system’s performance is verified and validated in a lunar analogue facility and in a larger lunar environment during the European Space Agency (ESA)–European Space Resources Innovation Centre Space Resources Challenge. The results of the different experiments show the efficiency of REALMS and the benefits of using semi-autonomous systems.

One of the main goals of robotics and intelligent agent research is to enable robots and agents to communicate with humans in physically situated settings. Human communication consists of both verbal and non-verbal modes. Recent studies in enabling communication for intelligent agents have focused on verbal modes, i.e., language and speech. However, in a situated setting the non-verbal mode is crucial for an agent to adopt flexible communication strategies. In this work, we focus on learning to generate non-verbal communicative expressions in situated embodied interactive agents. Specifically, we show that an agent can learn pointing gestures in a physically simulated environment through a combination of imitation and reinforcement learning that achieves high motion naturalness and high referential accuracy. We compared our proposed system against several baselines in both subjective and objective evaluations. The subjective evaluation is done in a virtual reality setting where an embodied referential game is played between the user and the agent in a shared 3D space, a setup that fully assesses the communicative capabilities of the generated gestures. The evaluations show that our model achieves a higher level of referential accuracy and motion naturalness compared to a state-of-the-art supervised learning motion synthesis model, showing the promise of our proposed system that combines imitation and reinforcement learning for generating communicative gestures. Additionally, our system is robust in a physically simulated environment and thus has the potential to be applied to robots.



We’ve gotten used to thinking of quadrupedal robots as robotic versions of dogs. And, to be fair, it’s right there in the word “quadrupedal.” But if we can just get past the Latin, there’s absolutely no reason why quadrupedal robots have to restrict themselves to using all four of their limbs as legs all of the time. And in fact, most other quadrupeds are versatile like this: four-legged animals frequently use their front limbs to interact with the world around them for non-locomotion purposes.

Roboticists at CMU and UC Berkeley are training robot dogs to use their legs for manipulation, not just locomotion, demonstrating skills that include climbing walls, pressing buttons, and even kicking a soccer ball.

Training a robot to do both locomotion and manipulation at the same time with the same limbs can be tricky using reinforcement learning techniques, because you can get stuck in local minima while trying to optimize for skills that are very different and (I would guess) sometimes in opposition to each other. So, the researchers split the training into separate manipulation and locomotion policies, and trained each in simulation, although that meant an extra step smooshing those separate skills together in the real world to perform useful tasks.

Successfully performing a combined locomotion and manipulation task requires one high-quality expert demonstration. The robot remembers what commands the human gave during the demonstration, and then creates a behavior tree that it can follow that breaks up the tasks into a bunch of connected locomotion and manipulation sub-tasks that it can perform in order. This also adds robustness to the system, because if the robot fails any sub-task, it can “rewind” its way back through the behavior tree until it gets back to a point of success, and then start over from there.
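As a toy sketch of that “execute sub-tasks in order, rewind on failure” idea, the snippet below steps through demonstration-derived sub-tasks and backs up one step whenever one fails. The sub-task names and success checks are invented for illustration; this is not the CMU/Berkeley code.

```python
# Toy "rewind on failure" executor (illustrative; not the CMU/Berkeley implementation).
from typing import Callable

SubTask = tuple[str, Callable[[], bool]]   # (name, execute-and-report-success)

def run_with_rewind(subtasks: list[SubTask], max_attempts: int = 20) -> bool:
    i, attempts = 0, 0
    while i < len(subtasks) and attempts < max_attempts:
        name, execute = subtasks[i]
        attempts += 1
        if execute():
            print(f"ok: {name}")
            i += 1                 # advance to the next sub-task
        else:
            print(f"failed: {name}; rewinding")
            i = max(0, i - 1)      # back up to the previous point of success and retry
    return i == len(subtasks)

# Example with stubbed sub-tasks: the button press fails once, then succeeds.
flaky = iter([False, True])
demo = [
    ("walk_to_door", lambda: True),
    ("brace_and_press_button", lambda: next(flaky, True)),
    ("walk_through_door", lambda: True),
]
print(run_with_rewind(demo))       # True
```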

This particular robot (a Unitree Go1 with an Intel RealSense for perception) manages to balance itself against a wall to press a wheelchair access button that’s nearly a meter high, and then walk out the open door, which is pretty impressive. More broadly, this is a useful step toward helping non-humanoid robots operate in human-optimized environments, which might be more important than it seems. It’s certainly possible to modify our environments to be friendlier to robots, and we see this in places like hospitals (and some hotels) where robots are able to directly control elevators. This makes it much easier for the robots to get around, but modifying infrastructure is annoying enough that in some cases it’s more practical (if not necessarily simpler) to just build a button-pushing robot instead. There’s perhaps an argument to be made that the best middle ground here is just to build broadly accessible infrastructure in the first place, by making sure that neither robots nor humans have to rely on a specific manipulation technique to operate anything. But until we make that happen, skills like these will be critical for helpful legged robots.

Legs as Manipulator: Pushing Quadrupedal Agility Beyond Locomotion, by Xuxin Cheng, Ashish Kumar, and Deepak Pathak from Carnegie Mellon University and UC Berkeley, will be presented next month at ICRA 2023 in London.




When a snake robot explores a collapsed house as a rescue robot, it needs to move through various obstacles, some of which may be made of soft materials, such as mattresses. In this study, we refer to such a mattress-like environment—one that deforms when force is applied to it—as a soft floor. We focused on the central pattern generator (CPG) network as a controller for the snake robot to propel itself on the soft floor and constructed a CPG network that feeds back contact information between the robot and the floor. A genetic algorithm was used to determine the parameters of the CPG network suitable for the soft floor. To verify the obtained parameters, comparative simulations were conducted using the parameters obtained for the soft and hard floors, and the parameters were confirmed to be appropriate for each environment. By observing the difference in the snake robot’s propulsion depending on the presence or absence of the tactile sensor feedback signal, we confirmed the effectiveness of the tactile sensor feedback considered in the parameter search.
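For readers unfamiliar with CPGs, the sketch below shows the general shape of such a controller: a chain of phase oscillators, one per joint, coupled to their neighbours, with a feedback term that slows an oscillator when its body segment is in contact with the floor. All gains here are illustrative stand-ins for the parameters the study tunes with a genetic algorithm.

```python
# Sketch of a CPG chain with contact feedback (illustrative parameters only).
import numpy as np

N_JOINTS = 8
OMEGA = 2 * np.pi * 1.0          # nominal oscillation frequency, rad/s (assumed)
COUPLING = 4.0                   # neighbour phase-coupling gain (assumed)
PHASE_LAG = 2 * np.pi / N_JOINTS # desired phase offset between adjacent joints
FEEDBACK = 1.5                   # contact-feedback gain (assumed)
AMPLITUDE = np.deg2rad(30)       # joint-angle amplitude

def cpg_step(phases: np.ndarray, contact: np.ndarray, dt: float) -> np.ndarray:
    """Advance oscillator phases by dt; contact[i] is 1.0 if segment i touches the floor."""
    dphi = np.full(N_JOINTS, OMEGA)
    for i in range(N_JOINTS):
        if i > 0:
            dphi[i] += COUPLING * np.sin(phases[i - 1] - phases[i] - PHASE_LAG)
        if i < N_JOINTS - 1:
            dphi[i] += COUPLING * np.sin(phases[i + 1] - phases[i] + PHASE_LAG)
        dphi[i] -= FEEDBACK * contact[i]   # contact slows the local oscillator
    return phases + dphi * dt

def joint_angles(phases: np.ndarray) -> np.ndarray:
    return AMPLITUDE * np.sin(phases)      # commanded joint angles for the snake's servos

# Example: run for one simulated second with the two tail segments in contact.
phases = np.linspace(0, 2 * np.pi, N_JOINTS, endpoint=False)
contact = np.array([0.0] * 6 + [1.0, 1.0])
for _ in range(100):
    phases = cpg_step(phases, contact, dt=0.01)
print(np.round(joint_angles(phases), 3))
```

A genetic algorithm, as in the study, would then search over parameters like COUPLING, PHASE_LAG, and FEEDBACK to maximize forward progress on the simulated soft floor.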

Electrohydrodynamic (EHD) pumps are a promising driving source for various fluid-driven systems owing to features such as simple structure and silent operation. The performance of EHD pumps depends on the properties of the working fluid, such as conductivity, viscosity, and permittivity. This implies that tuning these parameters in a working fluid can enhance EHD performance. This study reports a method to modify the properties of a liquid for EHD pumps by mixing in an additive. Specifically, dibutyl adipate (DBA) and polyvinyl chloride (PVC) are employed as the working fluid and the additive, respectively. The results show that when the concentration of PVC is 0.2%, the flow rate and pressure at an applied voltage of 8 kV reach their highest values of 7.85 μL/s and 1.63 kPa, respectively. These values correspond to improvements of 109% and 40% in the flow rate and pressure, respectively, compared to pure DBA (PVC 0%). When the voltage is 10 kV, a flow rate of 10.95 μL/s and a pressure of 2.07 kPa are observed for DBA with a PVC concentration of 0.2%. These values are more than five times higher than those observed for FC40 at the same voltage (2.02 μL/s and 0.32 kPa). The results also suggest that optimal conductivity and viscosity values exist for maximizing the EHD performance of a liquid. This demonstrates the validity of the proposed method for realizing high-performance EHD pumps by using additives in the working fluid.

In the last decades, Simultaneous Localization and Mapping (SLAM) has proved to be a fundamental topic in the field of robotics, due to its many applications, ranging from autonomous driving to 3D reconstruction. Many systems have been proposed in the literature, exploiting a heterogeneous variety of sensors. State-of-the-art methods build their own map from scratch, using only data coming from the equipment of the robot and not exploiting possible existing reconstructions of the environment. Moreover, temporary loss of data proves to be a challenge for SLAM systems, as it demands efficient re-localization to continue the localization process. In this paper, we present a SLAM system that exploits additional information coming from mapping services like OpenStreetMap, hence the name OSM-SLAM, to address these issues. We extend an existing LiDAR-based graph SLAM system, ART-SLAM, making it able to integrate the 2D geometry of buildings in the trajectory estimation process by matching a prior OpenStreetMap map with a single LiDAR scan. Each estimated pose of the robot is then associated with all buildings surrounding it. This association not only improves localization accuracy but also allows possible mistakes in the prior map to be adjusted. The pose estimates coming from SLAM are then jointly optimized with the constraints associated with the various OSM buildings, which can assume one of the following types: buildings are always fixed (Prior SLAM); buildings surrounding a robot are movable in chunks, for every scan (Rigid SLAM); and every single building is free to move independently of the others (Non-rigid SLAM). Lastly, OSM maps can also be used to re-localize the robot when sensor data is lost. We compare the accuracy of the proposed system with existing methods for LiDAR-based SLAM, including the baseline, also providing a visual inspection of the results. The comparison is made by evaluating the estimated trajectory displacement using the KITTI odometry dataset. Moreover, the experimental campaign, along with an ablation study on the re-localization capabilities of the proposed system and its accuracy in loop-detection-denied scenarios, allows a discussion of how the quality of prior maps influences the SLAM procedure, which may lead to worse estimates than the baseline.



This is a sponsored article brought to you by Elephant Robotics.

In recent years, interest in using robots in education has seen massive growth. Projects that involve robotics, artificial intelligence, speech recognition, and related technologies can help develop students’ analytical, creative, and practical skills. However, a major challenge has been the robots themselves: They are typically big, heavy, and costly. For robots to become widely used in education, they need to be smaller, easier to set up and use, and, more important, affordable to educators and students.

That’s the goal Elephant Robotics aims to achieve with its line of lightweight, smart, and capable robots. The company has launched several desktop collaborative robots over the past few years, including the myCobot, mechArm, and myPalletizer. To help users achieve more applications in education, Elephant Robotics has also launched the AI Robot Kit, a robotics kit that integrates multiple modules for vision, positioning, grabbing, and automatic sorting. This year, the company is unveiling completely improved and upgraded products to make robotics even more accessible in education.

Upgraded Robotic Arms and AI Kits

Schools in different countries and regions have been using Elephant Robotics’ robotic arms and AI Kits as educational tools in recent years. The products’ portability, ease of use, and cost-effectiveness have helped schools integrate robotics as part of their programs and courses. The performance of the products and the wide range of built-in software and features help students learn better about robotics and programming. Using the robotic arms and AI Kit, students can learn about artificial intelligence and applications such as robot vision, object recognition, manipulation, and more.


To help more students experience robots and start learning about robotics and programming at a young age, Elephant Robotics has upgraded its AI Kit to make it more powerful and even easier to use.


Elephant Robotics has upgraded the AI Kit comprehensively, improving the quality of the hardware while optimizing the built-in algorithms and software to make the product more flexible and scalable. Video: Elephant Robotics


In addition to the upgraded AI Kit, Elephant Robotics this year released new robotics and AI education material, starting with a book called “Machine Vision and Robotics.” The book explains the topics of robotic arms and vision sensors, and includes five algorithm courses, plus tutorials on programming languages. Users learn about machine vision in a practical way by experimenting with the AI Kit through a series of examples and applications.

Elephant Robotics also provides users with visualization software and customization options for selecting the built-in algorithms. The software is very friendly to new robotics users who have yet to gain programming experience. AI Kit 2023 uses a camera with higher accuracy and light adjustment to make the robotic arm more efficient at object recognition. The suction pump installed at the end of the robotic arm has also been improved; it now offers higher adaptability and stability when working with different robotic arms.

Elephant Robotics has also upgraded the myCobot product into a new version: myCobot 2023, which features more user-friendly software. myCobot 2023 allows users to use their mobile phone and gamepad to control the robotic arm and accessories, helping to learn and develop remote wireless operations. To improve safety, Elephant Robotics has added algorithms to the robotic arm to prevent it from colliding with other objects while in operation.

ultraArm P340: A New Robot for Education

In 2023, Elephant Robotics is also introducing the ultraArm P340, a high-performance robotic arm designed to meet educational needs. The ultraArm uses metal construction to increase its stiffness and payload capabilities. To provide further stability, the ultraArm P340 also features stepper motors, which improve its speed and provide repeatable positioning accuracy of ±0.1 mm.

More AI and Robot Kits for Makers

Elephant Robotics has still more products to announce this year: The company has launched five kits with ultraArm that provide additional applications for the education field. The Vision educational kits include the Vision & Picking Kit, Vision & Conveyor Belt Kit, and Vision & Slide Rail Kit. These kits help users learn about machine vision and experiment with industrial-like applications such as dynamic and static intelligent recognition and grasping.

There are two kits in the DIY series: the Drawing Kit and Laser Engraving Kit. This series helps makers achieve high quality reproduction of drawings and laser engraving applications, developing users’ creativity and imagination. To help users quickly achieve DIY production, Elephant Robotics created software called Elephant Luban. It is a platform that generates the G-Code track and provides primary cases for users. Users can select multiple functions, such as drawing and laser engraving, with just a few clicks.

Combined with different robotics kits, the ultraArm is an excellent and affordable option for many educational applications. These kits offer students the possibility of learning and experimenting with advanced, complex robotics and AI tools that are fun and easy to use.

And this has long been one of Elephant Robotics’s main goals: creating innovative products that are specifically designed to provide students with hands-on experience and promote STEM education. The company is committed to pursuing that goal by continuing to research and contribute to the development of even better educational robots in the future.






This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

A new air-ground vehicle, appropriately named Skywalker, is able to seamlessly transition between ground and air modes, outperforming competing air-ground vehicles in several key performance measures. Skywalker was put to the test in a series of experiments, which were described in a study published 14 March in IEEE Robotics and Automation Letters.

Airborne vehicles are undeniably convenient, offering great mobility, but they require significantly more energy than ground vehicles. Meanwhile, ground vehicles are typically slower and may encounter physical obstacles and barriers. Skywalker, the researchers say, offers the best of both worlds.

“We create this air-ground vehicle to take the complementary advantages of ground vehicles’ high power efficiency, while maintaining multicopters’ great mobility,” explains Fei Gao, an associate professor at Zhejiang University who was involved in the study. He notes that these features will help Skywalker work in large-scale environments and complete long-distance deliveries.

Skywalker is essentially a quadrotor copter consisting of four brushless motors, a Hobbywing, and propellers. For traveling on the ground, it has a single omnidirectional wheel that allows it to turn freely.

“Skywalker still needs to keep the propellers rotating to keep balance and tilt itself to move around. However, the rotating speed [of the propellers] can be significantly reduced compared with aerial locomotion, thus saving much energy,” explains Gao.

Gao’s team also developed a unified controller designed for both aerial and ground locomotion, so that Skywalker can conduct hybrid air-ground locomotion freely and at high speeds.
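The energy argument comes down to how much of the vehicle’s weight the propellers must support in each mode. The sketch below is a back-of-the-envelope illustration of a mode-aware thrust command; the mass, weight-support fraction, and gain are assumptions for illustration, not values from the Skywalker controller.

```python
# Back-of-the-envelope thrust budget for a hybrid air-ground quadrotor (assumed values).
MASS_KG = 1.5                 # assumed vehicle mass
G = 9.81                      # gravitational acceleration, m/s^2

def collective_thrust(mode: str, climb_cmd: float = 0.0) -> float:
    """Return total propeller thrust (N) for 'air' or 'ground' mode."""
    if mode == "air":
        # In flight the propellers carry the full weight, plus a simple climb term.
        return MASS_KG * G * (1.0 + 0.2 * climb_cmd)
    # On the ground the passive wheel carries most of the weight; the propellers
    # only keep the body balanced and tilt it to drive, so required thrust drops.
    GROUND_SUPPORT_FRACTION = 0.25   # assumed
    return MASS_KG * G * GROUND_SUPPORT_FRACTION

print(round(collective_thrust("air"), 1))      # ~14.7 N to hover
print(round(collective_thrust("ground"), 1))   # ~3.7 N to balance while rolling
```

Because rotor power grows faster than linearly with thrust, shifting most of the weight onto the wheel is what makes ground mode so much cheaper energetically, consistent with the large savings the researchers report.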

In their study, the researchers conducted four experiments to test Skywalker’s ground-trajectory tracking ability, hybrid-trajectory tracking ability, rotational ability (free yaw execution), and power efficiency.

The results show that Skywalker is able to reach a maximum velocity of 5 meters per second and can turn on a dime—thanks to its omnidirectional wheel and propellers. Whereas other air-ground vehicles can take from one to 20 seconds to transition between aerial and ground modes, Skywalker can complete the task seamlessly, the researchers say.

The team also assessed Skywalker’s energy efficiency. The researchers found that it uses 75 percent less energy by traversing the ground—while minimally using its propellers to guide and balance itself—compared to what it uses while flying.

“The uniqueness of Skywalker mainly lies in the simple mechanism, impressive trajectory-tracking ability, and free yaw execution ability,” says Gao.

Meet Skywalker: a vehicle that both flies and drives. [YouTube]


Gao says his team is interested in commercializing Skywalker, given its broad range of potential applications—for example, in photography, exploration, rescue, surveying, and mapping. Because of its endurance and ability to carry loads, Skywalker could be fitted with more batteries, onboard computers, and sensors to further broaden its applications, he says.

But while the vehicle is theoretically capable of going over difficult terrain, these added capabilities still must be put to the test.

“In this work, we make the assumption that the vehicle moves on flat ground, which limits its application in wild, complicated environments,” Gao says. “In the future, we aim to precisely model the dynamics of Skywalker on uneven ground and develop autonomous planning algorithms for outdoor application.”
