This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

A new air-ground vehicle, appropriately named Skywalker, is able to seamlessly transition between ground and air modes, outperforming competing air-ground vehicles in several key performance measures. Skywalker was put to the test in a series of experiments, which were described in a study published 14 March in IEEE Robotics and Automation Letters.

Airborne vehicles are undeniably convenient, offering great mobility, but they require significantly more energy than ground vehicles. Meanwhile, ground vehicles are typically slower and may encounter physical obstacles and barriers. Skywalker, the researchers say, offers the best of both worlds.

“We create this air-ground vehicle to take the complementary advantages of ground vehicles’ high power efficiency, while maintaining multicopters’ great mobility,” explains Fei Gao, an associate professor at Zhejiang University who was involved in the study. He notes that these features will help Skywalker work in large-scale environments and complete long-distance deliveries.

Skywalker is essentially a quadcopter consisting of four brushless motors with Hobbywing electronic speed controllers, plus propellers. For traveling on the ground, it has a single omnidirectional wheel that allows it to turn freely.

“Skywalker still needs to keep the propellers rotating to keep balance and tilt itself to move around. However, the rotating speed [of the propellers] can be significantly reduced compared with aerial locomotion, thus saving much energy,” explains Gao.

Gao’s team also developed a unified controller designed for both aerial and ground locomotion, so that Skywalker can conduct hybrid air-ground locomotion freely and at high speeds.

In their study, the researchers conducted four experiments to test Skywalker’s ground-trajectory tracking ability, hybrid-trajectory tracking ability, rotational ability (free yaw execution), and power efficiency.

The results show that Skywalker is able to reach a maximum velocity of 5 meters per second and can turn on a dime—thanks to its omnidirectional wheel and propellers. Whereas other air-ground vehicles can take from one to 20 seconds to transition between aerial and ground modes, Skywalker can complete the task seamlessly, the researchers say.

The team also assessed Skywalker’s energy efficiency. The researchers found that it uses 75 percent less energy traversing the ground—while minimally using its propellers to guide and balance itself—than it does while flying.
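To see what that efficiency figure implies in practice, here is a minimal back-of-the-envelope sketch in Python. The battery capacity and flight power numbers are illustrative assumptions, not values reported in the paper; only the 75 percent saving and the 5-meter-per-second top speed come from the study.

```python
# Rough endurance comparison for a hybrid air-ground vehicle.
# BATTERY_WH and P_FLIGHT_W are assumed values for illustration only;
# the 75 percent ground-mode saving is the figure reported in the study.

BATTERY_WH = 50.0                 # assumed usable battery energy (Wh)
P_FLIGHT_W = 200.0                # assumed average power in flight (W)
P_GROUND_W = P_FLIGHT_W * 0.25    # ground mode: 75 percent less energy
SPEED_MPS = 5.0                   # reported maximum velocity (m/s)

def endurance_hours(energy_wh, power_w):
    """Hours of operation at a constant power draw."""
    return energy_wh / power_w

for mode, power in [("air", P_FLIGHT_W), ("ground", P_GROUND_W)]:
    hours = endurance_hours(BATTERY_WH, power)
    km = hours * SPEED_MPS * 3600 / 1000
    print(f"{mode:>6}: {hours:.2f} h endurance, {km:.1f} km at top speed")

# With these assumptions, ground mode quadruples endurance and range.
```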

“The uniqueness of Skywalker mainly lies in the simple mechanism, impressive trajectory-tracking ability, and free yaw execution ability,” says Gao.

Meet Skywalker: a vehicle that both flies and drives www.youtube.com


Gao says his team is interested in commercializing Skywalker, given its broad range of potential applications—for example, in photography, exploration, rescue, surveying, and mapping. Because of its endurance and ability to carry loads, Skywalker could be fitted with more batteries, onboard computers, and sensors to further broaden its applications, he says.

But while the vehicle is theoretically capable of going over difficult terrain, these added capabilities still must be put to the test.

“In this work, we make the assumption that the vehicle moves on flat ground, which limits its application in wild, complicated environments,” Gao says. “In the future, we aim to precisely model the dynamics of Skywalker on uneven ground and develop autonomous planning algorithms for outdoor application.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

GITAI conducted a demonstration of lunar base construction using two GITAI inchworm-type robotic arms and two GITAI Lunar Robotic Rovers in a simulated lunar environment and successfully completed all planned tasks. The GITAI robots have successfully passed various tests corresponding to Level 4 of NASA’s Technology Readiness Levels (TRL) in a simulated lunar environment in the desert.

[ GITAI ]

Thanks, Sho!

This is 30 minutes of Agility Robotics’ Digit being productive at ProMat. The fact that it gets boring and repetitive to watch reinforces how much this process needs robots, and is also remarkable because bipedal robots can now be seen as just another tool.

[ Agility Robotics ]

We are now one step closer to mimicking Baymax’s skin with softness and whole-body sensing, which may benefit social or task-based touch and interaction over large areas. We constructed a robot arm with a soft skin and vision-based tactile sensing. We also showcase this method for our large-scale tactile sensor (TacLink) by demonstrating its use in two scenarios: whole-arm nonprehensile manipulation, and intuitive motion guidance using a custom-built tactile robot arm integrated with TacLink.

[ Paper ]

Thanks, Van!

Meet Fifi, a software engineering team lead at Boston Dynamics. Hear her perspective on how she got into engineering, why she wouldn’t trust Stretch with her pet cactus, and much more—as she answers questions from kids and other curious minds.

[ Boston Dynamics ]

Take a look at this seven-ingredient printed dessert and ask yourself whether conventional cooking appliances such as ovens, stovetops, and microwaves may one day be replaced by cooking devices that incorporate three-dimensional (3D) printers, lasers, or other software-driven processes.

[ Paper ]

What if you just loaded the robots onto the truck?!

Mind. Blown.

[ Slip Robotics ]

Uh.

As weird as this looks, it’s designed to reduce the burden on caregivers by automating tooth brushing.

[ RobotStart ]

Relay is still getting important work done in hospitals.

[ Relay Robotics ]

Real cars are expensive, simulation is fake, but MIT’s MiniCity is just the right compromise for developing safer autonomous vehicles.

[ Paper ]

Robot-to-human mechanical tool handover is a common task in a human-robot collaborative assembly where humans are performing complex, high-value tasks and robots are performing supporting tasks. We explore an approach to ensure the safe handover of mechanical tools to humans. Our experimental results indicate that our system can safely and effectively hand off many different types of tools. We have tested the system’s ability to successfully handle contingencies that may occur during the handover process.

[ USC Viterbi ]

Autonomous vehicle (AV) uncertainty is at an all-time high. Michigan Engineering researchers aim to change that. A team of researchers used artificial intelligence to train virtual vehicles that can challenge AVs in a virtual or augmented reality testing environment. The virtual cars were only fed safety-critical training data, making them better equipped to challenge AVs with more of those rare events in a shorter amount of time.

[ Michigan ]

All of the sea lamprey detection problems you never knew you had are now solved.

[ Paper ]

OTTO Motors is thrilled to announce the official launch of our newest autonomous mobile robot (AMR)—OTTO 600. We have also released a major update that makes industry-leading strides in software development. With this, the industry’s most comprehensive AMR fleet is unveiled, enabling manufacturers to automate any material handling job up to 4,200 lb.

[ OTTO Motors ]

From falling boxes to discarded mattresses, we prepare the Waymo Driver to identify and navigate around all kinds of debris on public roads. See how we use debris tests at our closed-course facilities to prepare our Waymo Driver for any foreign objects and debris it may encounter on public roads.

[ Waymo ]

Over 500 students participated in the 2022 Raytheon Technologies UK quadcopter challenge ... covering all the British Isles.

[ Raytheon ]



Improving the mobility of robots is an important goal for many real-world applications and implementing an animal-like spine structure in a quadruped robot is a promising approach to achieving high-speed running. This paper proposes a feline-like multi-joint spine adopting a one-degree-of-freedom closed-loop linkage for a quadruped robot to realize high-speed running. We theoretically prove that the proposed spine structure can realize 1.5 times the horizontal range of foot motion compared to a spine structure with a single joint. Experimental results demonstrate that a robot with the proposed spine structure achieves 1.4 times the horizontal range of motion and 1.9 times the speed of a robot with a single-joint spine structure.

This paper presents a cooperative, multi-robot solution for searching, excavating, and transporting mineral resources on the Moon. Our work was developed in the context of the Space Robotics Challenge Phase 2 (SRCP2), which was part of the NASA Centennial Challenges and was motivated by the current NASA Artemis program, a flagship initiative that intends to establish a long-term human presence on the Moon. In the SRCP2 a group of simulated mobile robots was tasked with reporting volatile locations within a realistic lunar simulation environment, and excavating and transporting these resources to target locations in such an environment. In this paper, we describe our solution to the SRCP2 competition that includes our strategies for rover mobility hazard estimation (e.g. slippage level, stuck status), immobility recovery, rover-to-rover and rover-to-infrastructure docking, rover coordination and cooperation, and cooperative task planning and autonomy. Our solution was able to successfully complete all tasks required by the challenge, granting our team sixth place among all participants of the challenge. Our results demonstrate the potential of using autonomous robots for in-situ resource utilization (ISRU) on the Moon, and highlight the effectiveness of realistic simulation environments for testing and validating robot autonomy and coordination algorithms. The successful completion of the SRCP2 challenge using our solution demonstrates the potential of cooperative, multi-robot systems for resource utilization on the Moon.

Advancements in the research on so-called “synthetic (artificial) cells” have been mainly characterized by an important acceleration in all sorts of experimental approaches, providing a growing amount of knowledge and techniques that will shape future successful developments. Synthetic cell technology, indeed, shows potential in driving a revolution in science and technology. On the other hand, theoretical and epistemological investigations related to what synthetic cells “are,” how they behave, and what their role is in generating knowledge have not received sufficient attention. Open questions about these less explored subjects range from the analysis of the organizational theories applied to synthetic cells to the study of the “relevance” of synthetic cells as scientific tools to investigate life and cognition; and from the recognition and the cultural reappraisal of cybernetic inheritance in synthetic biology to the need for developing concepts on synthetic cells and to the exploration, in a novel perspective, of information theories, complexity, and artificial intelligence applied in this novel field. In these contributions, we will briefly sketch some crucial aspects related to the aforementioned issues, based on our ongoing studies. An important take-home message will result: together with their impactful experimental results and potential applications, synthetic cells can play a major role in the exploration of theoretical questions as well.

Introduction: The RobHand (Robot for Hand Rehabilitation) is a robotic neuromotor rehabilitation exoskeleton that assists in performing flexion and extension movements of the fingers. The present case study assesses changes in manual function and hand muscle strength of four selected stroke patients after completion of an established training program. In addition, safety and user satisfaction are also evaluated.

Methods: The training program consisted of 16 sessions; two 60-minute training sessions per week for eight consecutive weeks. During each session, patients moved through six consecutive rehabilitation stages using the RobHand. Manual function assessments were applied before and after the training program and safety tests were carried out after each session. A user evaluation questionnaire was filled out after each patient completed the program.

Results: The safety test showed the absence of significant adverse events, such as skin lesions or fatigue. An average score of 4 out of 5 was obtained on the Quebec User Evaluation of Satisfaction with Assistive Technology 2.0 Scale. Users were very satisfied with the weight, comfort, and quality of professional services. A Kruskal-Wallis test revealed no statistically significant changes in the manual function tests between the beginning and the end of the training program.

Discussion: It can be concluded that the RobHand is a safe rehabilitation technology and users were satisfied with the system. No statistically significant differences in manual function were found. This could be due to the high influence of the stroke stage on motor recovery since the study was performed with chronic patients. Hence, future studies should evaluate the rehabilitation effectiveness of the repetitive use of the RobHand exoskeleton on subacute patients.

Clinical Trial Registration: https://clinicaltrials.gov/ct2/show/NCT05598892?id=NCT05598892&draw=2&rank=1, identifier NCT05598892.



Metal detecting can be a fun hobby, or it can be a task to be completed in deadly earnest—if the buried treasure you’re searching for includes landmines and explosive remnants of war. This is an enormous, dangerous problem: Something like 12,000 square kilometers worldwide are essentially useless and uninhabitable because of the threat of buried explosives, and thousands and thousands of people are injured or killed every year.

While there are many different ways of detecting mines and explosives, none of them are particularly quick or easy. For obvious reasons, sending a human out into a minefield with a metal detector is not the safest way of doing things. So, instead, people send anything else that they possibly can, from machines that can smash through minefields with brute force to well-trained rats that take a more passive approach by sniffing out explosive chemicals.

Because the majority of mines are triggered by pressure or direct proximity, a drone seems like it would be the ideal way of detecting them non-explosively. However, unless you’re only detecting over a perfectly flat surface (and perhaps not even then) your detector won’t be positioned ideally most of the time, and you might miss something, which is not a viable option for mine detection.

But now a novel combination of a metal detector and a drone with five degrees of freedom is under development at the Autonomous Systems Lab at ETH Zurich. It may provide a viable solution to remote landmine detection, by using careful sensing and localization along with some twisting motors to keep the detector reliably close to the ground.

The really tricky part of this whole thing is making sure that the metal detector stays at the correct orientation relative to the ground surface so there’s no dip in its effectiveness. With a conventional drone, this wouldn’t work at all, because every time the drone moves in any direction but up or down, it has to tilt, which is going to also tilt anything that’s attached to it. Unless you want to mount your metal detector on some kind of (likely complicated and heavy) gimbal system, you need a drone that can translate its position without tilting, and happily, such a drone not only exists but is commercially available.

The drone used in this research is made by a company called Voliro, and it’s a tricopter that uses rotating thruster nacelles that move independently of the body of the drone. It may not shock you to learn that Voliro (which has, in the past, made some really weird flying robots) is a startup with its roots in the Autonomous Systems Lab at ETH Zurich, the same place where the mine-detecting drone research is taking place.

So, now that you have a drone that is theoretically capable of making your metal detector work, you need to design the control system that makes it work in practice. The system needs to be able to pilot the drone across a 3D surface it has never seen before and which might include obstacles, all while prioritizing the alignment of the detector. It combines GPS with inertial measurements and a drone-mounted lidar for absolute position and state estimation, and then autonomously plots and executes a “boustrophedon coverage path” across an area of interest. “Boustrophedon,” which is not a word that I knew existed until just this minute, refers to something (usually writing) in which alternate lines are reversed (and mirrored). So, right to left, and then left to right.
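To make the pattern concrete, here is a minimal Python sketch of how waypoints for a boustrophedon sweep over a simple rectangle can be generated. This is my own illustration of the pattern, not the authors’ planner, which additionally handles obstacles and sloped terrain:

```python
# Waypoints for a boustrophedon ("ox-turning") sweep of a rectangle:
# alternating left-to-right and right-to-left passes, spaced by the
# detector's effective swath width. Illustration only; the real planner
# must also handle obstacles and non-flat terrain.

def boustrophedon_path(width_m, height_m, lane_spacing_m):
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height_m:
        if left_to_right:
            waypoints += [(0.0, y), (width_m, y)]
        else:
            waypoints += [(width_m, y), (0.0, y)]
        left_to_right = not left_to_right  # reverse direction each pass
        y += lane_spacing_m
    return waypoints

# Sweep a 10 m x 4 m area with 1 m between passes:
for waypoint in boustrophedon_path(10.0, 4.0, 1.0):
    print(waypoint)
```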

Testing with metallic (non-explosive) targets showed that this system does very well, even in areas with obstacles, overhead occlusion, and significant slope. Whether it’s ultimately field-useful or not will require some further investigation, but because the platform itself is commercial off-the-shelf hardware, there’s a bit more room for optimism than there otherwise might be.

A research paper, “Resilient Terrain Navigation with a 5 DOF Metal Detector Drone” by Patrick Pfreundschuh, Rik Bähnemann, Tim Kazik, Thomas Mantel, Roland Siegwart, and Olov Andersson from the Autonomous Systems Lab at ETH Zurich, will be presented in May at ICRA 2023 in London.



Middlewares are standard tools for modern software development in many areas, especially in robotics. Although such tools have become common for high-level applications, there is little support for real-time systems and low-level control. Therefore, µRT provides a lightweight solution for resource-constrained embedded systems, such as microcontrollers. It features publish–subscribe communication and remote procedure calls (RPCs) and can validate timing constraints at runtime. In contrast to other middlewares, µRT does not rely on specific transports for communication but can be used with any technology. Empirical results demonstrate its small memory footprint, consistent temporal behavior, and predominantly linear scaling. In a user study, the usability of µRT was found to be competitive with state-of-the-art solutions.
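As a rough illustration of the two mechanisms named in that abstract, the sketch below implements a toy publish-subscribe hub with a runtime deadline check in Python. It shows the general pattern only; µRT’s actual API is C for microcontrollers, and none of the names below come from its documentation:

```python
# Toy publish-subscribe hub with a runtime timing check, illustrating
# the pattern only; these names are invented and are not the muRT API.
import time

class Hub:
    def __init__(self):
        self.subscribers = {}  # topic -> list of (callback, deadline_s)

    def subscribe(self, topic, callback, deadline_s=None):
        self.subscribers.setdefault(topic, []).append((callback, deadline_s))

    def publish(self, topic, payload):
        start = time.monotonic()
        for callback, deadline_s in self.subscribers.get(topic, []):
            callback(payload)
            elapsed = time.monotonic() - start
            if deadline_s is not None and elapsed > deadline_s:
                print(f"timing violation on '{topic}': {elapsed * 1e3:.3f} ms")

hub = Hub()
hub.subscribe("imu", lambda msg: None, deadline_s=0.001)  # 1 ms constraint
hub.publish("imu", {"gyro": (0.0, 0.1, 0.0)})
```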



This morning at the ProMat conference in Chicago, Agility Robotics is introducing the latest iteration of Digit, its bipedal multipurpose robot designed for near-term commercial success in warehouse and logistics operations. This version of Digit adds a head (for human-robot interaction) along with manipulators intended for the very first task that Digit will be performing, one that Agility hopes will be its entry point to a sustainable and profitable business bringing bipedal robots into the workplace.

So that’s a bit of background, and if you want more, you should absolutely read the article that Agility CTO and cofounder Jonathan Hurst wrote for us in 2019 talking about the origins of this bipedal (not humanoid, mind you) robot. And now that you’ve finished reading that, here’s a better look at the newest, fanciest version of Digit:

The most visually apparent change here is of course Digit’s head, which either makes the robot look much more normal or a little strange depending on how much success you’ve had imagining the neck-mounted lidar on the previous version as a head. The design of Digit’s head is carefully done—Digit is (again) a biped rather than a humanoid, in the sense that the head is not really intended to evoke a humanlike head, which is why it’s decidedly sideways in a way that human heads generally aren’t. But at the same time, the purpose of the head is to provide a human-robot interaction (HRI) focal point so that humans can naturally understand what Digit is doing. There’s still work to be done here; we’re told that this isn’t the final version, but it’s at the point where Agility can start working with customers to figure out what Digit needs to be using its head for in practice.

Digit’s hands are designed primarily for moving totes. Source: Agility Robotics

Digit’s new hands are designed to do one thing: move totes, which are the plastic bins that control the flow of goods in a warehouse. They’re not especially humanlike, and they’re not fancy, but they’re exactly what Digit needs to do the job that it needs to do. This is that job:

Yup, that’s it: moving totes from some shelves to a conveyor belt (and eventually, putting totes back on those shelves). It’s not fancy or complicated and for a human, it’s mind-numbingly simple. It’s basically an automated process, except in a lot of warehouses, humans are doing the work that robots like Digit could be doing instead. Or, in many cases, humans aren’t doing this work, because nobody actually wants these jobs and companies are having a lot of trouble filling these positions anyway.

For a robot, a task like this is not easy at all, especially when you throw legs into the mix. But you can see why the legs are necessary: they give Digit the same workspace as a human within approximately the same footprint as a human, which is a requirement if the goal is to take over from humans without requiring time-consuming and costly infrastructure changes. This gives Digit a lot of potential, as Agility points out in today’s press release:

Digit is multipurpose, so it can execute a variety of tasks and adapt to many different workflows; a fleet of Digits will be able to switch between applications depending on current warehouse needs and seasonal shifts. Because Digit is also human-centric, meaning it is the size and shape of a human and is built to work in spaces designed for people, it is easy to deploy into existing warehouse operations and as-built infrastructure without costly retrofitting.

We should point out that while Digit is multipurpose in the sense that it can execute a variety of tasks, at the moment, it’s just doing this one thing. And while this one thing certainly has value, the application is not yet ready for deployment, since there’s a big gap between being able to do a task most of the time (which is where Digit is now) and being able to do a task robustly enough that someone will pay you for it (which is where Digit needs to get to). Agility has some real work to do, but the company is already launching a partner program for Digit’s first commercial customers. And that’s the other thing that has to happen here: At some point Agility has to make a whole bunch of robots, which is a huge challenge by itself. Rather than building a couple of robots at a time for friendly academics, Agility will need to build and deliver and support tens and eventually hundreds or thousands or billions of Digit units. No problem!

Turning a robot from a research project into a platform that can make money by doing useful work has never been easy. And doing this with a robot that’s bipedal and is trying to do the same tasks as human workers has never been done before. It’s increasingly obvious that someone will make it happen at some point, but it’s hard to tell exactly when—if it’s anything like autonomous cars, it’s going to take way, way longer than it seems like it should. But with its partner program and a commitment to start manufacturing robots at scale soon, Agility is imposing an aggressive timeline on itself, with a plan to ship robots to its partners in early 2024, followed by general availability the following year.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

Inspired by the hardiness of bumblebees, MIT researchers have developed repair techniques that enable a bug-sized aerial robot to sustain severe damage to the actuators, or artificial muscles, that power its wings—but to still fly effectively.

[ MIT ]

This robot gripper is called DragonClaw, and do you really need to know anything else?

“Alas, DragonClaw wins again!”

[ AMTL ]

Here’s a good argument for having legs on a robot:

And here’s a less-good argument for having legs on a robot, but still, impressive!

[ ANYbotics ]

Always nice to see drones getting real work done! Also, when you offer your drone up for powerline inspections and promise that it won’t crash into anything, that’s confidence.

[ Skydio ]

Voxel robots have been extensively simulated because they’re easy to simulate, but not extensively built because they’re hard to build. But here are some that actually work.

[ Paper ]

Thanks, Bram!

Reinforcement learning (RL) has become a promising approach to developing controllers for quadrupedal robots. We explore an alternative to the position-based RL paradigm, by introducing a torque-based RL framework, where an RL policy directly predicts joint torques at a high frequency, thus circumventing the use of a PD controller. The proposed learning torque control framework is validated with extensive experiments, in which a quadruped is capable of traversing various terrain and resisting external disturbances while following user-specified commands.

[ Berkeley ]
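The difference between the two paradigms described in the abstract above is easy to state in code. In the rough Python sketch below (my own simplification with assumed gains, not the authors’ implementation), the conventional policy outputs joint-position targets that a PD loop converts to torques, while the torque-based policy outputs torques directly:

```python
import numpy as np

KP, KD = 40.0, 1.0  # assumed PD gains, not values from the paper

def position_based_step(policy, obs, q, q_dot):
    """Conventional paradigm: the policy (often run at ~50 Hz) outputs
    joint-position targets; a fast PD loop converts them to torques."""
    q_target = policy(obs)
    return KP * (q_target - q) - KD * q_dot

def torque_based_step(policy, obs):
    """Paradigm explored in the paper: a high-frequency policy predicts
    joint torques directly, with no PD controller in between."""
    return policy(obs)

# Example with a 12-joint quadruped and a dummy policy:
obs = np.zeros(48)
policy = lambda o: np.zeros(12)
tau = position_based_step(policy, obs, q=np.zeros(12), q_dot=np.zeros(12))
```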

In this work we show how bio-inspired, 3D-printed snakeskins enhance the friction anisotropy and thus the slithering locomotion of a snake robot. Experiments were conducted with a soft pneumatic snake robot in various indoor and outdoor settings.

[ Paper ]

For bipedal humanoid robots to successfully operate in the real world, they must be competent at simultaneously executing multiple motion tasks while reacting to unforeseen external disturbances in real-time. We propose Kinodynamic Fabrics as an approach for the specification, solution and simultaneous execution of multiple motion tasks in real-time while being reactive to dynamism in the environment.

[ Michigan Robotics ]

The RPD 35 from Built Robotics is the world’s first autonomous piling system. It combines four steps—layout, pile distribution, pile driving, and as-builts—into one package. With the RPD 35, a two-person crew can install piles more productively than with traditional methods.

[ Built Robotics ]

This work contributes a novel and modularized learning-based method for aerial robots navigating cluttered environments containing hard-to-perceive, thin obstacles without assuming access to a map or the full pose estimation of the robot.

[ ARL ]

Thanks, Kostas!

The video shows a use case that was developed by the FZI with assistance of the KIT: the multi-robot retrieval of hazardous materials using two FZI robots as well as a KIT virtual reality environment.

[ FZI ]

Satisfying.

[ Soft Robotics ]

A year has passed since the launch of the ESA’s Rosalind Franklin rover mission was put on hold, but the work has not stopped for the ExoMars teams in Europe. In this program, the ESA Web TV crew travel back to Turin, Italy to talk to the teams and watch as new tests are being conducted with the rover’s Earth twin Amalia while the real rover remains carefully stored in an ultra-clean room.

[ ESA ]

Camilo Buscaron, chief technologist at AWS Robotics, sits down with Ramon Roche in this Behind the Tech episode to share his storied career in the robotics industry. Camilo explains how AWS provides a host of services for robotics developers, from simulation and streaming to basic real-time cloud storage.

[ Behind the Tech ]



Bioprinting is the use of 3D printing techniques to fabricate tissues out of biomaterials. It is mainly used to create human tissue for research and for drug testing in vitro. When used to create a body part intended to be implanted into a patient, the part must first be printed with a desktop bioprinter, and then large open-field surgery is typically required to place it. Besides the risk of infection and long recovery time, a mismatch between the printed part and the internal target tissue it’s being attached to is possible, as are problems arising from contamination and handling.

To overcome these challenges, researchers at the University of New South Wales, Sydney, Australia, have developed a miniature soft robotic arm and flexible printing head, and integrated them into a long tubular catheter that comprises the flexible printer body. Both the arm and printing head have three degrees of freedom (DoFs).

“Our flexible 3D bioprinter, designated F3DB, can directly deliver biomaterials onto the target tissue or organs with a minimally invasive approach,” says Thanh Nho Do, a Senior Lecturer at UNSW’s Graduate School of Biomedical Engineering, who together with his PhD student, Mai Thanh Thai, led the research team.

Not only does F3DB have the potential to directly reconstruct damaged parts of the body, it “can also be used as an all-in-one endoscopic surgical tool with the nozzle taking on the role of a surgical knife,” Do adds. “This would avoid the need for using different tools for cleaning, marking and incising now used in longer procedures such as removing a tumor.”

Prototype flexible 3D bioprinter can also serve as an all-purpose endoscopic surgical tool. Source: UNSW Sydney youtu.be

Though in situ bioprinting has been investigated for the past decade, “bioprinting onto internal organs has been limited due to various difficulties,” says Ibrahim Ozbolat, professor of engineering science and mechanics at Pennsylvania State University, commenting on the research details published in February’s Advanced Science. “This mobile all-in-one endoscopic bioprinting device is novel,” and could “advance existing techniques by allowing real-time observations, incisions, and bioprinting onto internal organs.”

The device has a similar diameter to an endoscope (about 11–13 mm), small enough to be inserted into the body through the mouth or anus. The soft robotic arm is actuated by three soft-fabric-bellow actuators regulated by a hydraulic system composed of DC-motor-driven syringes that pump water to the actuators. A flexible printing head, consisting of soft hydraulic artificial muscles, enables the printing nozzle to move in three directions, like that of a conventional desktop 3D printer. Overall control is through a master-slave setup that uses a commercial haptic system to transmit the operator’s hand motions from the master device.

On reaching the target, the arm and printing head are controlled by an automated algorithm based on inverse kinematics, a mathematical process that determines the motions necessary to deliver the biomaterials onto the surface of an internal organ or tissue. Printing is monitored by an attached flexible miniature camera.
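For readers unfamiliar with the term, the textbook two-link planar case below shows what an inverse-kinematics computation does: given a target point, it solves for the joint angles that reach it. This is a generic Python illustration, not the F3DB’s controller, which works on the printer’s own soft hydraulic joints:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link arm: return the
    joint angles (theta1, theta2) that place the tip at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)  # "elbow-down" of the two possible solutions
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

# Joint angles that put the tip of a 0.10 m + 0.10 m arm at (0.12, 0.08):
print(two_link_ik(0.12, 0.08, 0.10, 0.10))
```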

To test the device, the researchers first used various non-biomaterials such as liquid silicone and chocolate to print different multilayer 3D patterns in the lab. In further experiments, they printed various shapes with non-living materials on the surface of a pig’s kidney. Later, the researchers printed in situ living biomaterials on a glass surface inside an artificial colon.

“We saw the cells grow every day and increase by four times on day seven, the last day of the experiment,” says Do.

To test the device as an all-purpose tool for endoscopic surgery, the researchers performed various functions such as washing, marking, and dissecting the intestine of a pig. “The results show the F3DB has strong potential to be developed into an all-in-one endoscopic tool for endoscopic submucosal dissection procedures,” says Do.

Further improvements are needed, including the inclusion of more parameters in the kinematic model controlling the printing, and the addition of more cameras to better monitor the printing. “Then we will begin testing the device on animals, and eventually on humans,” says Do. “We hope to see the device in operation in hospitals in the next five to seven years.”

The device “has high potential to be successful,” agrees Ozbolat. “But its safety needs to be verified first and other improvements carried out.” He notes that 3D endoscopic robot arms are already in use clinically, so provided the device’s feasibility and safety are proven going forward, “commercialization can only be a matter of time.”



Soft robotics technology can aid in achieving United Nations’ Sustainable Development Goals (SDGs) and the Paris Climate Agreement through development of autonomous, environmentally responsible machines powered by renewable energy. By utilizing soft robotics, we can mitigate the detrimental effects of climate change on human society and the natural world through fostering adaptation, restoration, and remediation. Moreover, the implementation of soft robotics can lead to groundbreaking discoveries in material science, biology, control systems, energy efficiency, and sustainable manufacturing processes. However, to achieve these goals, we need further improvements in understanding biological principles at the basis of embodied and physical intelligence, environment-friendly materials, and energy-saving strategies to design and manufacture self-piloting and field-ready soft robots. This paper provides insights on how soft robotics can address the pressing issue of environmental sustainability. Sustainable manufacturing of soft robots at a large scale, exploring the potential of biodegradable and bioinspired materials, and integrating onboard renewable energy sources to promote autonomy and intelligence are some of the urgent challenges of this field that we discuss in this paper. Specifically, we will present field-ready soft robots that address targeted productive applications in urban farming, healthcare, land and ocean preservation, disaster remediation, and clean and affordable energy, thus supporting some of the SDGs. By embracing soft robotics as a solution, we can concretely support economic growth and sustainable industry, drive solutions for environment protection and clean energy, and improve overall health and well-being.



My favorite approach to human-robot interaction is minimalism. I’ve met a lot of robots, and some of the ones that have most effectively captured my heart are those that express themselves through their fundamental simplicity and purity of purpose. What’s great about simple, purpose-driven robots is that they encourage humans to project needs and wants and personality onto them, letting us do a lot of the human-robot-interaction (HRI) heavy lifting.

In terms of simple, purpose-driven robots, you can’t do much better than a robotic trash barrel (or bin or can or what have you). And in a paper presented at HRI 2023 this week, researchers from Cornell explored what happened when random strangers interacted with a pair of robotic trash barrels in NYC, with intermittently delightful results.

What’s especially cool about this is how much HRI takes place around these robots, which have essentially no explicit HRI features, since they’re literally just trash barrels on wheels. They don’t even have googly eyes! However, as the video notes, they’re controlled remotely by humans, so a lot of the movement-based expression they demonstrate likely comes from a human source—whether or not that’s intentional. These remote-controlled robots also move very differently from the way an autonomous robot would: folks who know how autonomous mobile robots work expect such machines to perform slow, deliberate motions along smooth trajectories. But as an earlier paper on trash barrel robots describes, most people expect the opposite:

One peculiarity we discovered is that individuals appear to have a low confidence in autonomy, associating poor navigation and social mistakes with autonomy. In other words, people were more likely to think that the robot was computer controlled if they observed it getting stuck, bumping into obstacles, or ignoring people’s attempts to draw its attention.

We initially stumbled upon this perception when a less experienced robot driver was experimenting with the controls, actively moving the robot in strange patterns. An observer nearby asserted that the robot “has to be autonomous. It’s too erratic to be controlled by a person!”

A lot of inferred personality can come from robots that make mistakes or need help. In many contexts this is a bug, but for simple social robots whose purpose is easily understood, it can turn into an endearing feature:

Due to the non-uniform pavement surface, the robots occasionally got stuck. People were keen to help the robots when they were in trouble. Some observers would proactively move chairs and obstacles to clear a path for the robots. Furthermore, people interpreted the back-and-forth wobbling motion as if the robots were nodding and agreeing with them, even when such motion was caused merely by uneven surfaces.

Another interesting thing going on here is how people expect that the robots want to be “fed” trash and recycling:

Occasionally, people thought the robots expected trash from them and felt obligated to give the robots something. As the robot passed and stopped by the same person for the second time, she said: “I guess it knows I’ve been sitting here long enough, I should give it something.” Some people would even find an excuse to generate trash to “satisfy” and dismiss the trash barrel by searching through a bag or picking rubbish up off the floor.

The earlier paper goes into a bit more detail on what this leads to:

It appears that people naturally attribute intrinsic motivation (or desire to fulfill some need) to the robot’s behavior and that mental model encourages them to interact with the robot in a social way by “feeding” the robot or expecting a social reciprocation of a thank you. Interestingly, the role casted upon the robot by the bystanders is reminiscent of a beggar where it prompts for collections and is expected to be thankful for donations. This contrasts sharply with human analogs such as waitstaff or cleanup janitors where they offer assistance and the receiving bystander is expected to express gratitude.

I wonder how much of this social interaction depends on the novelty of meeting the trash barrel robots for the first time, and whether (if these robots were to become full-time staff) humans would start treating them more like janitors. I’m also not sure how well these robots would do if they were autonomous. If part of the magic comes from having a human in the loop to manage what seem like (but probably aren’t) relatively simple human-robot interactions, turning that into effective autonomy could be a real challenge.

Trash Barrel Robots in the City, by Fanjun Bu, Ilan Mandel, Wen-Ying Lee, and Wendy Ju, is presented this week at HRI 2023 in Stockholm, Sweden.




Introduction: Wearable assistive devices for the visually impaired built around video cameras are evolving rapidly, and one of the main challenges is finding computer vision algorithms that can be implemented on low-cost embedded devices.

Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny YOLO) architecture for pedestrian detection that can be implemented on low-cost wearable devices as an alternative for developing assistive technologies for the visually impaired.
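
Anchor boxes are the preset bounding-box shapes that a YOLO detector refines into final detections. As a rough illustration (these are stock Tiny YOLO defaults for a 416-by-416 input, not the anchors refined in this study):

    # Stock Tiny-YOLO-style anchors as (width, height) pairs in pixels;
    # illustrative defaults only, not the values refined in the study.
    ANCHORS_6 = [(10, 14), (23, 27), (37, 58), (81, 82), (135, 169), (344, 319)]
    ANCHORS_4 = ANCHORS_6[2:]  # a hypothetical four-anchor subset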

Results: Compared with the original model, the recall of the proposed refined model improves by 71% when working with four anchor boxes and by 66% with six. The accuracy achieved on the same data set increases by 14% and 25%, respectively, and the F1 score improves by 57% and 55%. The average accuracy of the models improves by 87% and 99%. The refined model correctly detected 3098 objects with four anchor boxes and 2892 with six, outperforming by 77% and 65% the original model, which correctly detected 1743 objects.
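
Recall and F1 are derived from the same underlying detection counts; a minimal sketch of the standard definitions (with made-up counts, not figures from the paper):

    def detection_metrics(tp, fp, fn):
        # Precision, recall, and F1 from true-positive, false-positive,
        # and false-negative detection counts.
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical counts, for illustration only:
    p, r, f1 = detection_metrics(tp=3000, fp=500, fn=700)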

Discussion: Finally, the model was optimized both for the Jetson Nano embedded system, as a case study for low-power embedded devices, and for a desktop computer. In both cases the graphics processing unit (GPU) and the central processing unit (CPU) were tested, and a documented comparison with other solutions aimed at serving visually impaired people was performed.

Conclusion: We performed the desktop tests with an RTX 2070S graphics card, on which processing an image took about 2.8 ms. The Jetson Nano board could process an image in about 110 ms, offering the opportunity to generate alert-notification procedures in support of visually impaired mobility.
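
This kind of latency comparison can be reproduced with a short script. The sketch below assumes a Darknet-format Tiny YOLO model and an OpenCV build with CUDA support; the file names are hypothetical:

    import time
    import cv2

    # Hypothetical file names; substitute a real trained model.
    net = cv2.dnn.readNetFromDarknet("tiny-yolo-pedestrian.cfg",
                                     "tiny-yolo-pedestrian.weights")
    # Prefer the GPU backend on CUDA-capable hardware (RTX 2070S,
    # Jetson Nano); omit these two lines to fall back to the CPU.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

    img = cv2.imread("street_scene.jpg")  # hypothetical test image
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)

    net.setInput(blob)
    net.forward(net.getUnconnectedOutLayersNames())  # warm-up run

    start = time.perf_counter()
    for _ in range(100):
        net.setInput(blob)
        net.forward(net.getUnconnectedOutLayersNames())
    elapsed_ms = (time.perf_counter() - start) / 100 * 1000
    print(f"mean inference time: {elapsed_ms:.1f} ms")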
