Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

The industry standard for dangerous and routine autonomous inspections just got better, now with a brand-new set of features and hardware.

[ Boston Dynamics ]

For too long, dogs and vacuums have existed in a state of conflict. But Roomba robots are finally ready to make peace. To celebrate Pet Appreciation Week (4–10 June), iRobot is introducing T.R.E.A.T., an experimental prototype engineered to dispense dog treats on demand. Now dogs and vacuums can finally be friends.

[ T.R.E.A.T. ]

Legged robots have better adaptability in complex terrain, and wheeled robots move faster on flat surfaces. Unitree B-W, the ultimate speed all-rounder, combines the advantages of both types of robots and continues to bring new exploration and change to the industry.

[ Unitree ]

In this demonstration, Digit starts out knowing there is trash on the floor and that bins are used for recycling/trash. We use a voice command “Clean up this mess” to have Digit help us. Digit hears the command and uses a large language model to interpret how best to achieve the stated goal with its existing physical capabilities. At no point is Digit instructed on how to clean or what a mess is. This is an example of bridging the conversational nature of ChatGPT and other LLMs to generate real-world physical action.

[ Agility ]

Battery endurance represents a key challenge for long-term autonomy and long-range operations, especially in the case of aerial robots. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that combines a portable ground station with a flexible, lightweight charging tether and is capable of universal, highly efficient, and robust charging.

[ ARPL NYU ]

BruBotics secured a place in the Guinness World Records! Together with the visitors of the Nerdland Festival, they created the longest chain of robots ever, which also respond to light. Vrije Universiteit Brussel/Imec professor Bram Vanderborght and his team, consisting of Ellen Roels, Gabriël Van De Velde, Hendrik Cools, and Niklas Steenackers, have worked hard on the project in recent months. They set their record with a chain of 334 self-designed robots. The BruBotics research group at VUB aims to bring robots closer to people with their record. “Our main objective was to introduce participants to robots in an interactive way,” says Vanderborght. “And we are proud that we have succeeded.”

[ VUB ]

Based in Italy, Comau is a leading robot manufacturer and global systems integrator. The company has been working with Intrinsic over the past several years to validate our platform technology and our developer product Flowstate through real-world use cases. In a new video case study, we go behind the scenes to explore and hear firsthand how Comau and Intrinsic are working together. Comau is using Intrinsic Flowstate to assemble the rigid components of a supermodule for a plug-in hybrid electric vehicle (PHEV).

[ Intrinsic ]

Thanks, Scott!

GITAI has achieved a significant milestone with the successful demonstration of its inchworm-type robotic arm, equipped with a tool-changer function, and its lunar robotic rover in a simulated regolith chamber featuring 7 tons of regolith simulant (LHS-1E).

[ GITAI ]

Uhh, pinch points...?

[ Deep Robotics ]

Detect, fetch, and collect. A seemingly easy task is being tested to find the best strategy to collect samples on the Martian surface, some 290 million kilometers away from home. The Sample Transfer Arm will need to load the tubes from the Martian surface for delivery to Earth. ESA’s robotic arm will collect them from the Perseverance rover, and possibly others dropped by sample-recovery helicopters as a backup.

[ ESA ]

Wing’s AutoLoader for curbside pickup.

[ Wing ]

MIT Mechanical Engineering students in Professor Sangbae Kim’s class explore why certain physical traits have evolved in animals in the natural world. Then they extract those useful principles that are applicable to robotic systems to solve such challenges as manipulation and locomotion in novel and interesting ways.

[ MIT ]

I get that it’s slightly annoying that robot vacuums generally cannot clean stairs, but I’m not sure that it’s a problem actually worth solving.

https://gizmodo.com/migo-ascender-first-robot-vacu...

Also, the actual existence of this thing is super sketchy, and I wouldn’t give them any money just yet.

[ Migo ] via [ Gizmodo ]

The fastest, tiniest, mouse-iest competition for how well robots can stick to smooth surfaces.

[ Veritasium ]

Art and language are pinnacles of human expressive achievement. This panel, part of the Stanford HAI Spring Symposium on 24 May 2023, offered conversations between artists and technologists about intersections in their work. Speakers included Ken Goldberg, professor of industrial engineering and operations research, University of California, Berkeley, and Sydney Skybetter, deputy dean of the College for Curriculum and Co-Curriculum and senior lecturer in theater arts and performance studies, Brown University. Moderated by Catie Cuan, Stanford University.

[ Stanford HAI ]

An ICRA 2023 Plenary from 90-year-old living legend Jasia Reichardt (who coined the term “uncanny valley” in 1978), linking robots with Turing, Fellini, Asimov, and Buddhism.

[ ICRA 2023 ]

Thanks, Ken!



Inspired by dog-agility courses, a team of scientists from Google DeepMind has developed a robot-agility course called Barkour to test the abilities of four-legged robots.

Since the 1970s, dogs have been trained to nimbly jump through hoops, scale inclines, and weave between poles in order to demonstrate agility. To take home ribbons at these competitions, dogs must have not only speed but keen reflexes and attention to detail. These courses also set a benchmark for how agility should be measured across breeds, which is something that Atil Iscen—a Google DeepMind scientist in Denver—says is lacking in the world of four-legged robots.

Despite great developments in the past decade, including robots like MIT’s Mini Cheetah and Boston Dynamics’ Spot that have shown just how animal-like robot movement can be, a lack of standardized tasks for these types of robots has made it difficult to compare their progress, Iscen says.

Quadruped Obstacle Course Provides New Robot Benchmark [ YouTube ]

“Unlike previous benchmarks developed for legged robots, Barkour contains a diverse set of obstacles that requires a combination of different types of behaviors such as precise walking, climbing, and jumping,” Iscen says. “Moreover, our timing-based metric to reward faster behavior encourages researchers to push the boundaries of speed while maintaining requirements for precision and diversity of motion.”

For their reduced-size agility course—the Barkour course covers about 25 square meters, instead of the up to 743 square meters used for traditional courses—Iscen and colleagues chose four obstacles from traditional dog-agility courses: a pause table, weave poles, an A-frame climb, and a jump.

The Barkour robotic-quadruped benchmark course uses four obstacles from traditional dog-agility courses and standardizes a set of performance metrics around subjects’ timings on the course. Google

“We picked these obstacles to put multiple axes of agility, including speed, acceleration, and balance,” he said. “It is also possible to customize the course further by extending it to contain other types of obstacles within a larger area.”

As in dog-agility competitions, robots that enter this course are deducted points for failing or missing an obstacle, as well as for exceeding the course’s time limit of roughly 11 seconds. To see how difficult their course was, the DeepMind team developed two different learning approaches to the course: a specialist approach that trained on each type of skill needed for the course—for example, jumping or slope climbing—and a generalist approach that trained by studying simulations run using the specialist approach.
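
To make the scoring scheme concrete, here is a minimal Python sketch of a Barkour-style agility score, in which penalties accrue for failed obstacles and for exceeding the allotted course time. The penalty weights and the 11-second allotment below are illustrative assumptions for this sketch, not the benchmark’s published parameters.

```python
# Hedged sketch of a Barkour-style agility score: deductions for failed or
# skipped obstacles plus a deduction for running over the allotted time.
# The weights and the 11 s allotment are assumptions made for illustration.

ALLOTTED_TIME_S = 11.0           # approximate course time limit from the article
OBSTACLE_PENALTY = 0.1           # assumed deduction per failed/skipped obstacle
OVERTIME_PENALTY_PER_S = 0.01    # assumed deduction per second over the limit

def agility_score(failed_obstacles: int, course_time_s: float) -> float:
    """Return a score in [0, 1], where 1.0 is a clean, on-time run."""
    penalty = failed_obstacles * OBSTACLE_PENALTY
    penalty += max(0.0, course_time_s - ALLOTTED_TIME_S) * OVERTIME_PENALTY_PER_S
    return max(0.0, 1.0 - penalty)

# A clean specialist-policy run (about 25 s) versus a small dog (under 10 s):
print(agility_score(failed_obstacles=0, course_time_s=25.0))  # ~0.86
print(agility_score(failed_obstacles=0, course_time_s=9.5))   # 1.0
```

Under a metric of this shape, finishing faster matters as much as clearing every obstacle, which is the behavior the timing-based reward is meant to encourage.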

After training four-legged robots in both of these different styles, the team released them onto the course and found that robots trained with the specialist approach slightly edged out those trained with the generalized approach. The specialists completed the course in about 25 seconds, while the generalists took closer to 27 seconds. However, robots trained with both approaches not only exceeded the course time limit but were also surpassed by two small dogs—a Pomeranian/Chihuahua mix and a Dachshund—that completed the course in less than 10 seconds.

Here, an actual dog [left] and a robotic quadruped [right] ascend and then begin their descent on the Barkour course’s A-frame challenge. Google

“There is still a big gap in agility between robots and their animal counterparts, as demonstrated in this benchmark,” the team wrote in their conclusion.

While the robots’ performance may have fallen short of expectations, the team writes that this is actually a positive because it means there’s still room for growth and improvement. In the future, Iscen hopes that the easy reproducibility of the Barkour course will make it an attractive benchmark to be employed across the field.


“We proactively considered reproducibility of the benchmark and kept the cost of materials and footprint to be low,” Iscen says. “We would love to see Barkour setups pop up in other labs and we would be happy to share our lessons learned about building it, if other research teams interested in the work can reach out to us. We would like to see other labs adopting this benchmark so that the entire community can tackle this challenging problem together.”

As for the DeepMind team, Iscen says they’re also interested in exploring another aspect of dog-agility courses in their future work: the role of human partners.

“At the surface, (real) dog-agility competitions appear to be only about the dog’s performance. However, a lot comes to the fleeting moments of communication between the dog and its handler,” he explains. “In this context, we are eager to explore human-robot interactions, such as how can a handler work with a legged robot to guide it swiftly through a new obstacle course.”

A paper describing DeepMind’s Barkour course was published on the arXiv preprint server in May.



In this paper, a distributed cooperative filtering strategy for state estimation has been developed for mobile sensor networks in a spatial–temporal varying field modeled by the advection–diffusion equation. Sensors are organized into distributed cells that resemble a mesh grid covering a spatial area, and estimation of the field value and gradient information at each cell center is obtained by running a constrained cooperative Kalman filter while incorporating the sensor measurements and information from neighboring cells. Within each cell, the finite volume method is applied to discretize and approximate the advection–diffusion equation. These approximations build the weakly coupled relationships between neighboring cells and define the constraints that the cooperative Kalman filters are subjected to. With the estimated information, a gradient-based formation control law has been developed that enables the sensor network to adjust formation size by utilizing the estimated gradient information. Convergence analysis has been conducted for both the distributed constrained cooperative Kalman filter and the formation control. Simulation results with a 9-cell 12-sensor network validate the proposed distributed filtering method and control law.
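
As a rough, self-contained illustration of the kind of discretization the abstract refers to, the sketch below advances a 2D advection–diffusion field on a uniform cell grid using finite-volume-style flux approximations between neighboring cells. The grid size, velocity, diffusivity, and time step are made-up values for the sketch, and it does not reproduce the paper’s constrained cooperative Kalman filter.

```python
import numpy as np

# Minimal sketch of an explicit finite-volume-style update for the 2D
# advection-diffusion equation  ds/dt + u . grad(s) = D * laplacian(s)
# on a uniform grid with periodic boundaries. All parameters below are
# illustrative assumptions, not values from the paper.

nx, ny, h, dt = 32, 32, 1.0, 0.05
D = 0.5            # diffusion coefficient (assumed)
ux, uy = 0.3, 0.1  # advection velocity components (assumed, positive)
s = np.zeros((nx, ny))
s[nx // 2, ny // 2] = 1.0  # initial concentration spike

def step(s):
    # Upwind advective differences (valid for positive ux, uy) and a
    # central five-point stencil for the diffusive term.
    adv_x = ux * (s - np.roll(s, 1, axis=0)) / h
    adv_y = uy * (s - np.roll(s, 1, axis=1)) / h
    lap = (np.roll(s, 1, 0) + np.roll(s, -1, 0) +
           np.roll(s, 1, 1) + np.roll(s, -1, 1) - 4.0 * s) / h**2
    return s + dt * (D * lap - adv_x - adv_y)

for _ in range(200):
    s = step(s)

# Each cell's update depends only on its neighbors, which is what makes a
# distributed estimator plausible: a cell center needs information from
# adjacent cells, not from the whole field.
```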

Frictionally yielding media are a particular type of non-Newtonian fluid that deforms significantly under stress and does not recover its original shape. Mud, snow, soil, leaf litter, and sand are such substrates: they flow when stress is applied but do not bounce back when it is released. Some robots have been designed to move on these substrates. However, compared with robots that move on solid ground, significantly fewer prototypes have been developed, and only a few have been demonstrated outside the research laboratory. This paper surveys the existing biology and robotics literature to analyze the physical principles that facilitate motion on yielding substrates. We categorize animal and robot locomotion by mechanical principle and then by the nature of the contact: discrete contact, continuous contact above the material, or movement through the medium. We then extract the hardware solutions and motion strategies that enable different robots and animals to make progress. The result reveals which design principles are widely used and which may represent research gaps for robotics. We also discuss how a higher level of abstraction helps transfer these solutions to the robotics domain, even when the robot is not explicitly meant to be bio-inspired. The contribution of this paper is a review of the biology and robotics literature that identifies locomotion principles applicable to future robot design in yielding environments, as well as a catalog of existing solutions, natural and man-made, for locomotion on yielding ground.

In the past two decades, there has been increasing interest in autonomous multi-robot systems for space use. They can assemble space structures and provide services for other space assets, and the performance, stability, and robustness of these operations are of utmost importance. By considering system dynamics and constraints, the Model Predictive Control (MPC) framework optimizes performance, and its receding-horizon nature can offer greater robustness than other methods. However, the current literature on applying MPC to space robotics focuses primarily on linear models, which are not suitable for highly nonlinear multi-robot systems. Although Nonlinear MPC (NMPC) shows promise for free-floating space manipulators, current NMPC applications are limited to unconstrained nonlinear systems and do not guarantee closed-loop stability. This paper introduces a novel passivity-based approach to NMPC for multi-robot systems in space applications. By utilizing a passivity-based state constraint and a terminal storage function, the proposed PNMPC scheme ensures closed-loop stability and superior performance. Because stability and passivity are closely related, this approach offers an alternative to the control Lyapunov function for controlling nonlinear multi-robot space systems. Finally, the paper demonstrates that the benefits of passivity-based concepts and NMPC can be combined into a single NMPC scheme that maintains the advantages of each: closed-loop stability through passivity and good performance through online optimization.
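
For readers unfamiliar with NMPC, the following generic formulation sketches the kind of finite-horizon problem such a scheme solves at each sampling instant, with a terminal cost and a passivity-style constraint added. The symbols here (stage cost ℓ, storage function S, supply rate w, terminal set X_f) are generic placeholders, not the paper’s actual PNMPC formulation.

```latex
\begin{aligned}
\min_{u(\cdot)} \quad & \int_{t}^{t+T} \ell\big(x(\tau),u(\tau)\big)\,d\tau \;+\; V_f\big(x(t+T)\big) \\
\text{s.t.} \quad & \dot{x}(\tau) = f\big(x(\tau),u(\tau)\big), \qquad x(t) = x_{\mathrm{meas}}, \\
& \tfrac{d}{d\tau}\, S\big(x(\tau)\big) \le w\big(u(\tau),y(\tau)\big) \quad \text{(passivity-based constraint)}, \\
& x(t+T) \in \mathcal{X}_f .
\end{aligned}
```

Only the first portion of the optimized input is applied before the horizon recedes and the problem is solved again from the newly measured state.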



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week, we’re featuring a special selection of videos from ICRA 2023! We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT, MICHIGAN, USA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

“Autonomous Drifting With 3 Minutes of Data Via Learned Tire Models,” by Franck Djeumou, Jonathan Y.M. Goh, Ufuk Topcu, and Avinash Balachandran from University of Texas at Austin, USA, and Toyota Research Institute, Los Altos, Calif., USA.

Abstract: Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data – less than three minutes – is sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45 mph. Comparisons with the benchmark model show a 4x improvement in tracking performance, smoother control inputs, and faster and more consistent computation time.

“TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor With Tiltable Propulsion Units,” by Xuchen Liu, Minghao Dou, Dongyue Huang, Songqun Gao, Ruixin Yan, Biao Wang, Jinqiang Cui, Qinyuan Ren, Lihua Dou, Zhi Gao, Jie Chen, and Ben M. Chen from Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China; Chinese University of Hong Kong, Hong Kong, China; Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China; Zhejiang University, Hangzhou, Zhejiang, China; Beijing Institute of Technology, Beijing, China; and Wuhan University, Wuhan, Hubei, China.

Abstract: Aerial-aquatic vehicles are capable to move in the two most dominant fluids, making them more promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing the underwater maneuverability. This paper presents a quadrotor prototype of this concept and the design details and realization in practice.

“Towards Safe Landing of Falling Quadruped Robots Using a 3-DoF Morphable Inertial Tail,” by Yunxi Tang, Jiajun An, Xiangyu Chu, Shengzhi Wang, Ching Yan Wong, and K. W. Samuel Au from The Chinese University of Hong Kong, Hong Kong, and Multiscale Medical Robotics Centre, Hong Kong.

Abstract: Falling cat problem is well-known where cats show their super aerial reorientation capability and can land safely. For their robotic counterparts, a similar falling quadruped robot problem, has not been fully addressed, although achieving safe landing as the cats has been increasingly investigated. Unlike imposing the burden on landing control, we approach to safe landing of falling quadruped robots by effective flight phase control. Different from existing work like swinging legs and attaching reaction wheels or simple tails, we propose to deploy a 3-DoF morphable inertial tail on a medium-size quadruped robot. In the flight phase, the tail with its maximum length can self-right the body orientation in 3D effectively; before touch-down, the tail length can be retracted to about 1/4 of its maximum for impressing the tail’s side-effect on landing. To enable aerial reorientation for safe landing in the quadruped robots, we design a control architecture, which is verified in a high-fidelity physics simulation environment with different initial conditions. Experimental results on a customized flight-phase test platform with comparable inertial properties are provided and show the tail’s effectiveness on 3D body reorientation and its fast retractability before touch-down. An initial falling quadruped robot experiment is shown, where the robot Unitree A1 with the 3-DoF tail can land safely subject to non-negligible initial body angles.

“Nonlinear Model Predictive Control of a 3D Hopping Robot: Leveraging Lie Group Integrators for Dynamically Stable Behaviors,” by Noel Csomay-Shanklin, Victor D. Dorobantu, and Aaron D. Ames from California Institute of Technology, Pasadena, Calif., USA.

Abstract: Achieving stable hopping has been a hallmark challenge in the field of dynamic legged locomotion. Controlled hopping is notably difficult due to extended periods of underactuation combined with very short ground phases wherein ground interactions must be modulated to regulate global state. In this work, we explore the use of hybrid nonlinear model predictive control paired with a low-level feedback controller in a multi-rate hierarchy to achieve dynamically stable motions on a novel 3D hopping robot. In order to demonstrate richer behaviors on the manifold of rotations, both the planning and feedback layers must be designed in a geometrically consistent fashion; therefore, we develop the necessary tools to employ Lie group integrators and appropriate feedback controllers. We experimentally demonstrate stable 3D hopping on a novel robot, as well as trajectory tracking and flipping in simulation.

“Fast Untethered Soft Robotic Crawler with Elastic Instability,” by Zechen Xiong, Yufeng Su, and Hod Lipson from Columbia University, New York, NY, USA.

Abstract: Enlightened by the fast-running gait of mammals like cheetahs and wolves, we design and fabricate a single-actuated untethered compliant robot that is capable of galloping at a speed of 313 mm/s or 1.56 body length per second (BL/s), faster than most reported soft crawlers in mm/s and BL/s. An in-plane prestressed hair clip mechanism (HCM) made up of semi-rigid materials, i.e. plastics are used as the supporting chassis, the compliant spine, and the force amplifier of the robot at the same time, enabling the robot to be simple, rapid, and strong. With experiments, we find that the HCM robotic locomotion speed is linearly related to actuation frequencies and substrate friction differences except for concrete surface, that tethering slows down the crawler, and that asymmetric actuation creates a new galloping gait. This paper demonstrates the potential of HCM-based soft robots.

“Nature Inspired Machine Intelligence from Animals to Robots,” by Thirawat Chuthong, Wasuthorn Ausrivong, Binggwong Leung, Jettanan Homchanthanakul, Nopparada Mingchinda, and Poramate Manoonpong from Vidyasirimedhi Institute of Science and Technology (VISTEC), Thailand, and The Maersk Mc-Kinney Moller Institute, University of Southern Denmark.

Abstract: In nature, living creatures show versatile behaviors. They can move on various terrains and perform impressive object manipulation/transportation using their legs. Inspired by their morphologies and control strategies, we have developed bio-inspired robots and adaptive modular neural control. In this video, we demonstrate our five bio-inspired robots in our robot zoo setup. Inchworm-inspired robots with two electromagnetic feet (Freelander-02 and AVIS) can adaptively crawl and balance on horizontal and vertical metal pipes. With special design, the Freelander-02 robot can adapt its posture to crawl underneath an obstacle, while the AVIS robot can step over a flange. A millipede-inspired robot with multiple body segments (Freelander-08) can proactively adapt its body joints to efficiently navigate on bump terrain. A dung beetle-inspired robot (ALPHA) can transport an object by grasping the object with its hind legs and at the same time walk backward with the remaining legs like dung beetles. Finally, an insect-inspired robot (MORF), which is a hexapod robot platform, demonstrates typical insect-like gaits (slow wave and fast tripod gaits). In a nutshell, we believe that this bio-inspired robot zoo demonstrates how the diverse and fascinating abilities of living creatures can serve as inspiration and principles for developing robotics technology capable of achieving multiple robotic functions and solving complex motor control problems in systems with many degrees of freedom.

“AngGo: Shared Indoor Smart Mobility Device,” by Yoon Joung Kwak, Haeun Park, Donghun Kang, Byounghern Kim, Jiyeon Lee, and Hui Sung Lee from Ulsan National Institute of Science and Technology (UNIST), in Ulsan, South Korea.

Abstract: AngGo is a hands-free shared indoor smart mobility device for public use. AngGo is a personal mobility device that is suitable for the movement of passengers in huge indoor spaces such as convention centers or airports. The user can use both hands freely while riding the AngGo. Unlike existing mobility devices, the mobility device that can be maneuvered using the feet was designed to be as intuitive as possible. The word “AngGo” is pronounced like a Korean word meaning “sit down and move.” There are 6 ToF distance sensors around AngGo. Half of them are in the front part and the other half are in the rear part. In the autonomous mode, AngGo avoids obstacles based on the distance from each sensor. IR distance sensors are mounted under the footrest to measure the extent to which the footrest is moved forward or backward, and these data are used to control the rotational speed of motors. The user can control the speed and the direction of AngGo simultaneously. The spring in the footrest generates force feedback, so the user can recognize the amount of variation.

“Creative Robotic Pen-Art System,” by Daeun Song and Young Jun Kim from Ewha Womans University in Seoul, South Korea.

Abstract: Since the Renaissance, artists have created artworks using novel techniques and machines, deviating from conventional methods. The robotic drawing system is one of such creative attempts that involves not only the artistic nature but also scientific problems that need to be solved. Robotic drawing problems can be viewed as planning the robot’s drawing path that eventually leads to the art form. The robotic pen-art system imposes new challenges, unlike robotic painting, requiring the robot to maintain stable contact with the target drawing surface. This video showcases an autonomous robotic system that creates pen art on an arbitrary canvas surface without restricting its size or shape. Our system converts raster or vector images into piecewise-continuous paths depending on stylistic choices, such as TSP art or stroke-based drawing. Our system consists of multiple manipulators with mobility and performs stylistic drawing tasks. In order to create a more extensive pen art, the mobile manipulator setup finds a minimal number of discrete configurations for the mobile platform to cover the ample canvas space. The dual manipulator setup can generate multi-color pen art using adaptive 3-finger grippers with a pen-tool-change mechanism. We demonstrate that our system can create visually pleasing and complicated pen art on various surfaces.

“I Know What You Want: A ‘Smart Bartender’ System by Interactive Gaze Following,” by Haitao Lin, Zhida Ge, Xiang Li, Yanwei Fu, and Xiangyang Xue from Fudan University, in Shanghai, China.

Abstract: We developed a novel “Smart Bartender” system, which can understand the intention of users just from the eye gaze, and make some corresponding actions. Particularly, we believe that a cyber-barman who cannot feel our faces is not an intelligent one. We thus aim at building a novel cyber-barman by capturing and analyzing the intention of the customers on the fly. Technically, such a system enables the user to select a drink simply by staring at it. Then the robotic arm mounted with a camera will automatically grasp the target bottle, and pour the liquid into the cup. To achieve this goal, we firstly adopt YOLO to detect candidate drinks. Then, the GazeNet is utilized to generate potential gaze center for grounding the target bottle that has minimum center-to-center distance. Finally, we use object pose estimation and path planning algorithms to guide the robotic arm to grasp the target bottle and execute pouring. Our system integrated with the category-level object pose estimation enjoys powerful performance, generalizing to various unseen bottles and cups which are not used for training. We believe our system would not only reduce the intensive human labor in different service scenarios, but also provide users with interactivity and enjoyment.

“Towards Aerial Humanoid Robotics: Developing the Jet-Powered Robot iRonCub,” by Daniele Pucci, Gabriele Nava, Fabio Bergonti, Fabio Di Natale, Antonello Paolino, Giuseppe L’erario, Affaf Junaid Ahamad Momin, Hosameldin Awadalla Omer Mohamed, Punith Reddy Vanteddu, and Francesca Bruzzone from the Italian Institute of Technology (IIT), in Genoa, Italy.

Abstract: The current state of robotics technology lacks a platform that can combine manipulation, aerial locomotion, and bipedal terrestrial locomotion. Therefore, we define aerial humanoid robotics as the outcome of platforms with these three capabilities. To implement aerial humanoid robotics on the humanoid robot iCub, we conduct research in different directions. This includes experimental research on jet turbines and co-design, which is necessary to implement aerial humanoid robotics on the real iCub. These activities aim to model and identify the jet turbines. We also investigate flight control of flying humanoid robots using Lyapunov-quadratic-programming based control algorithms to regulate both the attitude and position of the robot. These algorithms work independently of the number of jet turbines installed on the robot and ensure satisfaction of physical constraints associated with the jet engines. In addition, we research computational fluid dynamics for aerodynamics modeling. Since the aerodynamics of a multi-body system like a flying humanoid robot is complex, we use CFD simulations with Ansys to extract a simplified model for control design, as there is little space for closed-form expressions of aerodynamic effects.

“AMEA Autonomous Electrically Operated One-Axle Mowing Robot,” by Romano Hauser, Matthias Scholer, and Katrin Solveig Lohan from Eastern Switzerland University of Applied Sciences (OST), in St. Gallen, Switzerland, and Heriot-Watt University, in Edinburgh, Scotland.

Abstract: The goal of this research project (Consortium: Altatek GmbH, Eastern Switzerland University of Applied Sciences OST, Faculty of Law University of Zurich) was the development of a multifunctional, autonomous single-axle robot with an electric drive. The robot is customized for agricultural applications in mountainous areas with steepest slopes. The intention is to relieve farmers from arduous and safety critical work. Furthermore, the robot is developed as a modular platform which can be used for work in forestry, municipal, sports fields and winter/snow applications. Robot features: Core feature is the patented center of gravity control. With a sliding wheel axle of 800mm, hills up to a steepness of 35° (70%) can be easily driven and a safe operation without tipping can be ensured. To make the robot more sustainable electric drives and a 48V battery were equipped. To navigate in mountainous areas several sensors are used. In difference to applications on flat areas the position and gradient of the robot on the slope needs to be measured and considered in the path planning. A sensor system which detects possible obstacles and especially humans or animals which could be in the path of the robot is currently under development.

“Surf Zone Exploration With Crab-Like Legged Robots,” by Yifeng Gong, John Grezmak, Jianfeng Zhou, Nicole Graf, Zhili Gong, Nathan Carmichael, Airel Foss, Glenna Clifton, and Kathryn A. Daltorio from Case Western Reserve University, in Cleveland, Ohio, USA, and University of Portland, in Portland, Oregon, USA.

Abstract: Surf zones are challenging for walking robots if they cannot anchor to the substrate, especially at the transition between dry sand and waves. Crab-like dactyl designs enable robots to achieve this anchoring behavior while still being lightweight enough to walk on dry sand. Our group has been developing a series of crab-like robots to achieve the transition from walking on underwater surfaces to walking on dry land. Compared with the default forward-moving gait, we find that inward-pulling gaits and sideways walking increase efficiency in granular media. By using soft dactyls, robots can probe the ground to classify substrates, which can help modify gaits to better suit the environment and recognize hazardous conditions. Dactyls can also be used to securely grasp the object and dig in the substrate for installing cables, searching for buried objects, and collecting sediment samples. To simplify control and actuation, we developed a four-degree-freedom Klann mechanism robot, which can climb onto an object and then grasp it. In addition, human interfaces will improve our ability to precisely control the robot for these types of tasks. In particular, the US government has identified munitions retrieval as an environmental priority through their Strategic Environmental Research Development Program. Our goal is to support these efforts with new robots.

“Learning Exploration Strategies to Solve Real-World Marble Runs,” by Alisa Allaire and Christopher G. Atkeson from the Robotics Institute, Carnegie Mellon University, Pittsburgh, Penn., USA.

Abstract: Tasks involving locally unstable or discontinuous dynamics (such as bifurcations and collisions) remain challenging in robotics, because small variations in the environment can have a significant impact on task outcomes. For such tasks, learning a robust deterministic policy is difficult. We focus on structuring exploration with multiple stochastic policies based on a mixture of experts (MoE) policy representation that can be efficiently adapted. The MoE policy is composed of stochastic sub-policies that allow exploration of multiple distinct regions of the action space (or strategies) and a high-level selection policy to guide exploration towards the most promising regions. We develop a robot system to evaluate our approach in a real-world physical problem solving domain. After training the MoE policy in simulation, online learning in the real world demonstrates efficient adaptation within just a few dozen attempts, with a minimal sim2real gap. Our results confirm that representing multiple strategies promotes efficient adaptation in new environments and strategies learned under different dynamics can still provide useful information about where to look for good strategies.

“Flipbot: Learning Continuous Paper Flipping Via Coarse-To-Fine Exteroceptive-Proprioceptive Exploration,” by Chao Zhao, Chunli Jiang, Junhao Cai, Michael Yu Wang, Hongyu Yu, and Qifeng Chen from Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, and HKUST - Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen.

Abstract: This paper tackles the task of singulating and grasping paper-like deformable objects. We refer to such tasks as paper-flipping. In contrast to manipulating deformable objects that lack compression strength (such as shirts and ropes), minor variations in the physical properties of the paper-like deformable objects significantly impact the results, making manipulation highly challenging. Here, we present Flipbot, a novel solution for flipping paper-like deformable objects. Flipbot allows the robot to capture object physical properties by integrating exteroceptive and proprioceptive perceptions that are indispensable for manipulating deformable objects. Furthermore, by incorporating a proposed coarse-to-fine exploration process, the system is capable of learning the optimal control parameters for effective paper-flipping through proprioceptive and exteroceptive inputs. We deploy our method on a real-world robot with a soft gripper and learn in a self-supervised manner. The resulting policy demonstrates the effectiveness of Flipbot on paper-flipping tasks with various settings beyond the reach of prior studies, including but not limited to flipping pages throughout a book and emptying paper sheets in a box. The code is available here: https://robotll.github.io/Flipbot/

“Croche-Matic: A Robot for Crocheting 3D Cylindrical Geometry,” by Gabriella Perry, Jose Luis Garcia del Castillo y Lopez, and Nathan Melenbrink from Harvard University, in Cambridge, Mass., USA.

Abstract: Crochet is a textile craft that has resisted mechanization and industrialization except for a select number of one-off crochet machines. These machines are only capable of producing a limited subset of common crochet stitches. Crochet machines are not used in the textile industry, yet mass-produced crochet objects and clothes sold in stores like Target and Zara are almost certainly the products of crochet sweatshops. The popularity of crochet and the existence of crochet products in major chain stores shows that there is both a clear demand for this craft as well as a need for it to be produced in a more ethical way. In this paper, we present Croche-Matic, a radial crochet machine for generating three-dimensional cylindrical geometry. The Croche-Matic is designed based on Magic Ring technique, a method for hand crocheting 3D cylindrical objects. The machine consists of nine mechanical axes that work in sequence to complete different types of crochet stitches, and includes a sensor component for measuring and regulating yarn tension within the mechanical system. Croche-Matic can complete the four main stitches used in Magic Ring technique. It has a success rate of 50.7% with single crochet stitches, and has demonstrated an ability to create three-dimensional objects.

“SOPHIE: SOft and Flexible Aerial Vehicle for PHysical Interaction with the Environment,” by F. Ruiz, B. C. Arrue, and A. Ollero from GRVC Robotics Lab of Seville, Spain.

Abstract: This letter presents the first design of a soft and lightweight UAV, entirely 3D-printed in flexible filament. The drone’s flexible arms are equipped with a tendon-actuated bending system, which is used for applications that require physical interaction with the environment. The flexibility of the UAV can be controlled during the additive manufacturing process by adjusting the infill rate ρTPU distribution. However, the increase in flexibility implies difficulties in controlling the UAV, as well as structural, aerodynamic, and aeroelastic effects. This article provides insight into the dynamics of the system and validates the flyability of the vehicle for densities as low as 6%. Within this range, quasi-static arm deformations can be considered, thus the autopilot is fed back through a static arm deflection model. At lower densities, strong non-linear elastic dynamics appear, which translates to complex modeling, and it is suggested to switch to data-based approaches. Moreover, this work demonstrates the ability of the soft UAV to perform full-body perching, specifically landing and stabilizing on pipelines and irregular surfaces without the need for an auxiliary system.

“Reconfigurable Drone System for Transportation of Parcels with Variable Mass and Size,” by Fabrizio Schiano, Przemyslaw Mariusz Kornatowski, Leonardo Cencetti, and Dario Floreano from École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and Leonardo S.p.A., Leonardo Labs, Rome, Italy.

Abstract: Cargo drones are designed to carry payloads with predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and mass. We also propose a method for the automatic generation of drone configurations and suitable parameters for the flight controller. The parcel becomes the drone’s body to which several individual propulsion modules are attached. We demonstrate the use of the reconfigurable hardware and the accompanying software by transporting parcels of different mass and sizes requiring various numbers and propulsion modules’ positioning. The experiments are conducted indoors (with a motion capture system) and outdoors (with an RTK-GNSS sensor). The proposed design represents a cheaper and more versatile alternative to the solutions involving several drones for parcel transportation.


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week, we’re featuring a special selection of videos from ICRA 2023! We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USARoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCERSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREAIEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREAIROS 2023: 1–5 October 2023, DETROIT, MICHIGAN, USACLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZILHumanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

“Autonomous Drifting With 3 Minutes of Data Via Learned Tire Models,” by Franck Djeumou, Jonathan Y.M. Goh, Ufuk Topcu, and Avinash Balachandran from University of Texas at Austin, USA, and Toyota Research Institute, Los Altos, Calif., USA.

Abstract: Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data – less than three minutes – is sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45 mph. Comparisons with the benchmark model show a 4x improvement in tracking performance, smoother control inputs, and faster and more consistent computation time. “TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor With Tiltable Propulsion Units,” by Xuchen Liu, Minghao Dou, Dongyue Huang, Songqun Gao, Ruixin Yan, Biao Wang, Jinqiang Cui, Qinyuan Ren, Lihua Dou, Zhi Gao, Jie Chen, and Ben M. Chen from Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China; Chinese University of Hong Kong, Hong Kong, China; Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China; Zhejiang University, Hangzhou, Zhejiang, China; Beijing Institute of Technology, Beijing, China; and Wuhan University, Wuhan, Hubei, China.

Abstract: Aerial-aquatic vehicles are capable to move in the two most dominant fluids, making them more promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing the underwater maneuverability. This paper presents a quadrotor prototype of this concept and the design details and realization in practice. “Towards Safe Landing of Falling Quadruped Robots Using a 3-DoF Morphable Inertial Tail,” by Yunxi Tang, Jiajun An, Xiangyu Chu, Shengzhi Wang, Ching Yan Wong, and K. W. Samuel Au from The Chinese University of Hong Kong, Hong Kong, and Multiscale Medical Robotics Centre, Hong Kong.

Abstract: Falling cat problem is well-known where cats show their super aerial reorientation capability and can land safely. For their robotic counterparts, a similar falling quadruped robot problem, has not been fully addressed, although achieving safe landing as the cats has been increasingly investigated. Unlike imposing the burden on landing control, we approach to safe landing of falling quadruped robots by effective flight phase control. Different from existing work like swinging legs and attaching reaction wheels or simple tails, we propose to deploy a 3-DoF morphable inertial tail on a medium-size quadruped robot. In the flight phase, the tail with its maximum length can self-right the body orientation in 3D effectively; before touch-down, the tail length can be retracted to about 1/4 of its maximum for impressing the tail’s side-effect on landing. To enable aerial reorientation for safe landing in the quadruped robots, we design a control architecture, which is verified in a high-fidelity physics simulation environment with different initial conditions. Experimental results on a customized flight-phase test platform with comparable inertial properties are provided and show the tail’s effectiveness on 3D body reorientation and its fast retractability before touch-down. An initial falling quadruped robot experiment is shown, where the robot Unitree A1 with the 3-DoF tail can land safely subject to non-negligible initial body angles. “Nonlinear Model Predictive Control of a 3D Hopping Robot: Leveraging Lie Group Integrators for Dynamically Stable Behaviors,” by Noel Csomay-Shanklin, Victor D. Dorobantu, and Aaron D. Ames from California Institute of Technology, Pasadena, Calif., USA.

“Nonlinear Model Predictive Control of a 3D Hopping Robot: Leveraging Lie Group Integrators for Dynamically Stable Behaviors,” by Noel Csomay-Shanklin, Victor D. Dorobantu, and Aaron D. Ames from the California Institute of Technology, Pasadena, Calif., USA.

Abstract: Achieving stable hopping has been a hallmark challenge in the field of dynamic legged locomotion. Controlled hopping is notably difficult due to extended periods of underactuation combined with very short ground phases, during which ground interactions must be modulated to regulate the global state. In this work, we explore the use of hybrid nonlinear model predictive control paired with a low-level feedback controller in a multi-rate hierarchy to achieve dynamically stable motions on a novel 3D hopping robot. In order to demonstrate richer behaviors on the manifold of rotations, both the planning and feedback layers must be designed in a geometrically consistent fashion; therefore, we develop the necessary tools to employ Lie group integrators and appropriate feedback controllers. We experimentally demonstrate stable 3D hopping on a novel robot, as well as trajectory tracking and flipping in simulation.
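
The key idea behind a Lie group integrator is to propagate the attitude directly on the rotation manifold rather than in a local coordinate chart, so every intermediate state stays a valid rotation. As a rough, generic illustration (not the controllers from the paper), the sketch below steps an attitude forward with the SO(3) exponential map.

```python
import numpy as np

def hat(w):
    """Map a 3-vector to its skew-symmetric matrix in so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def integrate_attitude(R, omega_body, dt):
    """One Lie group integrator step: R_{k+1} = R_k * exp(hat(omega * dt))."""
    return R @ so3_exp(np.asarray(omega_body) * dt)

# Example: spin at 2 rad/s about the body z-axis for one second in 100 steps.
R = np.eye(3)
for _ in range(100):
    R = integrate_attitude(R, [0.0, 0.0, 2.0], 0.01)
# Orthogonality is preserved up to numerical precision.
print(np.allclose(R.T @ R, np.eye(3), atol=1e-9))
```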

“Fast Untethered Soft Robotic Crawler with Elastic Instability,” by Zechen Xiong, Yufeng Su, and Hod Lipson from Columbia University, New York, N.Y., USA.

Abstract: Inspired by the fast-running gaits of mammals like cheetahs and wolves, we design and fabricate a single-actuated, untethered, compliant robot that is capable of galloping at a speed of 313 mm/s, or 1.56 body lengths per second (BL/s), faster than most reported soft crawlers in both mm/s and BL/s. An in-plane prestressed hair clip mechanism (HCM) made of semi-rigid material, i.e., plastic, serves simultaneously as the supporting chassis, the compliant spine, and the force amplifier of the robot, making the robot simple, rapid, and strong. Through experiments, we find that the HCM robot’s locomotion speed is linearly related to actuation frequency and to substrate friction differences (except on a concrete surface), that tethering slows down the crawler, and that asymmetric actuation creates a new galloping gait. This paper demonstrates the potential of HCM-based soft robots.

“Nature Inspired Machine Intelligence from Animals to Robots,” by Thirawat Chuthong, Wasuthorn Ausrivong, Binggwong Leung, Jettanan Homchanthanakul, Nopparada Mingchinda, and Poramate Manoonpong from the Vidyasirimedhi Institute of Science and Technology (VISTEC), Thailand, and The Maersk Mc-Kinney Moller Institute, University of Southern Denmark.

Abstract: In nature, living creatures show versatile behaviors. They can move on various terrains and perform impressive object manipulation/transportation using their legs. Inspired by their morphologies and control strategies, we have developed bio-inspired robots and adaptive modular neural control. In this video, we demonstrate five of our bio-inspired robots in our robot zoo setup. Inchworm-inspired robots with two electromagnetic feet (Freelander-02 and AVIS) can adaptively crawl and balance on horizontal and vertical metal pipes. Owing to its special design, the Freelander-02 robot can adapt its posture to crawl underneath an obstacle, while the AVIS robot can step over a flange. A millipede-inspired robot with multiple body segments (Freelander-08) can proactively adapt its body joints to efficiently navigate bumpy terrain. A dung beetle-inspired robot (ALPHA) can transport an object by grasping it with its hind legs while walking backward with the remaining legs, as dung beetles do. Finally, an insect-inspired robot (MORF), a hexapod robot platform, demonstrates typical insect-like gaits (slow wave and fast tripod gaits). In a nutshell, we believe this bio-inspired robot zoo demonstrates how the diverse and fascinating abilities of living creatures can serve as inspiration and principles for developing robotics technology capable of achieving multiple robotic functions and solving complex motor control problems in systems with many degrees of freedom.

“AngGo: Shared Indoor Smart Mobility Device,” by Yoon Joung Kwak, Haeun Park, Donghun Kang, Byounghern Kim, Jiyeon Lee, and Hui Sung Lee from the Ulsan National Institute of Science and Technology (UNIST), in Ulsan, South Korea.

Abstract: AngGo is a hands-free shared indoor smart mobility device for public use. It is a personal mobility device suited to moving passengers through large indoor spaces such as convention centers or airports, and the user can keep both hands free while riding it. Unlike existing mobility devices, AngGo is maneuvered with the feet and was designed to be as intuitive as possible. The word “AngGo” is pronounced like a Korean phrase meaning “sit down and move.” Six ToF distance sensors surround AngGo, half in the front and half in the rear; in autonomous mode, AngGo avoids obstacles based on the distance reported by each sensor. IR distance sensors mounted under the footrest measure how far the footrest is pushed forward or backward, and these data are used to control the rotational speed of the motors, so the user controls the speed and direction of AngGo simultaneously. A spring in the footrest generates force feedback, letting the user feel the amount of displacement.

“Creative Robotic Pen-Art System,” by Daeun Song and Young Jun Kim from Ewha Womans University in Seoul, South Korea.

Abstract: Since the Renaissance, artists have created artworks using novel techniques and machines, deviating from conventional methods. The robotic drawing system is one such creative attempt, involving not only an artistic dimension but also scientific problems that need to be solved. Robotic drawing can be viewed as planning the robot’s drawing path that eventually leads to the art form. The robotic pen-art system imposes new challenges: unlike robotic painting, it requires the robot to maintain stable contact with the target drawing surface. This video showcases an autonomous robotic system that creates pen art on an arbitrary canvas surface without restricting its size or shape. Our system converts raster or vector images into piecewise-continuous paths depending on stylistic choices, such as TSP art or stroke-based drawing. The system consists of multiple mobile manipulators that perform stylistic drawing tasks. To create larger pen art, the mobile manipulator setup finds a minimal number of discrete configurations for the mobile platform to cover the ample canvas space. The dual-manipulator setup can generate multi-color pen art using adaptive 3-finger grippers with a pen-tool-change mechanism. We demonstrate that our system can create visually pleasing and complicated pen art on various surfaces.

“I Know What You Want: A ‘Smart Bartender’ System by Interactive Gaze Following,” by Haitao Lin, Zhida Ge, Xiang Li, Yanwei Fu, and Xiangyang Xue from Fudan University, in Shanghai, China.

Abstract: We developed a novel “Smart Bartender” system that can understand the intention of users just from their eye gaze and take corresponding actions. In particular, we believe that a cyber-barman that cannot read our faces is not an intelligent one, so we aim to build a cyber-barman that captures and analyzes customer intention on the fly. Technically, the system enables the user to select a drink simply by staring at it. A robotic arm with a mounted camera then automatically grasps the target bottle and pours the liquid into the cup. To achieve this, we first adopt YOLO to detect candidate drinks. Then GazeNet is used to generate a potential gaze center, and the target bottle is grounded as the detection with the minimum center-to-center distance to that gaze point. Finally, we use object pose estimation and path planning algorithms to guide the robotic arm to grasp the target bottle and execute pouring. Integrated with category-level object pose estimation, our system performs well and generalizes to various unseen bottles and cups not used for training. We believe our system could not only reduce intensive human labor in different service scenarios but also provide users with interactivity and enjoyment.
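
The target-grounding step described above reduces to a simple geometric rule once the detector and gaze estimator have run: pick the detection whose center is closest to the estimated gaze point. A minimal sketch follows; the detection format and distance threshold are assumptions, with the YOLO and GazeNet outputs represented as pre-computed inputs.

```python
import math

def ground_target_bottle(detections, gaze_center, max_dist=150.0):
    """Pick the detected bottle whose bounding-box center is closest to the gaze point.

    detections:  list of (label, (x_min, y_min, x_max, y_max)) in pixels (assumed format)
    gaze_center: (x, y) pixel coordinates from the gaze estimator
    max_dist:    reject selections farther than this many pixels (assumed threshold)
    """
    gx, gy = gaze_center
    best, best_dist = None, float("inf")
    for label, (x0, y0, x1, y1) in detections:
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        dist = math.hypot(cx - gx, cy - gy)
        if dist < best_dist:
            best, best_dist = label, dist
    return best if best_dist <= max_dist else None

# Example with three detected drinks and a gaze estimate near the cola bottle.
dets = [("cola", (100, 200, 160, 400)),
        ("juice", (300, 210, 360, 410)),
        ("water", (500, 205, 560, 405))]
print(ground_target_bottle(dets, gaze_center=(140, 300)))  # -> "cola"
```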

“Towards Aerial Humanoid Robotics: Developing the Jet-Powered Robot iRonCub,” by Daniele Pucci, Gabriele Nava, Fabio Bergonti, Fabio Di Natale, Antonello Paolino, Giuseppe L’erario, Affaf Junaid Ahamad Momin, Hosameldin Awadalla Omer Mohamed, Punith Reddy Vanteddu, and Francesca Bruzzone from the Italian Institute of Technology (IIT), in Genoa, Italy.

Abstract: The current state of robotics technology lacks a platform that combines manipulation, aerial locomotion, and bipedal terrestrial locomotion. We therefore define aerial humanoid robotics as the outcome of platforms with these three capabilities. To implement aerial humanoid robotics on the humanoid robot iCub, we conduct research in several directions. This includes experimental research on jet turbines and co-design, which is necessary to implement aerial humanoid robotics on the real iCub; these activities aim to model and identify the jet turbines. We also investigate flight control of flying humanoid robots using Lyapunov-quadratic-programming-based control algorithms to regulate both the attitude and position of the robot. These algorithms work independently of the number of jet turbines installed on the robot and ensure satisfaction of the physical constraints associated with the jet engines. In addition, we research computational fluid dynamics for aerodynamics modeling: since the aerodynamics of a multi-body system like a flying humanoid robot is complex and leaves little room for closed-form expressions of aerodynamic effects, we use CFD simulations with Ansys to extract a simplified model for control design.

“AMEA Autonomous Electrically Operated One-Axle Mowing Robot,” by Romano Hauser, Matthias Scholer, and Katrin Solveig Lohan from the Eastern Switzerland University of Applied Sciences (OST), in St. Gallen, Switzerland, and Heriot-Watt University, in Edinburgh, Scotland.

Abstract: The goal of this research project (consortium: Altatek GmbH, Eastern Switzerland University of Applied Sciences OST, Faculty of Law of the University of Zurich) was the development of a multifunctional, autonomous single-axle robot with an electric drive. The robot is customized for agricultural applications in mountainous areas with very steep slopes, with the intention of relieving farmers of arduous and safety-critical work. Furthermore, the robot is developed as a modular platform that can be used for forestry, municipal, sports field, and winter/snow applications. The core feature is the patented center-of-gravity control: with a wheel axle that can slide by 800 mm, slopes up to 35° (70 percent) can be driven easily and safe operation without tipping can be ensured. To make the robot more sustainable, it is equipped with electric drives and a 48 V battery. Several sensors are used for navigation in mountainous areas; unlike applications on flat ground, the position and gradient of the robot on the slope must be measured and considered in the path planning. A sensor system that detects possible obstacles, especially humans or animals in the robot’s path, is currently under development.
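
The tipping argument behind the sliding axle is plain statics: on a slope, the downhill projection of the center of gravity must stay over the wheel contact, so the mass has to shift uphill roughly in proportion to the tangent of the slope angle. The sketch below illustrates that relationship only; the CoG height is an invented placeholder, not an AMEA specification.

```python
import math

def required_cog_shift(slope_deg, cog_height_m):
    """Horizontal CoG shift needed to keep its projection over the wheel contact
    on a slope (simple static tipping condition, illustrative only)."""
    return cog_height_m * math.tan(math.radians(slope_deg))

# Assumed CoG height of 0.4 m; the 35-degree slope figure comes from the abstract.
for slope in (15, 25, 35):
    shift = required_cog_shift(slope, cog_height_m=0.4)
    print(f"{slope:>2} deg slope -> shift CoG ~{shift * 1000:.0f} mm uphill")
```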

“Surf Zone Exploration With Crab-Like Legged Robots,” by Yifeng Gong, John Grezmak, Jianfeng Zhou, Nicole Graf, Zhili Gong, Nathan Carmichael, Airel Foss, Glenna Clifton, and Kathryn A. Daltorio from Case Western Reserve University, in Cleveland, Ohio, USA, and the University of Portland, in Portland, Oregon, USA.

Abstract: Surf zones are challenging for walking robots if they cannot anchor to the substrate, especially at the transition between dry sand and waves. Crab-like dactyl designs enable robots to achieve this anchoring behavior while still being lightweight enough to walk on dry sand. Our group has been developing a series of crab-like robots to achieve the transition from walking on underwater surfaces to walking on dry land. Compared with the default forward-moving gait, we find that inward-pulling gaits and sideways walking increase efficiency in granular media. By using soft dactyls, robots can probe the ground to classify substrates, which can help them modify gaits to better suit the environment and recognize hazardous conditions. Dactyls can also be used to securely grasp objects and dig in the substrate for installing cables, searching for buried objects, and collecting sediment samples. To simplify control and actuation, we developed a four-degree-of-freedom Klann mechanism robot, which can climb onto an object and then grasp it. In addition, human interfaces will improve our ability to precisely control the robot for these types of tasks. In particular, the US government has identified munitions retrieval as an environmental priority through its Strategic Environmental Research and Development Program. Our goal is to support these efforts with new robots.

“Learning Exploration Strategies to Solve Real-World Marble Runs,” by Alisa Allaire and Christopher G. Atkeson from the Robotics Institute, Carnegie Mellon University, Pittsburgh, Penn., USA.

Abstract: Tasks involving locally unstable or discontinuous dynamics (such as bifurcations and collisions) remain challenging in robotics, because small variations in the environment can have a significant impact on task outcomes. For such tasks, learning a robust deterministic policy is difficult. We focus on structuring exploration with multiple stochastic policies based on a mixture-of-experts (MoE) policy representation that can be efficiently adapted. The MoE policy is composed of stochastic sub-policies that allow exploration of multiple distinct regions of the action space (or strategies) and a high-level selection policy that guides exploration towards the most promising regions. We developed a robot system to evaluate our approach in a real-world physical problem-solving domain. After training the MoE policy in simulation, online learning in the real world demonstrates efficient adaptation within just a few dozen attempts, with a minimal sim2real gap. Our results confirm that representing multiple strategies promotes efficient adaptation in new environments, and that strategies learned under different dynamics can still provide useful information about where to look for good strategies.
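
As a rough sketch of the policy structure described here (not the authors' trained models), the snippet below pairs a categorical high-level selection policy with Gaussian sub-policies, so exploration can jump between distinct strategies while each strategy explores locally. All dimensions, parameters, and the bandit-style update are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

class MoEPolicy:
    """Mixture-of-experts policy: a selection distribution over stochastic sub-policies."""

    def __init__(self, n_experts, action_dim):
        self.logits = np.zeros(n_experts)                       # high-level selection policy
        self.means = rng.normal(size=(n_experts, action_dim))   # one Gaussian sub-policy each
        self.log_stds = np.full((n_experts, action_dim), -0.5)

    def sample(self):
        """Pick a strategy, then sample an action from that strategy's Gaussian."""
        probs = np.exp(self.logits - self.logits.max())
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        action = self.means[k] + np.exp(self.log_stds[k]) * rng.normal(size=self.means.shape[1])
        return k, action

    def update_selection(self, expert_id, reward, lr=0.1):
        """Very simple bandit-style update: raise the logit of experts that pay off."""
        self.logits[expert_id] += lr * reward

# Example: expert 1 is secretly better; selection shifts toward it over trials.
policy = MoEPolicy(n_experts=3, action_dim=2)
for _ in range(200):
    k, action = policy.sample()
    reward = 1.0 if k == 1 else 0.0   # stand-in for a real task outcome
    policy.update_selection(k, reward)
print(policy.logits)
```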

“Flipbot: Learning Continuous Paper Flipping Via Coarse-To-Fine Exteroceptive-Proprioceptive Exploration,” by Chao Zhao, Chunli Jiang, Junhao Cai, Michael Yu Wang, Hongyu Yu, and Qifeng Chen from the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, and the HKUST - Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen.

Abstract: This paper tackles the task of singulating and grasping paper-like deformable objects, which we refer to as paper flipping. In contrast to manipulating deformable objects that lack compression strength (such as shirts and ropes), minor variations in the physical properties of paper-like deformable objects significantly affect the results, making manipulation highly challenging. Here, we present Flipbot, a novel solution for flipping paper-like deformable objects. Flipbot allows the robot to capture object physical properties by integrating the exteroceptive and proprioceptive perception that is indispensable for manipulating deformable objects. Furthermore, by incorporating a proposed coarse-to-fine exploration process, the system is capable of learning the optimal control parameters for effective paper flipping through proprioceptive and exteroceptive inputs. We deploy our method on a real-world robot with a soft gripper, which learns in a self-supervised manner. The resulting policy demonstrates the effectiveness of Flipbot on paper-flipping tasks in various settings beyond the reach of prior studies, including flipping pages throughout a book and emptying paper sheets from a box. The code is available here: https://robotll.github.io/Flipbot/
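
The coarse-to-fine exploration idea can be illustrated independently of the paper's perception stack: evaluate a coarse grid of control parameters, then re-sample a finer grid around the best coarse candidate. The objective below is a stand-in for the robot's self-supervised flipping success signal, and the parameter range is invented for the example.

```python
import numpy as np

def coarse_to_fine_search(objective, low, high, coarse_n=5, fine_n=11, shrink=0.25):
    """Two-stage 1D parameter search: coarse grid, then a finer grid around the best point."""
    coarse = np.linspace(low, high, coarse_n)
    best = coarse[np.argmax([objective(p) for p in coarse])]
    # Refine within a shrunken window centered on the best coarse candidate.
    half = (high - low) * shrink / 2.0
    fine = np.linspace(max(low, best - half), min(high, best + half), fine_n)
    return fine[np.argmax([objective(p) for p in fine])]

# Stand-in objective: success peaks at an (unknown) gripper insertion depth of 7.3 mm.
objective = lambda depth_mm: -(depth_mm - 7.3) ** 2
print(coarse_to_fine_search(objective, low=0.0, high=20.0))  # lands close to 7.3
```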

“Croche-Matic: A Robot for Crocheting 3D Cylindrical Geometry,” by Gabriella Perry, Jose Luis Garcia del Castillo y Lopez, and Nathan Melenbrink from Harvard University, in Cambridge, Mass., USA.

Abstract: Crochet is a textile craft that has resisted mechanization and industrialization, except for a select number of one-off crochet machines capable of producing only a limited subset of common crochet stitches. Crochet machines are not used in the textile industry, yet mass-produced crochet objects and clothes sold in stores like Target and Zara are almost certainly the products of crochet sweatshops. The popularity of crochet and the existence of crochet products in major chain stores show that there is both a clear demand for this craft and a need for it to be produced in a more ethical way. In this paper, we present Croche-Matic, a radial crochet machine for generating three-dimensional cylindrical geometry. The Croche-Matic is designed around the Magic Ring technique, a method for hand-crocheting 3D cylindrical objects. The machine consists of nine mechanical axes that work in sequence to complete different types of crochet stitches, and it includes a sensor component for measuring and regulating yarn tension within the mechanical system. Croche-Matic can complete the four main stitches used in the Magic Ring technique. It has a success rate of 50.7 percent with single crochet stitches and has demonstrated an ability to create three-dimensional objects.

“SOPHIE: SOft and Flexible Aerial Vehicle for PHysical Interaction with the Environment,” by F. Ruiz, B. C. Arrue, and A. Ollero from the GRVC Robotics Lab of Seville, Spain.

Abstract: This letter presents the first design of a soft and lightweight UAV, entirely 3D-printed in flexible filament. The drone’s flexible arms are equipped with a tendon-actuated bending system, which is used for applications that require physical interaction with the environment. The flexibility of the UAV can be controlled during the additive manufacturing process by adjusting the distribution of the infill rate ρTPU. However, the increase in flexibility brings difficulties in controlling the UAV, as well as structural, aerodynamic, and aeroelastic effects. This article provides insight into the dynamics of the system and validates the flyability of the vehicle for infill densities as low as 6 percent. Within this range, arm deformations can be considered quasi-static, so the autopilot is fed back through a static arm-deflection model. At lower densities, strongly nonlinear elastic dynamics appear, which makes modeling complex, and we suggest switching to data-based approaches. Moreover, this work demonstrates the ability of the soft UAV to perform full-body perching, specifically landing and stabilizing on pipelines and irregular surfaces without the need for an auxiliary system.

“Reconfigurable Drone System for Transportation of Parcels with Variable Mass and Size,” by Fabrizio Schiano, Przemyslaw Mariusz Kornatowski, Leonardo Cencetti, and Dario Floreano from École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and Leonardo S.p.A., Leonardo Labs, Rome, Italy.

Abstract: Cargo drones are designed to carry payloads with a predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and masses. We also propose a method for the automatic generation of drone configurations and suitable parameters for the flight controller. The parcel becomes the drone’s body, to which several individual propulsion modules are attached. We demonstrate the use of the reconfigurable hardware and the accompanying software by transporting parcels of different masses and sizes that require different numbers and placements of propulsion modules. The experiments are conducted indoors (with a motion capture system) and outdoors (with an RTK-GNSS sensor). The proposed design represents a cheaper and more versatile alternative to solutions involving several drones for parcel transportation.
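
One concrete piece of automatically generating a configuration for a parcel is control allocation: once propulsion modules are attached at known positions, the map from per-module thrust to total force and torques can be built automatically and inverted for the flight controller. The sketch below does this for vertically mounted thrusters via a least-squares pseudoinverse; the module layout, drag coefficient, and parcel mass are placeholder assumptions, not values from the paper.

```python
import numpy as np

def allocation_matrix(module_xy, spin_dirs, k_tau=0.02):
    """Build the map from per-module thrust to [total thrust, roll, pitch, yaw].

    module_xy: (N, 2) positions of vertically mounted propulsion modules on the parcel [m]
    spin_dirs: length-N array of +/-1 propeller spin directions (for yaw drag torque)
    k_tau:     drag-torque-per-thrust coefficient (assumed)
    """
    x, y = module_xy[:, 0], module_xy[:, 1]
    return np.vstack([
        np.ones(len(x)),        # total vertical thrust
        y,                      # roll torque arm
        -x,                     # pitch torque arm
        k_tau * spin_dirs,      # yaw from propeller drag
    ])

# Four modules on the corners of a 0.6 m x 0.4 m parcel (assumed layout).
modules = np.array([[0.3, 0.2], [-0.3, 0.2], [-0.3, -0.2], [0.3, -0.2]])
spins = np.array([1, -1, 1, -1])
A = allocation_matrix(modules, spins)

# Hover for a 3 kg parcel: desired wrench is [m*g, 0, 0, 0]; solve for per-module thrusts.
wrench = np.array([3.0 * 9.81, 0.0, 0.0, 0.0])
thrusts = np.linalg.pinv(A) @ wrench
print(thrusts)  # roughly equal thrust per module for a symmetric layout
```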


This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends, do they know where they are too? This is important. So important, in fact, that some would say that multi-robot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023) in London. Congratulations!

Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to award the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.

“The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold-standard for distributed multi-robot SLAM.”
—Kevin Lynch, editor-in-chief, IEEE Transactions on Robotics

Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with each other to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important. The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations.
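
To make the coordination pattern concrete, here is a deliberately toy illustration of opportunistic map sharing, not Kimera-Multi's actual implementation (which is open source): each robot keeps a local map, and whenever two robots come within an assumed communication range they exchange and merge their keyframes. In a real distributed SLAM system, the merge step would also involve inter-robot loop closures and distributed pose graph optimization to keep the merged map globally consistent.

```python
import itertools
import numpy as np

COMM_RANGE_M = 30.0  # assumed radio range for this toy example

class ToyRobot:
    """Toy stand-in for a robot carrying a local map (keyframe id -> 2D position)."""

    def __init__(self, name, position):
        self.name = name
        self.position = np.asarray(position, dtype=float)
        self.local_map = {}

    def add_keyframe(self, kf_id, xy):
        self.local_map[kf_id] = np.asarray(xy, dtype=float)

    def exchange_with(self, other):
        """Merge the other robot's keyframes into our map, and vice versa."""
        merged = {**self.local_map, **other.local_map}
        self.local_map, other.local_map = dict(merged), dict(merged)

def rendezvous_step(robots):
    """Whenever two robots are within communication range, they share map data."""
    for a, b in itertools.combinations(robots, 2):
        if np.linalg.norm(a.position - b.position) <= COMM_RANGE_M:
            a.exchange_with(b)

robots = [ToyRobot("r1", (0, 0)), ToyRobot("r2", (20, 0)), ToyRobot("r3", (500, 0))]
robots[0].add_keyframe("r1/0", (0, 0))
robots[1].add_keyframe("r2/0", (20, 0))
rendezvous_step(robots)
print(sorted(robots[0].local_map))  # r1 and r2 have merged maps; r3 is out of range
```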

Since filming the above video, the researchers have done real-world tests with Kimera-Multi. Below is an example of the map generated by three robots as they travel a total of more than two kilometers. You can easily see how the accuracy of the map improves significantly as the robots talk to each other:

More details and code are available on GitHub.

T-RO also selected some excellent Honorable Mentions for 2022, which are:

Stabilization of Complementarity Systems via Contact-Aware Controllers, by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa

Autonomous Cave Surveying With an Aerial Robot, by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael

Prehensile Manipulation Planning: Modeling, Algorithms and Implementation, by Florent Lamiraux and Joseph Mirabel

Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control, by Abdullah Nazir, Pu Xu, and Jungwon Seo

Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications, by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang




The implementation of anthropomorphic features in appearance and framing is widely supposed to increase empathy towards robots. However, recent research has mainly used tasks that are rather atypical for daily human-robot interaction, such as sacrificing or destroying robots. The scope of the current study was to investigate the influence of anthropomorphism by design on empathy and empathic behavior in a more realistic, collaborative scenario. In this online experiment, participants collaborated with either an anthropomorphic or a technical-looking robot and received either an anthropomorphic or a technical description of the respective robot. After task completion, we investigated situational empathy by presenting a choice scenario in which participants had to decide whether to act empathically towards the robot (sign a petition or a guestbook for the robot) or non-empathically (leave the experiment). Subsequently, the perception of and empathy towards the robot were assessed. The results revealed no significant influence of anthropomorphism on empathy or participants’ empathic behavior. However, an exploratory follow-up analysis indicates that the individual tendency to anthropomorphize might be crucial for empathy. This result strongly supports the importance of considering individual differences in human-robot interaction. Based on the exploratory analysis, we propose six items to be further investigated as an empathy questionnaire for HRI.

Shedding tears is a uniquely human expression of emotion. Human tears have an emotional signalling function that conveys sadness and a social signalling function that elicits support intention from others. The present study aimed to clarify whether the tears of robots have the same emotional and social signalling functions as human tears, using methods employed in previous studies on human tears. Tear processing was applied to robot pictures to create pictures with and without tears, which were used as visual stimuli. In Study 1, participants viewed pictures of robots with and without tears and rated the intensity of the emotion experienced by the robot in the picture. The results showed that adding tears to a robot’s picture significantly increased the rated intensity of sadness. Study 2 measured support intentions towards a robot by presenting a robot’s picture together with a scenario. The results showed that adding tears to the robot’s picture also increased support intentions, indicating that robot tears have emotional and social signalling functions similar to those of human tears.

Image-based robot action planning is becoming an active area of research owing to recent advances in deep learning. To evaluate and execute robot actions, recently proposed approaches require estimating the optimal cost-minimizing path, such as the shortest distance or time, between two states. To estimate the cost, parametric models consisting of deep neural networks are widely used, but such models require large amounts of correctly labeled data to estimate the cost accurately. In real robotic tasks, collecting such data is not always feasible, and the robot may have to collect it itself. In this study, we empirically show that when a model is trained with data autonomously collected by a robot, the estimates of such parametric models can be too inaccurate to perform the task: the higher the maximum predicted distance, the more inaccurate the estimate, and the robot fails to navigate the environment. To overcome this issue, we propose an alternative metric, “task achievability” (TA), defined as the probability that a robot will reach a goal state within a specified number of timesteps. Unlike training an optimal cost estimator, TA can be trained on both optimal and non-optimal trajectories in the dataset, which leads to a stable estimate. We demonstrate the effectiveness of TA through robot navigation experiments in an environment resembling a real living room, and show that TA-based navigation succeeds in guiding the robot to different target positions even when conventional cost-estimator-based navigation fails.
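
The appeal of the TA formulation is that its training labels can come from any trajectory, successful or not: each (state, goal) pair simply gets a binary label for whether the goal was reached within the horizon. Below is a minimal sketch of that labeling with a logistic-regression stand-in for the deep estimator used in the paper; the 2D state representation, horizon, and tolerance are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ta_labels(trajectory, goal, horizon, reach_tol=0.5):
    """Label every state in a trajectory with 1 if the goal is reached within `horizon` steps."""
    traj = np.asarray(trajectory, dtype=float)
    X, y = [], []
    for t, state in enumerate(traj):
        future = traj[t:t + horizon + 1]
        reached = np.any(np.linalg.norm(future - goal, axis=1) <= reach_tol)
        X.append(np.concatenate([state, goal]))
        y.append(int(reached))
    return X, y

# Two toy 2D trajectories: one reaches the goal, one wanders off. Both are usable for training.
goal = np.array([5.0, 5.0])
good = [(i * 0.5, i * 0.5) for i in range(12)]
bad = [(i * 0.5, -i * 0.2) for i in range(12)]
X, y = [], []
for traj in (good, bad):
    Xi, yi = ta_labels(traj, goal, horizon=6)
    X += Xi
    y += yi

# Stand-in estimator; the paper uses a deep network for this probability.
model = LogisticRegression().fit(np.array(X), np.array(y))
query = np.concatenate([[2.0, 2.0], goal])
print(model.predict_proba([query])[0, 1])  # estimated task achievability from (2, 2)
```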

In this paper, the problem of attitude estimation for a quadcopter equipped with multi-rate camera and gyroscope sensors is addressed through an extension of the sampling importance resampling (SIR) particle filter (PF). Attitude measurement sensors such as cameras usually suffer from a slow sampling rate and processing time delay compared with inertial sensors such as gyroscopes. A discretized attitude kinematics model in Euler angles is employed, in which the noisy gyroscope measurements are treated as the model input, leading to a stochastic, uncertain system model. A multi-rate delayed PF is then proposed: when no camera measurement is available, only the sampling (propagation) step is performed, and the delayed camera measurements, when they arrive, are used for weight computation and resampling. Finally, the efficiency of the proposed method is demonstrated through both numerical simulation and experiments on a DJI Tello quadcopter. The images captured by the camera are processed using ORB feature extraction and homography estimation in Python-OpenCV to compute the rotation matrix from the Tello’s image frames.
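
The multi-rate structure is easy to see in a stripped-down 1D example: propagate particles with the gyro at every step, and only when a camera measurement arrives, reweight and resample. This sketch is a generic SIR filter on a single Euler angle, not the authors' implementation; noise levels and rates are invented, and for simplicity the camera delay itself is not modeled (the paper weights particles against the state at the delayed measurement time).

```python
import numpy as np

rng = np.random.default_rng(1)

N = 500                 # number of particles
dt = 0.01               # gyro rate: 100 Hz
gyro_noise = 0.02       # rad/s (assumed)
cam_noise = 0.01        # rad (assumed)
cam_every = 20          # camera updates at 5 Hz

particles = np.zeros(N)  # particle estimates of a single Euler angle [rad]
true_angle = 0.0

for k in range(500):
    omega = 0.5 * np.sin(0.5 * k * dt)            # true angular rate
    true_angle += omega * dt
    # Sampling (propagation) step runs at the gyro rate, driven by the noisy gyro input.
    gyro_meas = omega + rng.normal(0.0, gyro_noise)
    particles += gyro_meas * dt + rng.normal(0.0, gyro_noise * dt, size=N)

    if k % cam_every == 0 and k > 0:
        # Camera measurement arrives: use it for weight computation and SIR resampling.
        cam_meas = true_angle + rng.normal(0.0, cam_noise)
        weights = np.exp(-0.5 * ((cam_meas - particles) / cam_noise) ** 2)
        weights /= weights.sum()
        particles = particles[rng.choice(N, size=N, p=weights)]

print(f"final error: {abs(particles.mean() - true_angle):.4f} rad")
```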



I love plants. I am not great with plants. I have accepted this fact and have therefore entrusted the lives of all of the plants in my care to robots. These aren’t fancy robots: they’re automated hydroponic systems that take care of water and nutrients and (fake) sunlight, and they do an amazing job. My plants are almost certainly happier this way, and therefore I don’t have to feel guilty about my hands-off approach. This is especially true now that there are data from roboticists at UC Berkeley to back up the assertion that robotic gardeners can do just as good a job as even the best human gardeners can. In fact, in some metrics, the robots can do even better.

In 1950, Alan Turing considered the question “Can Machines Think?” and proposed a test based on comparing human vs. machine ability to answer questions. In this paper, we consider the question “Can Machines Garden?” based on comparing human vs. machine ability to tend a real polyculture garden.

UC Berkeley has a long history of robotic gardens, stretching back to at least the early 90s. And (as I have experienced) you can totally tend a garden with a robot. But the real question is this: Can you usefully tend a garden with a robot in a way that is as effective as a human tending that same garden? Time for some SCIENCE!

AlphaGarden is a combination of a commercial gantry robot farming system and UC Berkeley’s AlphaGardenSim, which tells the robot what to do to maximize plant health and growth. The system includes a high-resolution camera and soil moisture sensors for monitoring plant growth, and everything is (mostly) completely automated, from seed planting to drip irrigation to pruning. The garden itself is somewhat complicated, since it’s a polyculture garden (meaning it mixes many different kinds of plants). Polyculture farming mimics how plants grow in nature; its benefits include pest resilience, decreased fertilization needs, and improved soil health. But since different plants have different needs and grow in different ways at different rates, polyculture farming is more labor-intensive than monoculture, which is how most large-scale farming happens.

To test AlphaGarden’s performance, the UC Berkeley researchers planted two side-by-side farming plots with the same seeds at the same time. There were 32 plants in total, including kale, borage, swiss chard, mustard greens, turnips, arugula, green lettuce, cilantro, and red lettuce. Over the course of two months, AlphaGarden tended its plot full time, while professional horticulturalists tended the plot next door. Then, the experiment was repeated, except that AlphaGarden was allowed to stagger the seed planting to give slower-growing plants a head start. A human did have to help the robot out with pruning from time to time, but just to follow the robot’s directions when the pruning tool couldn’t quite do what it wanted to do.

The robot and the professional human both achieved similar results in their garden plots. Image: UC Berkeley

The results of these tests showed that the robot was able to keep up with the professional human in terms of both overall plant diversity and coverage. In other words, stuff grew just as well when tended by the robot as it did when tended by a professional human. The biggest difference is that the robot managed to keep up while using 44 percent less water: several hundred liters less over two months.

“AlphaGarden has thus passed the Turing Test for gardening,” the researchers say. They also say that “much remains to be done,” mostly by improving the AlphaGardenSim plant growth simulator to further optimize water use, although there are other variables to explore like artificial light sources. The future here is a little uncertain, though—the hardware is pretty expensive, and human labor is (relatively) cheap. Expert human knowledge is not cheap, of course. But for those of us who are very much non-experts, I could easily imagine mounting some cameras above my garden and installing some sensors and then just following the orders of the simulator about where and when and how much to water and prune. I’m always happy to donate my labor to a robot that knows what it’s doing better than I do.

“Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists,” by Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen Solowjow, and Ken Goldberg from UC Berkeley, will be presented at ICRA 2023 in London.



