IEEE Spectrum Robotics



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

The industry standard for dangerous and routine autonomous inspections just got better, now with a brand-new set of features and hardware.

[ Boston Dynamics ]

For too long, dogs and vacuums have existed in a state of conflict. But Roomba robots are finally ready to make peace. To celebrate Pet Appreciation Week (4–10 June), iRobot is introducing T.R.E.A.T., an experimental prototype engineered to dispense dog treats on demand. Now dogs and vacuums can finally be friends.

[ T.R.E.A.T. ]

Legged robots have better adaptability in complex terrain, and wheeled robots move faster on flat surfaces. Unitree B-W, the ultimate speed all-rounder, combines the advantages of both types of robots and continues to bring new exploration and change to the industry.

[ Unitree ]

In this demonstration, Digit starts out knowing there is trash on the floor and that bins are used for recycling/trash. We use a voice command “Clean up this mess” to have Digit help us. Digit hears the command and uses a large language model to interpret how best to achieve the stated goal with its existing physical capabilities. At no point is Digit instructed on how to clean or what a mess is. This is an example of bridging the conversational nature of ChatGPT and other LLMs to generate real-world physical action.

[ Agility ]

Battery endurance represents a key challenge for long-term autonomy and long-range operations, especially in the case of aerial robots. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that combines a portable ground station with a flexible, lightweight charging tether and is capable of universal, highly efficient, and robust charging.

[ ARPL NYU ]

BruBotics secured a place in the Guinness World Records! Together with the visitors of the Nerdland Festival, they created the longest chain of robots ever, which also respond to light. Vrije Universiteit Brussel/Imec professor Bram Vanderborght and his team, consisting of Ellen Roels, Gabriël Van De Velde, Hendrik Cools, and Niklas Steenackers, have worked hard on the project in recent months. They set their record with a chain of 334 self-designed robots. The BruBotics research group at VUB aims to bring robots closer to people with their record. “Our main objective was to introduce participants to robots in an interactive way,” says Vanderborght. “And we are proud that we have succeeded.”

[ VUB ]

Based in Italy, Comau is a leading robot manufacturer and global systems integrator. The company has been working with Intrinsic over the past several years to validate our platform technology and our developer product Flowstate through real-world use cases. In a new video case study, we go behind the scenes to explore and hear firsthand how Comau and Intrinsic are working together. Comau is using Intrinsic Flowstate to assemble the rigid components of a supermodule for a plug-in hybrid electric vehicle (PHEV).

[ Intrinsic ]

Thanks, Scott!

GITAI has achieved a significant milestone with the successful demonstration of a GITAI inchworm-type robotic arm equipped with a tool-changer function and a GITAI lunar robotic rover in a simulated regolith chamber containing 7 tons of regolith simulant (LHS-1E).

[ GITAI ]

Uhh, pinch points...?

[ Deep Robotics ]

Detect, fetch, and collect. A seemingly easy task is being tested to find the best strategy to collect samples on the Martian surface, some 290 million kilometers away from home. The Sample Transfer Arm will need to load the tubes from the Martian surface for delivery to Earth. ESA’s robotic arm will collect them from the Perseverance rover, and possibly others dropped by sample-recovery helicopters as a backup.

[ ESA ]

Wing’s AutoLoader for curbside pickup.

[ Wing ]

MIT Mechanical Engineering students in Professor Sangbae Kim’s class explore why certain physical traits have evolved in animals in the natural world. Then they extract those useful principles that are applicable to robotic systems to solve such challenges as manipulation and locomotion in novel and interesting ways.

[ MIT ]

I get that it’s slightly annoying that robot vacuums generally cannot clean stairs, but I’m not sure that it’s a problem actually worth solving.

https://gizmodo.com/migo-ascender-first-robot-vacu...

Also, the actual existence of this thing is super sketchy, and I wouldn’t give them any money just yet.

[ Migo ] via [ Gizmodo ]

The fastest, tiniest, mouse-iest competition for how well robots can stick to smooth surfaces.

[ Veritasium ]

Art and language are pinnacles of human expressive achievement. This panel, part of the Stanford HAI Spring Symposium on 24 May 2023, offered conversations between artists and technologists about intersections in their work. Speakers included Ken Goldberg, professor of industrial engineering and operations research, University of California, Berkeley, and Sydney Skybetter, deputy dean of the College for Curriculum and Co-Curriculum and senior lecturer in theater arts and performance studies, Brown University. Moderated by Catie Cuan, Stanford University.

[ Stanford HAI ]

An ICRA 2023 Plenary from 90-year-old living legend Jasia Reichardt (who introduced the term “uncanny valley” to English-speaking readers in 1978), linking robots with Turing, Fellini, Asimov, and Buddhism.

[ ICRA 2023 ]

Thanks, Ken!



Inspired by dog-agility courses, a team of scientists from Google DeepMind has developed a robot-agility course called Barkour to test the abilities of four-legged robots.

Since the 1970s, dogs have been trained to nimbly jump through hoops, scale inclines, and weave between poles in order to demonstrate agility. To take home ribbons at these competitions, dogs must have not only speed but keen reflexes and attention to detail. These courses also set a benchmark for how agility should be measured across breeds, which is something that Atil Iscen—a Google DeepMind scientist in Denver—says is lacking in the world of four-legged robots.

Despite great developments in the past decade, including robots like MIT’s Mini Cheetah and Boston Dynamics’ Spot, which have shown just how animal-like robot movement can be, a lack of standardized tasks for these types of robots has made it difficult to compare their progress, Iscen says.

Video: “Quadruped Obstacle Course Provides New Robot Benchmark”

“Unlike previous benchmarks developed for legged robots, Barkour contains a diverse set of obstacles that requires a combination of different types of behaviors such as precise walking, climbing, and jumping,” Iscen says. “Moreover, our timing-based metric to reward faster behavior encourages researchers to push the boundaries of speed while maintaining requirements for precision and diversity of motion.”

For their reduced-size agility course—roughly 25 square meters, compared with the up to 743 square meters used for traditional courses—Iscen and colleagues chose four obstacles from traditional dog-agility courses: a pause table, weave poles, climbing an A-frame, and a jump.

The Barkour robotic-quadruped benchmark course uses four obstacles from traditional dog-agility courses and standardizes a set of performance metrics around subjects’ timings on the course. Google

“We picked these obstacles to test multiple axes of agility, including speed, acceleration, and balance,” he said. “It is also possible to customize the course further by extending it to contain other types of obstacles within a larger area.”

As in dog-agility competitions, robots that enter this course are deducted points for failing or missing an obstacle, as well as for exceeding the course’s time limit of roughly 11 seconds. To see how difficult their course was, the DeepMind team developed two different learning approaches to the course: a specialist approach that trained on each type of skill needed for the course—for example, jumping or slope climbing—and a generalist approach that trained by studying simulations run using the specialist approach.
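To make the scoring concrete, here is a minimal sketch of how a Barkour-style agility score could be computed, assuming a fixed deduction per failed obstacle and a linear overtime penalty. The weights and the roughly 11-second allotted time below are illustrative assumptions, not the exact formula from the paper.

```python
# Illustrative Barkour-style score (assumed weights, not the authors' formula):
# start from a perfect score, deduct per failed or skipped obstacle, and
# deduct further for every second beyond the allotted course time.

def agility_score(obstacles_failed: int, finish_time_s: float,
                  allotted_time_s: float = 11.0,
                  obstacle_penalty: float = 0.1,
                  overtime_penalty_per_s: float = 0.01) -> float:
    """Return a score in [0, 1]; 1.0 means a clean, on-time run."""
    score = 1.0
    score -= obstacle_penalty * obstacles_failed
    overtime = max(0.0, finish_time_s - allotted_time_s)
    score -= overtime_penalty_per_s * overtime
    return max(0.0, score)

# Example: a clean run that takes about 25 seconds still loses points for overtime.
print(agility_score(obstacles_failed=0, finish_time_s=25.0))  # 0.86
```

Under a metric like this, a robot (or dog) that clears every obstacle can still score below 1.0 simply by being slow, which is the behavior the timing-based reward is meant to discourage.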

After training four-legged robots in both of these different styles, the team released them onto the course and found that robots trained with the specialist approach slightly edged out those trained with the generalist approach. The specialists completed the course in about 25 seconds, while the generalists took closer to 27 seconds. However, robots trained with both approaches not only exceeded the course time limit but were also surpassed by two small dogs—a Pomeranian/Chihuahua mix and a Dachshund—that completed the course in less than 10 seconds.

Here, an actual dog [left] and a robotic quadruped [right] ascend and then begin their descent on the Barkour course’s A-frame challenge. Google

“There is still a big gap in agility between robots and their animal counterparts, as demonstrated in this benchmark,” the team wrote in their conclusion.

While the robots’ performance may have fallen short of expectations, the team writes that this is actually a positive because it means there’s still room for growth and improvement. In the future, Iscen hopes that the easy reproducibility of the Barkour course will make it an attractive benchmark to be employed across the field.


“We proactively considered reproducibility of the benchmark and kept the cost of materials and footprint to be low,” Iscen says. “We would love to see Barkour setups pop up in other labs and we would be happy to share our lessons learned about building it, if other research teams interested in the work can reach out to us. We would like to see other labs adopting this benchmark so that the entire community can tackle this challenging problem together.”

As for the DeepMind team, Iscen says they’re also interested in exploring another aspect of dog-agility courses in their future work: the role of human partners.

“At the surface, (real) dog-agility competitions appear to be only about the dog’s performance. However, a lot comes down to the fleeting moments of communication between the dog and its handler,” he explains. “In this context, we are eager to explore human-robot interactions, such as how a handler can work with a legged robot to guide it swiftly through a new obstacle course.”

A paper describing DeepMind’s Barkour course was published on the arXiv preprint server in May.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week, we’re featuring a special selection of videos from ICRA 2023! We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT, MICHIGAN, USA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS, USA

Enjoy today’s videos!

“Autonomous Drifting With 3 Minutes of Data Via Learned Tire Models,” by Franck Djeumou, Jonathan Y.M. Goh, Ufuk Topcu, and Avinash Balachandran from University of Texas at Austin, USA, and Toyota Research Institute, Los Altos, Calif., USA.

Abstract: Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data – less than three minutes – is sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45 mph. Comparisons with the benchmark model show a 4x improvement in tracking performance, smoother control inputs, and faster and more consistent computation time.

“TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor With Tiltable Propulsion Units,” by Xuchen Liu, Minghao Dou, Dongyue Huang, Songqun Gao, Ruixin Yan, Biao Wang, Jinqiang Cui, Qinyuan Ren, Lihua Dou, Zhi Gao, Jie Chen, and Ben M. Chen from Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai, China; Chinese University of Hong Kong, Hong Kong, China; Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China; Peng Cheng Laboratory, Shenzhen, Guangdong, China; Zhejiang University, Hangzhou, Zhejiang, China; Beijing Institute of Technology, Beijing, China; and Wuhan University, Wuhan, Hubei, China.

Abstract: Aerial-aquatic vehicles are capable to move in the two most dominant fluids, making them more promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing the underwater maneuverability. This paper presents a quadrotor prototype of this concept and the design details and realization in practice.

“Towards Safe Landing of Falling Quadruped Robots Using a 3-DoF Morphable Inertial Tail,” by Yunxi Tang, Jiajun An, Xiangyu Chu, Shengzhi Wang, Ching Yan Wong, and K. W. Samuel Au from The Chinese University of Hong Kong, Hong Kong, and Multiscale Medical Robotics Centre, Hong Kong.

Abstract: Falling cat problem is well-known where cats show their super aerial reorientation capability and can land safely. For their robotic counterparts, a similar falling quadruped robot problem, has not been fully addressed, although achieving safe landing as the cats has been increasingly investigated. Unlike imposing the burden on landing control, we approach to safe landing of falling quadruped robots by effective flight phase control. Different from existing work like swinging legs and attaching reaction wheels or simple tails, we propose to deploy a 3-DoF morphable inertial tail on a medium-size quadruped robot. In the flight phase, the tail with its maximum length can self-right the body orientation in 3D effectively; before touch-down, the tail length can be retracted to about 1/4 of its maximum for impressing the tail’s side-effect on landing. To enable aerial reorientation for safe landing in the quadruped robots, we design a control architecture, which is verified in a high-fidelity physics simulation environment with different initial conditions. Experimental results on a customized flight-phase test platform with comparable inertial properties are provided and show the tail’s effectiveness on 3D body reorientation and its fast retractability before touch-down. An initial falling quadruped robot experiment is shown, where the robot Unitree A1 with the 3-DoF tail can land safely subject to non-negligible initial body angles.

“Nonlinear Model Predictive Control of a 3D Hopping Robot: Leveraging Lie Group Integrators for Dynamically Stable Behaviors,” by Noel Csomay-Shanklin, Victor D. Dorobantu, and Aaron D. Ames from California Institute of Technology, Pasadena, Calif., USA.

Abstract: Achieving stable hopping has been a hallmark challenge in the field of dynamic legged locomotion. Controlled hopping is notably difficult due to extended periods of underactuation combined with very short ground phases wherein ground interactions must be modulated to regulate global state. In this work, we explore the use of hybrid nonlinear model predictive control paired with a low-level feedback controller in a multi-rate hierarchy to achieve dynamically stable motions on a novel 3D hopping robot. In order to demonstrate richer behaviors on the manifold of rotations, both the planning and feedback layers must be designed in a geometrically consistent fashion; therefore, we develop the necessary tools to employ Lie group integrators and appropriate feedback controllers. We experimentally demonstrate stable 3D hopping on a novel robot, as well as trajectory tracking and flipping in simulation.

“Fast Untethered Soft Robotic Crawler with Elastic Instability,” by Zechen Xiong, Yufeng Su, and Hod Lipson from Columbia University, New York, NY, USA.

Abstract: Enlightened by the fast-running gait of mammals like cheetahs and wolves, we design and fabricate a single-actuated untethered compliant robot that is capable of galloping at a speed of 313 mm/s or 1.56 body length per second (BL/s), faster than most reported soft crawlers in mm/s and BL/s. An in-plane prestressed hair clip mechanism (HCM) made up of semi-rigid materials, i.e. plastics are used as the supporting chassis, the compliant spine, and the force amplifier of the robot at the same time, enabling the robot to be simple, rapid, and strong. With experiments, we find that the HCM robotic locomotion speed is linearly related to actuation frequencies and substrate friction differences except for concrete surface, that tethering slows down the crawler, and that asymmetric actuation creates a new galloping gait. This paper demonstrates the potential of HCM-based soft robots.

“Nature Inspired Machine Intelligence from Animals to Robots,” by Thirawat Chuthong, Wasuthorn Ausrivong, Binggwong Leung, Jettanan Homchanthanakul, Nopparada Mingchinda, and Poramate Manoonpong from Vidyasirimedhi Institute of Science and Technology (VISTEC), Thailand, and The Maersk Mc-Kinney Moller Institute, University of Southern Denmark.

Abstract: In nature, living creatures show versatile behaviors. They can move on various terrains and perform impressive object manipulation/transportation using their legs. Inspired by their morphologies and control strategies, we have developed bio-inspired robots and adaptive modular neural control. In this video, we demonstrate our five bio-inspired robots in our robot zoo setup. Inchworm-inspired robots with two electromagnetic feet (Freelander-02 and AVIS) can adaptively crawl and balance on horizontal and vertical metal pipes. With special design, the Freelander-02 robot can adapt its posture to crawl underneath an obstacle, while the AVIS robot can step over a flange. A millipede-inspired robot with multiple body segments (Freelander-08) can proactively adapt its body joints to efficiently navigate on bump terrain. A dung beetle-inspired robot (ALPHA) can transport an object by grasping the object with its hind legs and at the same time walk backward with the remaining legs like dung beetles. Finally, an insect-inspired robot (MORF), which is a hexapod robot platform, demonstrates typical insect-like gaits (slow wave and fast tripod gaits). In a nutshell, we believe that this bio-inspired robot zoo demonstrates how the diverse and fascinating abilities of living creatures can serve as inspiration and principles for developing robotics technology capable of achieving multiple robotic functions and solving complex motor control problems in systems with many degrees of freedom.

“AngGo: Shared Indoor Smart Mobility Device,” by Yoon Joung Kwak, Haeun Park, Donghun Kang, Byounghern Kim, Jiyeon Lee, and Hui Sung Lee from Ulsan National Institute of Science and Technology (UNIST), in Ulsan, South Korea.

Abstract: AngGo is a hands-free shared indoor smart mobility device for public use. AngGo is a personal mobility device that is suitable for the movement of passengers in huge indoor spaces such as convention centers or airports. The user can use both hands freely while riding the AngGo. Unlike existing mobility devices, the mobility device that can be maneuvered using the feet was designed to be as intuitive as possible. The word “AngGo” is pronounced like a Korean word meaning “sit down and move.” There are 6 ToF distance sensors around AngGo. Half of them are in the front part and the other half are in the rear part. In the autonomous mode, AngGo avoids obstacles based on the distance from each sensor. IR distance sensors are mounted under the footrest to measure the extent to which the footrest is moved forward or backward, and these data are used to control the rotational speed of motors. The user can control the speed and the direction of AngGo simultaneously. The spring in the footrest generates force feedback, so the user can recognize the amount of variation.

“Creative Robotic Pen-Art System,” by Daeun Song and Young Jun Kim from Ewha Womans University in Seoul, South Korea.

Abstract: Since the Renaissance, artists have created artworks using novel techniques and machines, deviating from conventional methods. The robotic drawing system is one of such creative attempts that involves not only the artistic nature but also scientific problems that need to be solved. Robotic drawing problems can be viewed as planning the robot’s drawing path that eventually leads to the art form. The robotic pen-art system imposes new challenges, unlike robotic painting, requiring the robot to maintain stable contact with the target drawing surface. This video showcases an autonomous robotic system that creates pen art on an arbitrary canvas surface without restricting its size or shape. Our system converts raster or vector images into piecewise-continuous paths depending on stylistic choices, such as TSP art or stroke-based drawing. Our system consists of multiple manipulators with mobility and performs stylistic drawing tasks. In order to create a more extensive pen art, the mobile manipulator setup finds a minimal number of discrete configurations for the mobile platform to cover the ample canvas space. The dual manipulator setup can generate multi-color pen art using adaptive 3-finger grippers with a pen-tool-change mechanism. We demonstrate that our system can create visually pleasing and complicated pen art on various surfaces.

“I Know What You Want: A ‘Smart Bartender’ System by Interactive Gaze Following,” by Haitao Lin, Zhida Ge, Xiang Li, Yanwei Fu, and Xiangyang Xue from Fudan University, in Shanghai, China.

Abstract: We developed a novel “Smart Bartender” system, which can understand the intention of users just from the eye gaze, and make some corresponding actions. Particularly, we believe that a cyber-barman who cannot feel our faces is not an intelligent one. We thus aim at building a novel cyber-barman by capturing and analyzing the intention of the customers on the fly. Technically, such a system enables the user to select a drink simply by staring at it. Then the robotic arm mounted with a camera will automatically grasp the target bottle, and pour the liquid into the cup. To achieve this goal, we firstly adopt YOLO to detect candidate drinks. Then, the GazeNet is utilized to generate potential gaze center for grounding the target bottle that has minimum center-to-center distance. Finally, we use object pose estimation and path planning algorithms to guide the robotic arm to grasp the target bottle and execute pouring. Our system integrated with the category-level object pose estimation enjoys powerful performance, generalizing to various unseen bottles and cups which are not used for training. We believe our system would not only reduce the intensive human labor in different service scenarios, but also provide users with interactivity and enjoyment.

“Towards Aerial Humanoid Robotics: Developing the Jet-Powered Robot iRonCub,” by Daniele Pucci, Gabriele Nava, Fabio Bergonti, Fabio Di Natale, Antonello Paolino, Giuseppe L’erario, Affaf Junaid Ahamad Momin, Hosameldin Awadalla Omer Mohamed, Punith Reddy Vanteddu, and Francesca Bruzzone from the Italian Institute of Technology (IIT), in Genoa, Italy.

Abstract: The current state of robotics technology lacks a platform that can combine manipulation, aerial locomotion, and bipedal terrestrial locomotion. Therefore, we define aerial humanoid robotics as the outcome of platforms with these three capabilities. To implement aerial humanoid robotics on the humanoid robot iCub, we conduct research in different directions. This includes experimental research on jet turbines and co-design, which is necessary to implement aerial humanoid robotics on the real iCub. These activities aim to model and identify the jet turbines. We also investigate flight control of flying humanoid robots using Lyapunov-quadratic-programming based control algorithms to regulate both the attitude and position of the robot. These algorithms work independently of the number of jet turbines installed on the robot and ensure satisfaction of physical constraints associated with the jet engines. In addition, we research computational fluid dynamics for aerodynamics modeling. Since the aerodynamics of a multi-body system like a flying humanoid robot is complex, we use CFD simulations with Ansys to extract a simplified model for control design, as there is little space for closed-form expressions of aerodynamic effects.

“AMEA Autonomous Electrically Operated One-Axle Mowing Robot,” by Romano Hauser, Matthias Scholer, and Katrin Solveig Lohan from Eastern Switzerland University of Applied Sciences (OST), in St. Gallen, Switzerland, and Heriot-Watt University, in Edinburgh, Scotland.

Abstract: The goal of this research project (Consortium: Altatek GmbH, Eastern Switzerland University of Applied Sciences OST, Faculty of Law University of Zurich) was the development of a multifunctional, autonomous single-axle robot with an electric drive. The robot is customized for agricultural applications in mountainous areas with steepest slopes. The intention is to relieve farmers from arduous and safety critical work. Furthermore, the robot is developed as a modular platform which can be used for work in forestry, municipal, sports fields and winter/snow applications. Robot features: Core feature is the patented center of gravity control. With a sliding wheel axle of 800mm, hills up to a steepness of 35° (70%) can be easily driven and a safe operation without tipping can be ensured. To make the robot more sustainable electric drives and a 48V battery were equipped. To navigate in mountainous areas several sensors are used. In difference to applications on flat areas the position and gradient of the robot on the slope needs to be measured and considered in the path planning. A sensor system which detects possible obstacles and especially humans or animals which could be in the path of the robot is currently under development.

“Surf Zone Exploration With Crab-Like Legged Robots,” by Yifeng Gong, John Grezmak, Jianfeng Zhou, Nicole Graf, Zhili Gong, Nathan Carmichael, Airel Foss, Glenna Clifton, and Kathryn A. Daltorio from Case Western Reserve University, in Cleveland, Ohio, USA, and University of Portland, in Portland, Oregon, USA.

Abstract: Surf zones are challenging for walking robots if they cannot anchor to the substrate, especially at the transition between dry sand and waves. Crab-like dactyl designs enable robots to achieve this anchoring behavior while still being lightweight enough to walk on dry sand. Our group has been developing a series of crab-like robots to achieve the transition from walking on underwater surfaces to walking on dry land. Compared with the default forward-moving gait, we find that inward-pulling gaits and sideways walking increase efficiency in granular media. By using soft dactyls, robots can probe the ground to classify substrates, which can help modify gaits to better suit the environment and recognize hazardous conditions. Dactyls can also be used to securely grasp the object and dig in the substrate for installing cables, searching for buried objects, and collecting sediment samples. To simplify control and actuation, we developed a four-degree-freedom Klann mechanism robot, which can climb onto an object and then grasp it. In addition, human interfaces will improve our ability to precisely control the robot for these types of tasks. In particular, the US government has identified munitions retrieval as an environmental priority through their Strategic Environmental Research Development Program. Our goal is to support these efforts with new robots.

“Learning Exploration Strategies to Solve Real-World Marble Runs,” by Alisa Allaire and Christopher G. Atkeson from the Robotics Institute, Carnegie Mellon University, Pittsburgh, Penn., USA.

Abstract: Tasks involving locally unstable or discontinuous dynamics (such as bifurcations and collisions) remain challenging in robotics, because small variations in the environment can have a significant impact on task outcomes. For such tasks, learning a robust deterministic policy is difficult. We focus on structuring exploration with multiple stochastic policies based on a mixture of experts (MoE) policy representation that can be efficiently adapted. The MoE policy is composed of stochastic sub-policies that allow exploration of multiple distinct regions of the action space (or strategies) and a high-level selection policy to guide exploration towards the most promising regions. We develop a robot system to evaluate our approach in a real-world physical problem solving domain. After training the MoE policy in simulation, online learning in the real world demonstrates efficient adaptation within just a few dozen attempts, with a minimal sim2real gap. Our results confirm that representing multiple strategies promotes efficient adaptation in new environments and strategies learned under different dynamics can still provide useful information about where to look for good strategies.

“Flipbot: Learning Continuous Paper Flipping Via Coarse-To-Fine Exteroceptive-Proprioceptive Exploration,” by Chao Zhao, Chunli Jiang, Junhao Cai, Michael Yu Wang, Hongyu Yu, and Qifeng Chen from Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, and HKUST - Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen.

Abstract: This paper tackles the task of singulating and grasping paper-like deformable objects. We refer to such tasks as paper-flipping. In contrast to manipulating deformable objects that lack compression strength (such as shirts and ropes), minor variations in the physical properties of the paper-like deformable objects significantly impact the results, making manipulation highly challenging. Here, we present Flipbot, a novel solution for flipping paper-like deformable objects. Flipbot allows the robot to capture object physical properties by integrating exteroceptive and proprioceptive perceptions that are indispensable for manipulating deformable objects. Furthermore, by incorporating a proposed coarse-to-fine exploration process, the system is capable of learning the optimal control parameters for effective paper-flipping through proprioceptive and exteroceptive inputs. We deploy our method on a real-world robot with a soft gripper and learn in a self-supervised manner. The resulting policy demonstrates the effectiveness of Flipbot on paper-flipping tasks with various settings beyond the reach of prior studies, including but not limited to flipping pages throughout a book and emptying paper sheets in a box. The code is available here: https://robotll.github.io/Flipbot/

“Croche-Matic: A Robot for Crocheting 3D Cylindrical Geometry,” by Gabriella Perry, Jose Luis Garcia del Castillo y Lopez, and Nathan Melenbrink from Harvard University, in Cambridge, Mass., USA.

Abstract: Crochet is a textile craft that has resisted mechanization and industrialization except for a select number of one-off crochet machines. These machines are only capable of producing a limited subset of common crochet stitches. Crochet machines are not used in the textile industry, yet mass-produced crochet objects and clothes sold in stores like Target and Zara are almost certainly the products of crochet sweatshops. The popularity of crochet and the existence of crochet products in major chain stores shows that there is both a clear demand for this craft as well as a need for it to be produced in a more ethical way. In this paper, we present Croche-Matic, a radial crochet machine for generating three-dimensional cylindrical geometry. The Croche-Matic is designed based on Magic Ring technique, a method for hand crocheting 3D cylindrical objects. The machine consists of nine mechanical axes that work in sequence to complete different types of crochet stitches, and includes a sensor component for measuring and regulating yarn tension within the mechanical system. Croche-Matic can complete the four main stitches used in Magic Ring technique. It has a success rate of 50.7% with single crochet stitches, and has demonstrated an ability to create three-dimensional objects.

“SOPHIE: SOft and Flexible Aerial Vehicle for PHysical Interaction with the Environment,” by F. Ruiz, B. C. Arrue, and A. Ollero from GRVC Robotics Lab of Seville, Spain.

Abstract: This letter presents the first design of a soft and lightweight UAV, entirely 3D-printed in flexible filament. The drone’s flexible arms are equipped with a tendon-actuated bending system, which is used for applications that require physical interaction with the environment. The flexibility of the UAV can be controlled during the additive manufacturing process by adjusting the infill rate ρTPU distribution. However, the increase in flexibility implies difficulties in controlling the UAV, as well as structural, aerodynamic, and aeroelastic effects. This article provides insight into the dynamics of the system and validates the flyability of the vehicle for densities as low as 6%. Within this range, quasi-static arm deformations can be considered, thus the autopilot is fed back through a static arm deflection model. At lower densities, strong non-linear elastic dynamics appear, which translates to complex modeling, and it is suggested to switch to data-based approaches. Moreover, this work demonstrates the ability of the soft UAV to perform full-body perching, specifically landing and stabilizing on pipelines and irregular surfaces without the need for an auxiliary system.

“Reconfigurable Drone System for Transportation of Parcels with Variable Mass and Size,” by Fabrizio Schiano, Przemyslaw Mariusz Kornatowski, Leonardo Cencetti, and Dario Floreano from École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland, and Leonardo S.p.A., Leonardo Labs, Rome, Italy.

Abstract: Cargo drones are designed to carry payloads with predefined shape, size, and/or mass. This lack of flexibility requires a fleet of diverse drones tailored to specific cargo dimensions. Here we propose a new reconfigurable drone based on a modular design that adapts to different cargo shapes, sizes, and mass. We also propose a method for the automatic generation of drone configurations and suitable parameters for the flight controller. The parcel becomes the drone’s body to which several individual propulsion modules are attached. We demonstrate the use of the reconfigurable hardware and the accompanying software by transporting parcels of different mass and sizes requiring various numbers and propulsion modules’ positioning. The experiments are conducted indoors (with a motion capture system) and outdoors (with an RTK-GNSS sensor). The proposed design represents a cheaper and more versatile alternative to the solutions involving several drones for parcel transportation.


This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Does your robot know where it is right now? Does it? Are you sure? And what about all of its robot friends, do they know where they are too? This is important. So important, in fact, that some would say that multi-robot simultaneous localization and mapping (SLAM) is a crucial capability to obtain timely situational awareness over large areas. Those some would be a group of MIT roboticists who just won the IEEE Transactions on Robotics Best Paper Award for 2022, presented at this year’s IEEE International Conference on Robotics and Automation (ICRA 2023) in London. Congratulations!

Out of more than 200 papers published in Transactions on Robotics last year, reviewers and editors voted to award the 2022 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award to Yulun Tian, Yun Chang, Fernando Herrera Arias, Carlos Nieto-Granda, Jonathan P. How, and Luca Carlone from MIT for their paper Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems.

“The editorial board, and the reviewers, were deeply impressed by the theoretical elegance and practical relevance of this paper and the open-source code that accompanies it. Kimera-Multi is now the gold-standard for distributed multi-robot SLAM.”
—Kevin Lynch, editor-in-chief, IEEE Transactions on Robotics

Robots rely on simultaneous localization and mapping to understand where they are in unknown environments. But unknown environments are a big place, and it takes more than one robot to explore all of them. If you send a whole team of robots, each of them can explore their own little bit, and then share what they’ve learned with each other to make a much bigger map that they can all take advantage of. Like most things robot, this is much easier said than done, which is why Kimera-Multi is so useful and important. The award-winning researchers say that Kimera-Multi is a distributed system that runs locally on a bunch of robots all at once. If one robot finds itself in communications range with another robot, they can share map data, and use those data to build and improve a globally consistent map that includes semantic annotations.
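To make the communication pattern concrete, here is a toy Python sketch of robots opportunistically exchanging map data when they come within radio range. The class names and the naive union-style merge are illustrative stand-ins only; the real Kimera-Multi system finds inter-robot loop closures and runs a distributed pose-graph optimization at this step.

```python
# Toy sketch of opportunistic map sharing between robots (illustrative only,
# not Kimera-Multi's implementation).
import math

class Robot:
    def __init__(self, name, position):
        self.name = name
        self.position = position   # (x, y) in a shared world frame
        self.local_map = {}        # keyframe id -> landmark observations

    def within_range(self, other, radius=30.0):
        dx = self.position[0] - other.position[0]
        dy = self.position[1] - other.position[1]
        return math.hypot(dx, dy) <= radius

    def exchange_maps(self, other):
        # In a real distributed SLAM system, this is where inter-robot loop
        # closures are detected and a distributed pose-graph optimization runs;
        # here we simply take the union of keyframes as a stand-in.
        merged = {**self.local_map, **other.local_map}
        self.local_map = dict(merged)
        other.local_map = dict(merged)

a = Robot("alpha", (0.0, 0.0))
a.local_map = {"a0": ["door", "stairs"]}
b = Robot("bravo", (12.0, 5.0))
b.local_map = {"b0": ["corridor"]}

if a.within_range(b):
    a.exchange_maps(b)
print(sorted(a.local_map))  # ['a0', 'b0'] -- both robots now share one map
```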

Since filming the above video, the researchers have done real-world tests with Kimera-Multi. Below is an example of the map generated by three robots as they travel a total of more than two kilometers. You can easily see how the accuracy of the map improves significantly as the robots talk to each other:

More details and code are available on GitHub.

T-RO also selected some excellent Honorable Mentions for 2022, which are:

Stabilization of Complementarity Systems via Contact-Aware Controllers, by Alp Aydinoglu, Philip Sieg, Victor M. Preciado, and Michael Posa

Autonomous Cave Surveying With an Aerial Robot, by Wennie Tabib, Kshitij Goel, John Yao, Curtis Boirum, and Nathan Michael

Prehensile Manipulation Planning: Modeling, Algorithms and Implementation, by Florent Lamiraux and Joseph Mirabel

Rock-and-Walk Manipulation: Object Locomotion by Passive Rolling Dynamics and Periodic Active Control, by Abdullah Nazir, Pu Xu, and Jungwon Seo

Origami-Inspired Soft Actuators for Stimulus Perception and Crawling Robot Applications, by Tao Jin, Long Li, Tianhong Wang, Guopeng Wang, Jianguo Cai, Yingzhong Tian, and Quan Zhang



I love plants. I am not great with plants. I have accepted this fact and have therefore entrusted the lives of all of the plants in my care to robots. These aren’t fancy robots: they’re automated hydroponic systems that take care of water and nutrients and (fake) sunlight, and they do an amazing job. My plants are almost certainly happier this way, and therefore I don’t have to feel guilty about my hands-off approach. This is especially true now that there are data from roboticists at UC Berkeley to back up the assertion that robotic gardeners can do just as good a job as even the best human gardeners. In fact, by some metrics, the robots can do even better.

In 1950, Alan Turing considered the question “Can Machines Think?” and proposed a test based on comparing human vs. machine ability to answer questions. In this paper, we consider the question “Can Machines Garden?” based on comparing human vs. machine ability to tend a real polyculture garden.

UC Berkeley has a long history of robotic gardens, stretching back to at least the early 90s. And (as I have experienced) you can totally tend a garden with a robot. But the real question is this: Can you usefully tend a garden with a robot in a way that is as effective as a human tending that same garden? Time for some SCIENCE!

AlphaGarden is a combination of a commercial gantry robot farming system and UC Berkeley’s AlphaGardenSim, which tells the robot what to do to maximize plant health and growth. The system includes a high-resolution camera and soil moisture sensors for monitoring plant growth, and everything is (mostly) completely automated, from seed planting to drip irrigation to pruning. The garden itself is somewhat complicated, since it’s a polyculture garden (meaning it mixes many different kinds of plants in the same plot). Polyculture farming mimics how plants grow in nature; its benefits include pest resilience, decreased fertilization needs, and improved soil health. But since different plants have different needs and grow in different ways at different rates, polyculture farming is more labor-intensive than monoculture, which is how most large-scale farming happens.
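For a sense of what a moisture-driven watering decision can look like, here is a minimal sketch. The plant names, moisture targets, and water volumes are hypothetical, and AlphaGarden’s actual decisions come from the AlphaGardenSim simulator rather than a rule this simple.

```python
# Hypothetical per-plant watering plan driven by soil-moisture readings.
# Targets and the ml-per-deficit scaling are illustrative assumptions.

MOISTURE_TARGETS = {   # assumed volumetric moisture targets per plant type
    "kale": 0.35,
    "borage": 0.30,
    "cilantro": 0.40,
}

def watering_plan(readings, ml_per_deficit_point=500.0):
    """Return milliliters of water to apply per plant this cycle."""
    plan = {}
    for plant, moisture in readings.items():
        target = MOISTURE_TARGETS.get(plant, 0.35)
        deficit = max(0.0, target - moisture)   # only water plants below target
        plan[plant] = round(deficit * ml_per_deficit_point, 1)
    return plan

print(watering_plan({"kale": 0.28, "borage": 0.33, "cilantro": 0.31}))
# {'kale': 35.0, 'borage': 0.0, 'cilantro': 45.0}
```

A closed loop like this, applied per plant rather than per plot, is one reason a polyculture system can end up using substantially less water than blanket irrigation.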

To test AlphaGarden’s performance, the UC Berkeley researchers planted two side-by-side farming plots with the same seeds at the same time. There were 32 plants in total, including kale, borage, Swiss chard, mustard greens, turnips, arugula, green lettuce, cilantro, and red lettuce. Over the course of two months, AlphaGarden tended its plot full time, while professional horticulturalists tended the plot next door. Then, the experiment was repeated, except that AlphaGarden was allowed to stagger the seed planting to give slower-growing plants a head start. A human did have to help the robot out with pruning from time to time, but just to follow the robot’s directions when the pruning tool couldn’t quite do what it wanted to do.

The robot and the professional human both achieved similar results in their garden plots. UC Berkeley

The results of these tests showed that the robot was able to keep up with the professional human in terms of both overall plant diversity and coverage. In other words, stuff grew just as well when tended by the robot as it did when tended by a professional human. The biggest difference is that the robot managed to keep up while using 44 percent less water: several hundred liters less over two months.

“AlphaGarden has thus passed the Turing Test for gardening,” the researchers say. They also say that “much remains to be done,” mostly by improving the AlphaGardenSim plant growth simulator to further optimize water use, although there are other variables to explore like artificial light sources. The future here is a little uncertain, though—the hardware is pretty expensive, and human labor is (relatively) cheap. Expert human knowledge is not cheap, of course. But for those of us who are very much non-experts, I could easily imagine mounting some cameras above my garden and installing some sensors and then just following the orders of the simulator about where and when and how much to water and prune. I’m always happy to donate my labor to a robot that knows what it’s doing better than I do.

“Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists,” by Simeon Adebola, Rishi Parikh, Mark Presten, Satvik Sharma, Shrey Aeron, Ananth Rao, Sandeep Mukherjee, Tomson Qu, Christina Wistrom, Eugen Solowjow, and Ken Goldberg from UC Berkeley, will be presented at ICRA 2023 in London.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

We’ve just relaunched the IEEE Robots Guide over at RobotsGuide.com, featuring new robots, new interactives, and a complete redesign from the ground up. Tell your friends, tell your family, and explore nearly 250 robots in pictures and videos and detailed facts and specs, with lots more on the way!

[Robots Guide]

The qualities that make a knitted sweater comfortable and easy to wear are the same things that might allow robots to better interact with humans. RobotSweater, developed by a research team from Carnegie Mellon University’s Robotics Institute, is a machine-knitted textile “skin” that can sense contact and pressure.

RobotSweater’s knitted fabric consists of two layers of conductive yarn made with metallic fibers to conduct electricity. Sandwiched between the two is a net-like, lace-patterned layer. When pressure is applied to the fabric—say, from someone touching it—the conductive yarn closes a circuit and is read by the sensors. In their research, the team demonstrated that pushing on a companion robot outfitted in RobotSweater told it which way to move or what direction to turn its head. When used on a robot arm, RobotSweater allowed a push from a person’s hand to guide the arm’s movement, while grabbing the arm told it to open or close its gripper. In future research, the team wants to explore how to program reactions from the swipe or pinching motions used on a touchscreen.
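As a rough illustration of how a sensed push could be turned into a motion command, here is a hedged sketch that maps the centroid of contact on a pressure grid to a direction. The grid layout, threshold, and command names are assumptions for illustration, not CMU’s implementation.

```python
# Map the centroid of contact on a knitted pressure grid to a motion command.
# Grid orientation (row 0 = front panel) and thresholds are assumptions.

def command_from_touch(pressure_grid, threshold=0.5):
    """pressure_grid: 2D list of taxel readings; returns a motion command string."""
    rows = len(pressure_grid)
    cols = len(pressure_grid[0])
    touched = [(r, c) for r in range(rows) for c in range(cols)
               if pressure_grid[r][c] > threshold]
    if not touched:
        return "hold"
    avg_row = sum(r for r, _ in touched) / len(touched)
    avg_col = sum(c for _, c in touched) / len(touched)
    if avg_row < rows / 3:
        return "move_backward"   # pushed on the front panel, back away
    if avg_row > 2 * rows / 3:
        return "move_forward"    # pushed on the back panel
    return "turn_left" if avg_col < cols / 2 else "turn_right"

print(command_from_touch([[0.9, 0.1, 0.0],
                          [0.0, 0.0, 0.0],
                          [0.0, 0.0, 0.0]]))  # -> move_backward
```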

[CMU]

DEEP Robotics Co. yesterday announced that it has launched the latest version of its Lite3 robotic dog in Europe. The system combines advanced mobility and an open modular structure to serve the education, research, and entertainment markets, said the Hangzhou, China–based company.

Lite3’s announced price is US $2,900. It ships in September.

[Deep Robotics]

Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains. We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback in a self-supervised manner. We validate our method in multiple short- and large-scale navigation tasks on a large, autonomous all-terrain vehicle (ATV) on challenging off-road terrains, and demonstrate ease of integration on a separate large ground robot.
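The self-supervision idea is simple enough to sketch: proprioceptive feedback recorded while driving (for example, how rough the ride actually was) becomes the training label for a model that only sees exteroceptive features of the terrain ahead. The feature dimensions, network, and cost label below are illustrative assumptions, not the authors’ architecture.

```python
# Minimal self-supervised costmap regression sketch (illustrative assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(patch_features: torch.Tensor, measured_cost: torch.Tensor) -> float:
    """patch_features: (B, 16) visual/geometric features of map cells the robot
    later drove over; measured_cost: (B, 1) cost computed from IMU/odometry."""
    optimizer.zero_grad()
    predicted_cost = model(patch_features)
    loss = loss_fn(predicted_cost, measured_cost)
    loss.backward()
    optimizer.step()
    return loss.item()

# One toy update on random stand-in data
print(train_step(torch.randn(8, 16), torch.rand(8, 1)))
```

Because the labels come from the robot’s own ride, no human annotation of “traversable” versus “not traversable” is needed, which is what makes the approach self-supervised.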

This work will be presented at the IEEE International Conference on Robotics and Automation (ICRA 2023) in London next week.

[Mateo Guaman Castro]

Thanks, Mateo!

Sheet Metal Workers’ Local Union 104 has introduced a training course on automating and innovating field layout with the Dusty Robotics FieldPrinter system.

[Dusty Robotics]

Apptronik has half of its general-purpose robot ready to go!

The other half is still a work in progress, but here’s progress:

[Apptronik]

A spotted-lanternfly-murdering robot is my kind of murdering robot.

[FRC]

ANYmal is rated IP67 for water resistance, but this still terrifies me.

[ANYbotics]

Check out the impressive ankle action on this humanoid walking over squishy terrain.

[CNRS-AIST JRL]

Wing’s progress can be charted along the increasingly dense environments in which we’ve been able to operate: from rural farms to lightly populated suburbs to more dense suburbs to large metropolitan areas like Brisbane, Australia; Helsinki, Finland; and the Dallas–Fort Worth metro area in Texas. Earlier this month, we did a demonstration delivery at Coors Field, home of the Colorado Rockies, delivering beer (Coors, of course) and peanuts to the field. Admittedly, it wasn’t on a game day, but there were 1,000 people in the stands enjoying the kickoff party for AUVSI’s annual autonomous systems conference.

[ Wing ]

Pollen Robotics’ team will be going to ICRA 2023 in London! Come and meet us there to try teleoperating Reachy by yourself and give us your feedback!

[ Pollen Robotics ]

The most efficient drone engine is no engine at all.

[ MAVLab ]

Is your robot spineless? Should it be? Let’s find out.

[ UPenn ]

Looks like we’re getting closer to that robot butler.

[ Prisma Lab ]

This episode of the Robot Brains podcast features Raff D’Andrea, from Kiva, Verity, and ETH Zurich.

[ Robot Brains ]



Calling all robot fanatics! We are the creators of the Robots Guide, IEEE’s interactive site about robotics, and we need your help.

Today, we’re expanding our massive catalog to nearly 250 robots, and we want your opinion to decide which are the coolest, most wanted, and also creepiest robots out there.

To submit your votes, find robots on the site that are interesting to you and rate them based on their design and capabilities. Every Friday, we’ll crunch the votes to update our Robot Rankings.

Rate this robot: For each robot on the site, you can submit your overall rating, answer if you’d want to have this robot, and rate its appearance. IEEE Spectrum

May the coolest (or creepiest) robot win!

Our collection currently features 242 robots, including humanoids, drones, social robots, underwater vehicles, exoskeletons, self-driving cars, and more.

The Robots Guide features three rankings: Top Rated, Most Wanted, and Creepiest. IEEE Spectrum

You can explore the collection by filtering robots by category, capability, and country, or sorting them by name, year, or size. And you can also search robots by keywords.

In particular, check out some of the new additions, which could use more votes. These include some really cool robots like LOVOT, Ingenuity, GITAI G1, Tertill, Salto, Proteus, and SlothBot.

Each robot profile includes detailed tech specs, photos, videos, and history, and some also have interactives that let you move and spin robots 360° on the screen.

And note that these are all real-world robots. If you’re looking for sci-fi robots, check out our new Face-Off: Sci-Fi Robots game.

Robots Redesign

Today, we’re also relaunching the Robots Guide site with a fast and sleek new design, more sections and games, and thousands of photos and videos.

The new site was designed by Pentagram, the prestigious design consultancy, in collaboration with Standard, a design and technology studio.


The site is built as a modern, fully responsive web app. It’s powered by Remix.run, a React-based web framework, with structured content by Sanity.io and site search by Algolia.

More highlights:

  • Explore nearly 250 robots
  • Make robots move and spin 360°
  • View over 1,000 amazing photos
  • Watch 900 videos of robots in action
  • Play the Sci-Fi Robots Face-Off game
  • Keep up to date with daily robot news
  • Read detailed tech specs about each robot
  • Robot Rankings: Top Rated, Most Wanted, Creepiest

The Robots Guide was designed for anyone interested in learning more about robotics, including robot enthusiasts, both experts and beginners, researchers, entrepreneurs, STEM educators, teachers, and students.

The foundation for the Robots Guide is IEEE’s Robots App, which was downloaded 1.3 million times and is used in classrooms and STEM programs all over the world.

The Robots Guide is an editorial product of IEEE Spectrum, the world’s leading technology and engineering magazine and the flagship publication of the IEEE.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

LATTICE is an undergrad project from Caltech that’s developing a modular robotic transportation system for the lunar surface that uses autonomous rovers to set up a sort of cable car system to haul things like ice out of deep craters to someplace more useful. The prototype is fully functional, and pretty cool to watch in action.

We’re told that the team will be targeting a full system demonstration deploying across a “crater” on Earth this time next year. As to what those quotes around “crater” mean, your guess is as good as mine.

[ Caltech ]

Thanks, Lucas!

Happy World Cocktail Day from Flexiv!

[ Flexiv ]

Here’s what Optimus has been up to lately.

As per usual, the robot is moderately interesting, but it’s probably best to mostly just ignore Musk.

[ Tesla ]

The INSECT tarsus-inspired compliant robotic grippER with soft adhesive pads (INSECTER) uses only one single electric actuator with a cable-driven mechanism. It can be easily controlled to perform a gripping motion akin to an insect tarsus (i.e., wrapping around the object) for handling various objects.

[ Paper ]

Thanks, Poramate!

Congratulations to ANYbotics on their $50 million Series B!

And from 10 years ago (!) at ICRA 2013, here is a video I took of StarlETH, one of ANYmal’s ancestors.

[ ANYbotics ]

In this video we present results from the recent field-testing campaign of the DigiForest project at Evo, Finland. The DigiForest project started in September 2022 and runs up to February 2026. It brings together diverse partners working on aerial robots, walking robots, autonomous lightweight harvesters, as well as forestry decision makers and commercial companies with the goal to create a full data pipeline for digitized forestry.

[ DigiForest ]

The Robotics and Perception Group at UZH will be presenting some new work on agile autonomous high-speed flight through cluttered environments at ICRA 2023.

[ Paper ]

Robots who lift together, stay together.

[ Sanctuary AI ]

The next CYBATHLON competition, which will take place again in 2024, breaks down barriers between the public, people with disabilities, researchers and technology developers. The initiative promotes the inclusion and participation of people with disabilities and improves assistance systems for use in everyday life by the end users.

[ Cybathlon ]



We’ve been keeping track of Sanctuary AI for quite a while, mainly through the company’s YouTube videos that show the upper half of a dexterous humanoid performing a huge variety of complicated manipulation tasks thanks to the teleoperation skills of a remote human pilot.

Despite a recent successful commercial deployment of the teleoperated system at a store in Canada (where it was able to complete 110 retail-related tasks), Sanctuary’s end goal is way, way past telepresence. The company describes itself as “on a mission to create the world’s first human-like intelligence in general-purpose robots.” That sounds extremely ambitious, depending on what you believe “human-like intelligence” and “general-purpose robots” actually mean. But today, Sanctuary is unveiling something that indicates a substantial amount of progress towards this goal: Phoenix, a new bipedal humanoid robot designed to do manual (in the sense of hand-dependent) labor.

Sanctuary’s teleoperated humanoid is very capable, but teleoperation is of course not scalable in the way that even partial autonomy is. What all of this teleop has allowed Sanctuary to do is to collect lots and lots of data about how humans do stuff. The long-term plan is that some of those human manipulation skills can eventually be transferred to a very human-like robot, which is the design concept underlying Phoenix.

Some specs from the press release:

  • Human-like form and function: standing at 5’ 7” and weighing 155 lbs (70.3 kilograms)
  • A maximum payload of 55 lbs (24.9 kg)
  • A maximum speed of 3 miles per hour (4.8 kilometers per hour)
  • Industry-leading robotic hands with increased degrees of freedom (20 in total) that rival human hand dexterity and fine manipulation with proprietary haptic technology that mimics the sense of touch

The hardware looks very impressive, but you should take the press release with a grain of salt, as it claims that the control system (called Carbon) “enables Phoenix to think and act to complete tasks like a person.” That may be the goal, but the company is certainly not there yet. For example, Phoenix is not currently walking, and instead is mobile thanks to a small wheeled autonomous base. We’ll get into the legs a bit more later on, but Phoenix has a ways to go in terms of functionality. This is by no means a criticism—robots are super hard, and a useful and reliable general purpose bipedal humanoid is super duper hard. For Sanctuary, there’s a long road ahead, but they’ve got a map, and some snacks, and experienced folks in the driver’s seat, to extend that metaphor just a little too far.

Sanctuary

Sanctuary’s plan is to start with telepresence and use that as a foundation on which to iterate towards general purpose autonomy. The first step actually doesn’t involve robots at all—it’s to sensorize humans and record their movements while they do useful stuff out in the world. The data collected in that way are used to design effective teleoperated robots, and as those robots get pushed back out into the world to do a bunch of that same useful stuff under teleoperation, Sanctuary pays attention to what tasks or subtasks keep getting repeated over and over. Things like opening a door or grasping a handle are the first targets to transition from teleoperated to autonomous. Automating some of the human pilot’s duties significantly boosts their efficiency. From there, Sanctuary will combine those autonomous tasks together into longer sequences to transition to more of a supervised autonomy model. Then, the company hopes, it will gradually achieve full autonomy.
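
To make that layered progression a bit more concrete, here is a minimal sketch of how a dispatcher between autonomous skills and a teleoperation fallback might be structured. This is purely illustrative and assumes hypothetical task names and a request_teleop stand-in; it is not Sanctuary's actual software.

# Illustrative sketch of a layered-autonomy dispatcher: subtasks that have
# been automated run autonomously, everything else falls back to a human
# teleoperator, and every execution is logged as future training data.
# Hypothetical structure only, not Sanctuary's actual architecture.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class TaskLog:
    """Records which subtasks ran autonomously vs. under teleoperation."""
    entries: List[tuple] = field(default_factory=list)

    def record(self, subtask: str, mode: str) -> None:
        self.entries.append((subtask, mode))


def request_teleop(subtask: str) -> None:
    # Stand-in for handing control to a remote human pilot.
    print(f"[teleop] human pilot performs: {subtask}")


def run_job(subtasks: List[str],
            autonomous_skills: Dict[str, Callable[[], None]],
            log: TaskLog) -> None:
    """Route each subtask to an autonomous skill if one exists,
    otherwise fall back to teleoperation."""
    for subtask in subtasks:
        skill = autonomous_skills.get(subtask)
        if skill is not None:
            skill()
            log.record(subtask, "autonomous")
        else:
            request_teleop(subtask)
            log.record(subtask, "teleop")


if __name__ == "__main__":
    # Frequently repeated subtasks (e.g., opening a door, grasping a handle)
    # are the first to be promoted from teleop to autonomy.
    skills = {
        "open_door": lambda: print("[auto] opening door"),
        "grasp_handle": lambda: print("[auto] grasping handle"),
    }
    log = TaskLog()
    run_job(["open_door", "grasp_handle", "fold_shirt"], skills, log)
    print(log.entries)

The point of logging every execution is that the teleoperated runs become the demonstration data for whichever subtasks get automated next.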

Sanctuary

What doesn’t really come through when you glance at Phoenix is just how unique Sanctuary’s philosophy on general purpose humanoid robots is. All the talk about completing tasks like a person, and human-like intelligence—which honestly sounds a lot like the kind of meaningless hype that you often find in breathless robotics press releases—is in fact a reflection of how Sanctuary thinks that humanoid robots should be designed and programmed to maximize their flexibility and usefulness.

To better understand this perspective, we spoke with Geordie Rose, Sanctuary AI founder and CEO.

IEEE Spectrum: Sanctuary has a unique approach to developing autonomous skills for humanoid robots. Can you describe what you’ve been working on for the past several years?

Geordie Rose: Our approach to general purpose humanoid robots has two main steps. The first is high quality teleoperation—a human pilot controlling a robot using a rig that transmits their physical movements to the robot, which moves in the same way. And the robot’s senses are transmitted back to the pilot as well. The reason why this is so important is that complex robots are very difficult to control, and if you want to get good data about accomplishing interesting tasks in the world, this is the gold star way to do that. We use that data in step two.

Step two is the automation of things that humans can do. This is a process, not an event. The way that we do it is by using a construct called a cognitive architecture, which is borrowed from cognitive science. It’s the idea that the way the human mind controls a human body is decomposable into parts, such as memory, motor control, visual cortex, and so on. When you’re engineering a control system for a robot, one of the things you can do is try to replicate each of those pieces in software to essentially try to emulate what cognitive scientists believe the human brain is doing. So, our cognitive control system is based on that premise, and the data that is collected in the first step of this process becomes examples that the cognitive system can learn from, just like you would learn from a teacher through demonstration.
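
As a rough illustration of the perception-to-action loop Rose describes, here is a minimal sketch of a cognitive-architecture-style control cycle with separate perception, memory, and action-selection modules, where actions are chosen by imitating recorded teleoperation demonstrations. The module boundaries and the nearest-neighbor policy are illustrative assumptions, not a description of Sanctuary's Carbon system.

# Minimal sketch of a cognitive-architecture-style control loop: perception,
# memory, and motor control are separate modules, and the action-selection
# step is learned from teleoperation demonstrations. Purely illustrative;
# the nearest-neighbor policy is an assumption, not the Carbon system.

import numpy as np


class Perception:
    def observe(self, world_state: np.ndarray) -> np.ndarray:
        # In a real robot this would be camera/haptic/audio processing.
        return world_state


class Memory:
    def __init__(self):
        self.history = []

    def store(self, observation: np.ndarray) -> None:
        self.history.append(observation)


class DemonstrationPolicy:
    """Picks the action whose demonstration observation is closest
    to the current observation (1-nearest-neighbor imitation)."""

    def __init__(self, demo_obs: np.ndarray, demo_actions: np.ndarray):
        self.demo_obs = demo_obs
        self.demo_actions = demo_actions

    def act(self, observation: np.ndarray) -> np.ndarray:
        dists = np.linalg.norm(self.demo_obs - observation, axis=1)
        return self.demo_actions[int(np.argmin(dists))]


def control_cycle(world_state, perception, memory, policy):
    obs = perception.observe(world_state)   # sense
    memory.store(obs)                       # remember
    return policy.act(obs)                  # act


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    demo_obs = rng.normal(size=(50, 4))       # observations from teleop sessions
    demo_actions = rng.normal(size=(50, 2))   # pilot commands recorded alongside
    policy = DemonstrationPolicy(demo_obs, demo_actions)
    action = control_cycle(rng.normal(size=4), Perception(), Memory(), policy)
    print("commanded action:", action)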

The way the human mind evolved, and what it’s for, is to convert perception data of a certain kind, into actions of a certain kind. So, the mind is kind of a machine that translates perception into action. If you want to build a mind, the obvious thing to do is to build a physical thing that collects the same kinds of sensory data and outputs the same kind of actuator data, so that you’re solving the same problems as the human brain solves. Our central thesis is that the shortest way to get to general intelligence of the human kind is via building a control system for a robot that shares the same sensory and action modes that we have as people.

What made you decide on this cognitive approach, as opposed to one that’s more optimized for how robots have historically been designed and programmed?

Rose: Our previous company, Kindred, went down that road. We used essentially the same kinds of control tactics as we’re using at Sanctuary, but specialized for particular robot morphologies that we designed for specific tasks. What we found was that by doing so, you shave off all of the generality because you don’t need it. There’s nothing wrong with developing a specialized tool, but we decided that that’s not what we wanted to do—we wanted to go for a more ambitious goal.

What we’re trying to do is build a truly general purpose technology; general purpose in the sense of being able to do the sorts of things that you’d expect a person to be able to do in the course of doing work. For that approach, human morphology is ideal, because all of our tools and environments are built for us.

How humanoid is the right amount of humanoid for a humanoid robot that will be leveraging your cognitive architecture approach and using human data as a model?

Rose: The place where we started is to focus on the things that are clearly the most valuable for delivering work. So, those are (roughly in order) the hands, the sensory apparatus like vision and haptics and sound and so on, and the ability to locomote to get the hands to work. There are a lot of different kinds of design decisions to make that are underneath those primary ones, but the primary ones are about the physical form that is necessary to actually deliver value in the world. It’s almost a truism that humans are defined by our brains and opposable thumbs, so we focus mostly on brains and hands.

What about adding sensing systems that humans don’t have to make things easier for your robot, like a wrist camera?

Rose: The main reason that we wouldn’t do that is to preserve our engineering clarity. When we started the project five years ago, one of the things we’ve never wavered on is the model of what we’re trying to do, and that’s fidelity to the human form when it comes to delivering work. While there are gray areas, adding sensors like wrist cameras is not helpful, in the general case—it makes the machine worse. The kind of cognition that humans have is based on certain kinds of sensory arrays, so the way that we think about the world is built around the way that we sense and act in it. The thesis we’ve focused on is trying to build a human-like intelligence in a human-like body to do labor.

“We’re a technologically advanced civilization, why aren’t there more robots? We believe that robots have traditionally fallen into this specialization trap of building the simplest possible thing for the most specific possible job. But that’s not necessary. Technology is advanced to the point where it’s a legitimate thing to ask: Could you build a machine that can do everything a person can do? Our answer is yes.”
–Geordie Rose, Sanctuary founder and CEO

When you say artificial general intelligence or human-like intelligence, how far would you extend that?

Rose: All the way. I’m not claiming anything about the difficulty of the problem, because I think nobody knows how difficult it will be. Our team has the stated intent of trying to build a control system for a robot that is in nearly all ways the same as the way the mind controls the body in a person. That is a very tall order, of course, but it was the fundamental motivation, under certain interpretations, for why the field of AI was started in the first place. This idea of building generality in problem solving, and being able to deal with unforeseen circumstances, is the central feature of living in the real world. All animals have to solve this problem, because the real world is dangerous and ever-changing and so on. So the control system for a squirrel or a human needs to be able to adapt to ever-changing and dangerous conditions, and a properly-designed control system for a robot needs to do that as well.

And by the way, I’m not slighting animals, because animals like squirrels are massively more powerful in terms of what they can do than the best machines that we’ve ever built. There’s this idea, I think, that people might have, that there’s a lot of difference between a squirrel and a person. But if you can build a squirrel-like robot, you can layer on all of the symbolic and other AI stuff on top of it so that it can react to the world and understand it while also doing useful labor.

So there’s a bigger gap right now between robots and squirrels, than there is between squirrels and humans?

Rose: Right now, there’s a bigger gap between robots and squirrels, but it’s closing quickly.

Aside from your overall approach of using humans as a model for your system, what are the reasons to put legs on a robot that’s intended to do labor?

Rose: In analyzing the role of legs in work, they do contribute to a lot of what we do in ways that are not completely obvious. Legs are nowhere near as important as hands, so in our strategy for rolling out the product, we’re perfectly fine using wheels. And I think wheels are a better solution to certain kinds of problems than legs are. But there are certain things where you do need legs, and so there are certain kinds of customers who have been adamant that legs are a requirement.

The way that I think about this is that legs are ultimately where you want to be if you want to cover all of the human experience. My view is that legs are currently lagging behind some of the other robotic hardware, but they’ll catch up. At some point in the not-too-distant future, there will be multiple folks who have built walking algorithms and so on that we can then use in our platform. So, for example, I think you’re familiar with Apptronik; we own part of that company. Part of the reason we made that investment was to use their legs if and when they can solve that problem.

From the commercial side, we can get away with not using legs for a while, and just use wheeled base systems to deliver hands to work. But ultimately, I would like to have legs as well.

How much of a gap is there between building a machine that is physically capable of doing useful tasks, and building a robot with the intelligence to autonomously do those tasks?

Rose: Something about robotics that I’ve always believed is that the thing that you’re looking at, the machine, is actually not the important part of the robot. The important part is the software, and that’s the hardest part of all of this. Building control systems that have the thing that we call intelligence still contains many deep mysteries.

The way that we’ve approached this is a layered one, where we begin by using teleoperation of the robots, which is an established technology that we’ve been working on for roughly a decade. That’s our fallback layer, and we’re building increasing layers of autonomy on top of that, so that eventually the system gets to the point of being fully autonomous. But that doesn’t happen in one go, it happens by adding layers of autonomy over time.

The problems in building a human-level AI are very, very deep and profound. I think they’re intimately connected to the problem of embodiment. My perspective is that you don’t get to general human-like intelligence in software—that’s not the way that intelligence works. Intelligence is part of a process that converts perception into action in an embodied agent in the real world. And that’s the way we think about it: intelligence is actually a thing that makes a body move, and if you don’t look at intelligence that way, you’ll never get to it. So, all of the problems of building artificial general intelligence, human-like intelligence, are manifest inside of this control problem.

Building a true intelligence of the sort that lives inside a robot is a grand challenge. It’s a civilization-level challenge, but it’s the challenge that we’ve set for ourselves. This is the reason for the existence of this organization: to solve that problem, and then apply that to delivering labor.



An octopus-like soft robot can unfurl itself inside the skull on top of the brain, a new study finds. The novel gadget may lead to minimally invasive ways to investigate the brain and implant brain-computer interfaces, researchers say.

In order to analyze the brain after traumatic injuries, help treat disorders such as seizures, and embed brain-computer interfaces, scientists at times lay grids of electrodes onto the surface of the brain. These electrocorticography grids can capture higher-quality recordings of brain signals than electroencephalography data gathered by electrodes on the scalp, but are also less invasive than probes stuck into the brain.

However, placing electrocorticography grids onto the brain typically involves creating openings in the skull at least as large as these arrays, leaving holes up to 100 square centimeters. These surgical operations may result in severe complications, such as inflammation and scarring.

Now scientists have developed a new soft robot they can place into the skull through a tiny hole. In experiments on a minipig, they showed the device could unfold like a ship in a bottle to deploy an electrocorticography grid 4 centimeters wide, all of it fitting into a space only roughly 1 millimeter wide. This “enabled the implant to navigate through the narrow gap between the skull and the brain,” says study senior author Stéphanie Lacour, a neural engineer and director of the Federal Polytechnic School of Lausanne’s Neuro-X Institute in Switzerland.

Deployable electrodes for minimally invasive craniosurgery [ YouTube ]

The researchers created the array by evaporating flexible gold electrodes less than 400 micrometers thick onto soft, flexible, medical-grade silicone rubber. The array possessed six spiral arms that maximized the surface area of the array and thus the number of electrodes in contact with the brain.

The scientists folded the array inside a cylindrical tube that was then inserted through a hole in the skull. They deployed the array by inserting a watery solution that made each spiral arm “evert,” or turn inside out, over the course of 30 to 40 seconds.

When the researchers electrically stimulated the minipig’s snout, the array successfully captured brain activity related to the sensations. In the future, Lacour and her colleagues want to create arrays that can detect brainwaves and also stimulate the brain, she notes.

Sensors in the array monitored the fluid pressure each arm encountered in real-time. These sensors helped make sure the arms didn’t push with too much force as they deployed.

“We have not encountered issues with resistance during deployment but this is certainly a point to explore further with this technology,” Lacour says. “The inflation of the leg during deployment should be kept minimal not to compress the brain and trigger irreversible damage.”

The scientists had explored the idea of rolling up each arm of the array. However, the longer the arm, the thicker it became when rolled up. If a rolled-up arm becomes too thick, it will take up too much room to easily deploy. In contrast, the eversion technique used in the new study has no such limit on size. In theory, eversion could help deploy a grid that could cover the entire surface of the brain, the researchers say.
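
A quick back-of-the-envelope calculation (my own illustration, not from the paper) shows why rolling runs into a size limit while eversion does not: the cross-sectional area of a rolled strip grows linearly with its length, so its rolled-up radius grows with the square root of its length. Taking one arm of the 4-centimeter grid to be roughly 20 millimeters long and on the order of 0.4 millimeters thick (both assumptions), the rolled radius already exceeds the roughly 1-millimeter gap between skull and brain, whereas an everted arm stays about twice its wall thickness no matter how long it is.

% Rolled strip of length L and thickness t occupies cross-sectional area
% A \approx L t, so the rolled-up radius R satisfies:
A \approx L\,t = \pi R^{2}
\quad\Longrightarrow\quad
R \approx \sqrt{\frac{L\,t}{\pi}}
\approx \sqrt{\frac{(20\,\mathrm{mm})(0.4\,\mathrm{mm})}{\pi}}
\approx 1.6\,\mathrm{mm}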

A spinoff of the Federal Polytechnic School of Lausanne called Neurosoft Bioelectronics now aims to bring this invention to the clinic. The spin-off was recently granted 2.5 million Swiss francs (nearly 2.8 million USD) by Swiss innovation agency Innosuisse.

“The deployable implant in our current study is a proof of concept,” Lacour says. “Before it may be used in a clinical context, much work is needed to translate and scale the technology to medical-grade requirements. But the research holds exciting applications in brain-computer interfaces and monitoring implants for epilepsy.”

The scientists detailed their findings online 10 May in the journal Science Robotics.



The conventional way of adding a robot to your business is to pay someone else a lot of money to do it for you. While robots are a heck of a lot easier to program than they once were, they’re still kind of scary for nonroboticists, and efforts to make robotics more accessible to folks with software experience but not hardware experience haven’t really gotten anywhere. Obviously, there are all kinds of opportunities for robots (even simple robots) across all kinds of industries, but the barrier to entry is very high when the only way to realistically access those opportunities is to go through a system integrator. This may make sense for big companies, but for smaller businesses, it could be well out of reach.

Today, Intrinsic (the Alphabet company that acquired Open Robotics a little while back) is announcing its first product. Flowstate, in the words of Intrinsic’s press release, is “an intuitive, web-based developer environment to build robotic applications from concept to deployment.” We spoke with Intrinsic CEO Wendy Tan White along with Brian Gerkey, who directs the Open Robotics team at Intrinsic, to learn more about how Intrinsic hopes to use Flowstate to change industrial robotics development.

“Our mission is, in short, to democratize access to robotics. We’re making the ability to program intelligent robotic solutions as simple as standing up a website or mobile application.” —Wendy Tan White, Intrinsic CEO

To be honest, we’ve heard this sort of thing many times before: How robots will be easy now, and how you won’t need to be a roboticist (or hire a dedicated roboticist) to get them to do useful stuff. Robots have gotten somewhat easier over the years (even as they’ve gotten both more capable and more complicated), but this dream of every software developer also being able to develop robotics applications for robots hasn’t ever really materialized.

Intrinsic’s Flowstate developer environment is intended to take diverse robotic hardware and make it all programmable through one single accessible software system. If that sounds kind of like what Open Robotics’ Robot Operating System (ROS) does, well, that shouldn’t be much of a surprise. Here are some highlights from the press release:

  • Includes a graphical process builder that removes the need for extensive programming experience
  • Behavior trees make it easy to orchestrate complex process flows, authored through a flowchart-inspired graphical representation
  • Lay out a workcell and design a process in the same virtual environment, in the cloud or on-premise
  • Simulate and validate solutions in real time (using Gazebo) without touching a single piece of hardware
  • Encode domain knowledge in custom skills that can be used and reused, with basic skills like pose estimation, manipulation, force-based insertion, and path planning available at launch
  • Fully configured development environment provides clear APIs to contribute new skills to the platform
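
To give a sense of what behavior-tree orchestration of a process flow looks like in general, here is a minimal, generic sketch in Python. It is not Flowstate code and does not use Intrinsic's API; the node classes and the pose-estimate/insert/place example are assumptions for illustration only.

# Generic behavior-tree sketch (not Flowstate's API): sequence and fallback
# nodes orchestrate reusable "skills" such as pose estimation and insertion.

from typing import Callable, List

SUCCESS, FAILURE = "success", "failure"


class Sequence:
    """Runs children in order; fails as soon as one child fails."""
    def __init__(self, children: List):
        self.children = children

    def tick(self) -> str:
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS


class Fallback:
    """Tries children in order; succeeds as soon as one child succeeds."""
    def __init__(self, children: List):
        self.children = children

    def tick(self) -> str:
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE


class Skill:
    """Leaf node wrapping a reusable skill (pose estimation, insertion, ...)."""
    def __init__(self, name: str, action: Callable[[], bool]):
        self.name, self.action = name, action

    def tick(self) -> str:
        ok = self.action()
        print(f"{self.name}: {'ok' if ok else 'failed'}")
        return SUCCESS if ok else FAILURE


if __name__ == "__main__":
    tree = Sequence([
        Skill("estimate_pose", lambda: True),
        Fallback([
            Skill("force_based_insertion", lambda: False),  # first attempt fails
            Skill("replan_and_retry", lambda: True),        # fallback succeeds
        ]),
        Skill("place_part", lambda: True),
    ])
    print("workcell process:", tree.tick())

The appeal of this structure is that the same reusable leaf skills can be rearranged graphically without touching their implementations.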

Intrinsic’s Flowstate development environment.Intrinsic

Intrinsic’s industry partner on this for the last several years is Comau, an Italian automation company that you may not have heard of but apparently built the first robotic assembly line in 1979—if a Wikipedia article with a bad citation is to be believed. Anyway, Comau currently does a lot of robotic automation in the automotive industry, so it has been able to help Intrinsic make sure that Flowstate is real-world useful. The company will be showing it off at Automatica, if you happen to find yourself in Munich at the end of June.

For some additional background and context and details and all that good stuff, we had a chat with Wendy Tan White and Brian Gerkey.

Intrinsic is certainly not the first company to work toward making it easier to program and deploy robots. How is your approach different, and why is it going to work?

Wendy Tan White: One of the things that’s really important to make robotics accessible is agnosticism. In robotics, much of the hardware is proprietary and not very interoperable. We’re looking at bridging that. And then there’s also who can actually develop the applications. At the moment, it still takes even an integrator or a developer multiple types of software to actually build an application, or they have to build it from scratch themselves, and if you want to add anything more sophisticated like force feedback or vision, you need a specialist. What we’re looking to do with our product is to encapsulate all of that, so that whether you’re a process engineer or a software developer, you can launch an application much easier and much faster without repeatedly rebuilding the plumbing every time.

Not having to rebuild the plumbing with every new application has been one of the promises of ROS, though. So how is your tool actually solving this problem?

Brian Gerkey: ROS handles the agnosticism quite well—it gives you a lot of the developer tools that you need. What it doesn’t give you is an application building experience that’s approachable, unless you’re already a software engineer. What I said in the early days of ROS was that we want to make it possible for every software developer to build robot applications. And I think we got pretty close. Now, we’re going a step further and saying, actually, you don’t even need to be a programmer, because we can give you this low/no code type of experience where you can still access all of that underlying functionality and build a fairly complex robot application.

And then also, as you know with ROS, it gives you the toolbox, but deploying an application is basically on you: How are you actually going to roll it out? How do you tie it into a cloud system? How do you have simulation be in the loop as part of the iterative development experience, and then the continuous integration and testing experience? So, there’s a lot of room between ROS as it exists today and a fully integrated product that ties all that together.

White: Bluntly, this is going to be our first product release. So you’ll get a sense of all of that from the beginning, but my guess is that it’s not going to complete everybody’s needs through the whole pipeline straight away, although it will satisfy a subset of folks. And from there you’ll see what we’re going to add in.

Brian, is this getting closer to what your vision for making ROS accessible has always been?

Gerkey: There was always this sense that we never had the opportunity to take the platform as it is, as a set of tools, and really finish it. Like, turn up the level of professionalism and polish and really integrate it seamlessly into a product, which is frankly what you would expect out of most modern open source projects. As an independent entity, it was difficult to find the resources necessary to invest in that kind of effort. With Intrinsic, we have the opportunity now to do both things—we have the opportunity to invest more in the underlying core, which we’re doing, and we also get to go beyond that and tie it all together into a unified product vision. I want to be clear, though, that the product that we’re announcing next week will not be that, because in large part it’s a product that’s been built independently over the last several years and has a different heritage. We’ll incrementally bring in more components from the ROS ecosystem into the Intrinsic stack, and there will be things that are developed on the Intrinsic side that we will push back into the ROS community as open source.

White: The intention is very much to converge the Intrinsic platform and ROS over time. And as Brian said, I really hope that a lot of what we develop together will go back into open source.

“We believe in the need for a holistic platform. One that makes it more seamless to use different types of hardware and software together…a platform that will benefit everyone in the robotics and automation industry.” —Wendy Tan White, Intrinsic CEO

What should experienced ROS users be most excited about?

Gerkey: We’re going to provide ROS users an on-ramp to bring their existing ROS-based systems into the Intrinsic systems. What they’ll then be able to do that they can’t do today is, for example, using a Web-native graphical tool, design the process flow for a real-world industrial application. They’ll be able to integrate that with a cloud-hosted simulation that lets them iteratively test what they’re building as they develop it to confirm that it works. They’ll have a way to then run that application on real hardware, using the same interface. They’ll have a pipeline to then deploy that to an edge device. ROS lets you do a lot of that stuff today, but it doesn’t include the unified development experience nor the deployment end of things.

How are you going to convince other companies to work with you on this product?

White: At the beginning, when we spoke to OEMs [original equipment manufacturers] and integrators, they were like, “Hang on a minute, we like our business model, why would we open up our software to you?” But actually, they’re all finding that they can’t meet demand. They need better, more efficient ways to build solutions for their customers. There has been a shift, and now they want things like this.

Gerkey: I’d like to give credit as well to the ROS Industrial Consortium that has spent the last 10 years getting robot OEMs and integrators and customers to work together on common problems. Initially, people thought that there was no way that the robot manufacturers were going to participate: They have their own vertically integrated software solutions, and that’s what they want their customers to use. But in fact, there’s additional value from interoperability with other software ecosystems, and you can sell more robots if they’re more flexible and more usable.

With much of the functionality of your platform being dependent on skills, what is the incentive for people to share new skills that they develop?

White: We do intend to ultimately become a distribution platform. So, what we would expect is if people add skills to the platform, they will get compensated. We’re really creating a demand and supply marketplace, but we’re not starting there—our first product will be the solution builder itself, to prove that the value is there.

Gerkey: We’ve demonstrated that there’s huge potential to get people to share what they’re doing. Everyone has different motivations—could be karma, could be altruism, but sharing the engineering burden is the more rational reason to participate in the open source community. And then on top of all those potential motivations, here we’ve got the opportunity to set up this distribution channel where they can get paid as well.

And what’s the incentive for Intrinsic? How is this a business for you?

White: Initially there will be a developer license. What we’re looking for longer term as applications are built is a fee per application used, and ultimately per robot deployed. We have partners already who are willing to pay for this, so that’s how we know it’s a good place to start.

As we’ve pointed out, this is not the first attempt at making industrial robots easy to program for nonroboticists, nor is it the first attempt at launching a sort of robot app store. Having said that, if anyone can actually make this work, it sure seems like it would be this current combination of Intrinsic and Open Robotics.

If Flowstate seems interesting to you and you want to give it a try, you can apply to join the private beta here.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON, TEXAS, USA
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

Brachiation is the dynamic, coordinated swinging maneuver of the body and arms that is used by monkeys and apes to move between branches. As a unique underactuated mode of locomotion, it is interesting to study from a robotics perspective since it can broaden the deployment scenarios for humanoids and animaloids. While several brachiating robots of varying complexity have been proposed in the past, this paper presents the simplest possible prototype of a brachiation robot, using only a single actuator and unactuated grippers.

[ AcroMonk ]

A team at NASA’s Jet Propulsion Laboratory is creating and testing a snakelike robot called EELS (Exobiology Extant Life Surveyor). Inspired by a desire to descend to vents on Saturn’s icy moon Enceladus and enter the subsurface ocean, this versatile robot is being developed to autonomously map, traverse, and explore previously inaccessible destinations on Earth, the moon, and other worlds in our solar system.

[ JPL ]

Elythor, an EPFL spin-off, has developed a new drone whose wing shape can adapt to wind conditions and flight position in real time, reducing the drone’s energy consumption. What’s more, the position of the wings can change, allowing the drone to fly vertically or horizontally. These features make it a perfect candidate for inspecting power plants.

[ Elythor ] via [ EPFL ]

Can robots keep physical contact while moving fast? Sure they can!

[ Paper ]

Thanks, Maged!

Maciej Stępień writes, “Roomac is a mobile manipulation platform that I was able to build for [US] $550. It consists of a differential drive mobile base and a 5-DoF manipulator with a gripper. Software is based on ROS, as a proof-of-concept application I implemented fetching bottle to the user.”

[ Roomac ]

Thanks, Maciej!

Personally, I think the remake is way better than the original on this one.

[ Box Shop ] via [ Engadget ]

Watch and listen to this video from Agility Robotics on perception.

[ Agility ]

From radiotherapy to joint surgery and hair transplants, our KUKA robot technology is already being used in numerous medical devices.

[ Kuka ]

By combining Gecko’s advanced robotics, AI-powered software platform, and digital twin modeling, the Abu Dhabi National Oil Company (ADNOC) created a smart maintenance program to transform their safety and reliability standards.

[ Gecko Robotics ]

This video presents results on the autonomous exploration of multiple ballast water-tank compartments inside a floating production storage and offloading (FPSO) vessel. To execute the mission, the robot performs simultaneous localization and mapping; represents the world using voxels, thus allowing it to conduct volumetric calculations; plans exploratory paths within each compartment; and detects and localizes the manholes within them.

[ ARL ]

Robots still have a lot to learn from humans when it comes to manipulation.

[ Extend Robotics ]

Helping develop the next generation of engineers and technicians, volunteers from NASA’s Armstrong Flight Research Center, in Edwards, Calif., assisted students competing in the Aerospace Valley Regional Robotics Competition.

If the competition makes no sense to you, that’s normal for FIRST.

[ NASA ]

This video is about Team ORIon’s participation in the 2022 RoboCup held in Bangkok, Thailand. RoboCup is an international robotics competition that promotes research in robotics and AI. Team ORIon, supported by the ORI, the Department of Engineering Science, and private donors, sent nine members to compete in the @Home domestic standard platform league.

[ ORI ]

I would love a robot hiking buddy, especially one that’s better at hiking than I am.

[ RaiLab ]

On this episode of the Robot Brains podcast, Pieter Abbeel interviews Geoff Hinton, the “Godfather of AI.”

[ Robot Brains ]



New materials are urgently needed to make better components used for sustainable energy. Technologies like nuclear fusion and quantum computing need materials that can tolerate high levels of radiation or support quantum computing while being safe, cost-effective, and sustainable. But those materials don’t yet exist, and discovering them is a Herculean task that involves synthesizing and testing large numbers of hypothesized materials.

“The discovered materials are a very tiny fraction of the hypothesized materials—like a droplet of water in an ocean,” wrote MIT professor of nuclear science Mingda Li over email.


One tool researchers are increasingly using to help with this discovery process is the self-driving lab—a laboratory system that combines advanced robotics with machine learning software to run experiments autonomously.

For instance, Lawrence Berkeley Laboratory‘s A-Lab just opened last month and aims to prospect for novel materials that could help to make better solar cells, fuel cells, and thermoelectric technologies. (The lab says the “A” in its name is deliberately ambiguous, variously standing for autonomy, AI, abstracted, and accelerated.)

Another recently minted self-driving lab—named Polybot at Argonne National Laboratory in Lemont, Ill.—has been in business a little longer than A-Lab and, as a result, has climbed the ladder of lab autonomy toward its own material science quests. Polybot consists of chemical analysis equipment, computers running machine learning software, and three robots. There is a synthesis robot that runs chemical reactions, a processing robot that refines the products of reactions, and a robot on wheels with a robotic arm that transports samples between stations. Robots are programmed using Python scripts and perform all manual tasks in an experiment, like loading samples and collecting data.

Data collected from experiments are then sent to the machine learning software for analysis. The software analyzes the results and suggests changes for the next set of experiments, such as adjusting the temperature, quantity of reagents, or length of reactions. The ability to carry out all this without human intervention makes a self-driving lab a “closed-loop” system, which Polybot achieved last June.
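
As an illustration of what such a closed loop looks like in code, here is a minimal sketch: a stand-in "experiment" is run, the result is scored, and a simple optimizer proposes the next set of parameters. The hill-climbing update and the specific parameters (temperature, reagent volume) are assumptions for illustration; Polybot's actual machine learning software is far more sophisticated.

# Minimal closed-loop sketch of a self-driving lab: run an experiment,
# score the result, and let a simple optimizer pick the next parameters.
# The random hill-climbing "planner" stands in for the lab's ML software.

import random

random.seed(0)


def run_experiment(temperature_c: float, reagent_ml: float) -> float:
    """Stand-in for the robotic synthesis and analysis step. Returns a quality
    score whose (hidden) optimum is at 80 C and 5 mL, plus measurement noise."""
    noise = random.gauss(0, 0.02)
    return 1.0 - 0.001 * (temperature_c - 80) ** 2 - 0.01 * (reagent_ml - 5) ** 2 + noise


def propose_next(best_params: dict) -> dict:
    """Perturb the best recipe seen so far. A crude stand-in for the machine
    learning software, which would use a learned model to pick the next recipe."""
    return {
        "temperature_c": best_params["temperature_c"] + random.uniform(-5, 5),
        "reagent_ml": max(0.1, best_params["reagent_ml"] + random.uniform(-1, 1)),
    }


if __name__ == "__main__":
    params = {"temperature_c": 60.0, "reagent_ml": 2.0}
    best_params, best_score = dict(params), float("-inf")
    for _ in range(20):                          # 20 closed-loop iterations
        score = run_experiment(**params)         # robots run and analyze
        if score > best_score:
            best_score, best_params = score, dict(params)
        params = propose_next(best_params)       # software proposes the next recipe
    print(f"best score {best_score:.3f} at {best_params}")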

Argonne scientist Jie Xu, who started planning Polybot in 2019, said she wants the self-driving lab to function as a resource that’s “universally applicable and reconfigurable,” so researchers of all stripes can take advantage of it. Xu and fellow Argonne scientists have used Polybot to research electronic polymers, which are plastics that can conduct electricity. The hope is to create polymers that can make better and more sustainable versions of technologies we use today, like solar cells and biosensors.

Xu estimates that they would have to attempt a half million different experiments before they exhausted all possible ways of synthesizing their target electronic polymer. It’s impossible for a self-driving lab to attempt all of them, let alone for human researchers who can only generate about ten molecules in two years, Xu said.

Self-driving labs help to speed up the process of synthesizing new materials from two directions, she said. One is by using robotics to perform the synthesis and analysis of hypothesized materials faster than humans can, because robots can run continuously. The other way is by using machine learning to prioritize which parameters to adjust that would most likely yield a better result during the next experiment. Good prioritization is important, Xu said, because the sheer number of adjustable experimental parameters—such as temperature and quantity of reagents—can be daunting.


Self-driving labs also offer the advantage of generating large amounts of experimental data. That data is valuable because machine learning algorithms need to be trained on a lot of data to produce useful results. A single lab isn’t capable of generating that magnitude of data on its own, so some labs have started to pool their data with that of other researchers.

LBL’s A-Lab also regularly contributes data to the Materials Project, which aggregates data from materials science researchers around the world. Milad Abolhasani, whose lab at North Carolina State University studies self-driving labs, said expanding open-access data sharing is important for self-driving labs to succeed. But sharing data effectively will require standardization of how data from labs are formatted and reported.

Abolhasani estimates that there are only a handful of true self-driving labs around the world—labs able to run continuously without human intervention and without frequent breakdowns. That number may soon increase, he said, because every national lab in the U.S. is building one.

But there are still significant barriers to entry. Specialized robots and lab environments are expensive, and it takes years to build the required infrastructure and integrate robotic systems with existing lab equipment. Every time a new experiment is run, researchers may find that they have to make further customizations to the system.

Henry Chan, Xu’s colleague at Argonne, said they eventually want Polybot’s machine learning capabilities to go beyond just optimizing experiments. He wants to use the system for “discovery”—creating completely new materials, like polymers with new molecular structures.

Discovery is much harder to do, because it requires machine learning algorithms to make decisions about where to proceed from an almost unlimited number of starting points.

“For optimization you can still kind of define the space, but for discovery the space is infinite,” said Chan. “Because you can have different structures, different compositions, different ways of processing.”

But results at A-Lab suggest it may be possible. When the lab opened earlier this year, researchers tried synthesizing completely new materials by running their machine learning algorithms on data from the Materials Project database. The self-driving lab performed better than expected, yielding promising results 70 percent of the time.

“We had expected at best a success rate of something like 30 percent,” wrote A-Lab’s principal investigator Gerd Ceder.



In industry, it’s pretty common for robots to be sold as a service. That is, rather than buying the robot and managing it yourself, you instead rent the service that the robot provides. This service includes renting the robot itself, but it also includes stuff like installation, maintenance, repairs, tech support, software updates, and all of that good stuff that you’d otherwise have to worry about separately. This is a good model for companies that want a robot to do useful things for them without having to go through the agony of becoming a little bit of a robotics company themselves.

Typically this robotics-as-a-service (RaaS) model is not something that we see with domestic robots—the closest that we get is some kind of additional subscription for specific features. But a startup called Matician is bringing this idea to home floor-cleaning robots by launching Matic, a combination vacuuming and mopping robot that can be yours for an all-inclusive $125 a month.

Matician

Like some other robot vacuums we could mention, Matic uses vision to build a map of your home and navigate around looking for messes, but it does create an interesting 3D map as it does so, and provides a live rendering of where the robot is on that map. I’m not sure how necessary it is in day-to-day use, but it’s pretty cool. The cameras also mean that Matic is good at both detecting messes that need to be cleaned, as well as not running into or over anything important as it cleans those messes.

A small water reservoir allows Matic to wet mop or vacuum as it sees fit, and all waste (wet and dry) ends up in a removable bag inside of the robot. The bag is preloaded with an absorbent polymer so that anything wet that ends up in the bag turns into a gel rather than making things more gross than they would be otherwise. Matic can autonomously roam around your house looking for messes, or it can respond to voice commands and (cleverly) gestures for targeted cleaning on demand.

One unique-ish feature of Matic is that almost everything is done on-robot. This includes map building and image recognition, meaning that no video or audio ever leaves your home. In fact, you don’t even need to connect the robot to the Internet if you’re not intending to control it remotely—the app works locally, over your WiFi.

Matician

If this is something that you’re interested in, you’ll probably want to run the math on it before you commit to anything, because you can’t buy the robot outright; it’s subscription only. $125 a month is $1,500 per year—that’s $600 more than the most expensive Roomba, which, once you buy it, is yours forever. This is not to say that the Roomba is a better vacuum than the Matic, and we don’t know how much the Matic actually costs to produce: one of the benefits of a service model is that it can be used to support much more expensive hardware. But of course, no matter how expensive the hardware is or how many consumables are included, over time a service model is almost certain to cost you more.
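
For anyone who does want to run that math, the comparison is straightforward. This just uses the article's own numbers, with the $900 Roomba price implied by the $1,500 minus $600 figure and consumables ignored for simplicity:

# Quick cost comparison implied by the article: a $125/month subscription vs.
# a $900 one-time purchase (consumables and repairs ignored for simplicity).

MATIC_MONTHLY = 125
ROOMBA_PRICE = 1500 - 600   # the article's "most expensive Roomba" = $900

for months in (6, 12, 24, 36):
    matic_total = MATIC_MONTHLY * months
    print(f"{months:2d} months: Matic ${matic_total:,} vs Roomba ${ROOMBA_PRICE:,} "
          f"(difference ${matic_total - ROOMBA_PRICE:,})")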

This additional cost is not just you throwing money into the void, though. Here’s how Matician justifies it:

Matic is designed as a service to keep your floors clean, all the time. The membership for this service ensures:

  1. You always have the best.
    • As we build and release new versions of Matic hardware, we will upgrade yours at no additional cost. As a member, you will always have the latest and best version of hardware available.
    • We will frequently update Matic’s software with new features and capabilities. Cleaning becomes even more effortless over time.
    • Free repairs and maintenance are included, with no questions asked.
  2. You are in control.
    • If Matic doesn’t provide continuous value to you, you may cancel and return the robot for refund of unused months.
    • If you have feedback or suggestions on new features or capabilities, we will do our best to implement them.
  3. You don’t have to think about it.
    • Refills on Matic’s consumables are automatic and included. We’ll send new HEPA bags, brush rolls, and mop rolls as you use them. You always have what you need but don’t have to worry about tracking inventory or placing frequent orders.
  4. We can continue to improve Matic.
    • Your membership ensures that we will never stop iterating this product and delivering new value to you. Our goal is for you to love Matic more and more each month.

Of this, I feel like the first point is probably the most relevant one, but it depends entirely on how Matician operates in the future. If the company actually iterates on the hardware and software a couple of times a year or whatever and keeps sending you new and better robots for free, then a service model might be something that you can feel good about. And if worrying about maintaining a robot or fixing a robot or ordering consumables for a robot stresses you out, then maybe it’s worth paying a little extra for that peace of mind.

The first release of Matic will begin shipping in early fall of 2023, but you can reserve one now for a fully refundable deposit of $125.



Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Happy May the 4th from KIMLAB!

[ KIMLAB ]

The Star Wars universe does have a thing for gold legged robots.

[ Boston Dynamics ]

SkyMul uses robots for rebar tying, which is an important job in construction that is deeply unpleasant (and sometimes damaging) for humans to do. But, since it’s also fairly structured, it’s a potentially robot-friendly task, whether it’s done with a drone or a quadruped.

[ SkyMul ]

Thanks, Eohan!

Indoor Capture is the newest Skydio 3D Scan mode that is specifically tailored for autonomously scanning large, complex, indoor environments. With Skydio drones, a single operator can now easily scan even the largest and most complex facilities that may have been too difficult to manage in the past. Capture indoor and outdoor facilities with a single tool, then use that data to create high-quality digital replicas. The main advantage of Indoor Capture is that it can capture hard-to-reach areas, particularly those that are high up.

[ Skydio ]

A small, lost, and adorable alien robot has crash landed on our planet. Many of its subsystems were damaged and it is seeking help! Luckily, our team of brilliant engineers stumbled upon the stranded robot and moved it to the repair bay. With their expertise, they’ve managed to repair the damaged subsystems and get the robot back up and running. The lost robot begins communicating and we learn about its home origins, its self-awareness of its own sensors, hopes to return back to its home planet, fear of cliffs and dangerous hazards, as well as its intelligent interpretation of our human language.

[ Clearpath ]

A2Z Drone Delivery, Inc, developer of commercial drone delivery solutions, has just announced the release of its latest flagship commercial delivery drone, the A2Z Drone Delivery RDSX Pelican. With the RDSX Pelican’s new hybrid VTOL design, A2Z has extended the range and payload capacity to handle up to 5 kg payloads on up to 40 km routes.

[ A2Z ]

Don’t let that clear blue sky fool you—the ground in Oregon in the spring is like Jell-O, which makes walking a challenge.

[ Agility Robotics ]

Oh My DOT is a noodle-soup specialty shop where you can combine 10 unique soup bases, called “Soup DOT,” with 3 types of noodles to find your favorite flavor. The cooking robot “N-Robo” makes this kind of menu customization possible.

[ Impress ]

Sanctuary AI is on a mission to create the world’s first human-like intelligence in general-purpose robots that will help us work more safely, efficiently, and sustainably, helping to address the labor challenges facing many organizations today.

[ Sanctuary AI ]

At a robotics festival in Portugal, one of the best robot soccer teams in the world, Tech United Eindhoven, takes on some moderately experienced humans as well as a local girls’ youth team, ending with some brutal penalties.

[ Tech United Eindhoven ]

This work presents the mechanical design and control of a novel small-size and lightweight Micro Aerial Vehicle (MAV) for aerial manipulation. To our knowledge, with a total take-off mass of only 2.0 kg, the proposed system is the most lightweight Aerial Manipulator (AM) with 8 independently controllable degrees of freedom.

[ ASL ]

Learn how Amazon’s Fulfillment Technologies & Robotics team uses STEM to find and ship your orders.

[ Amazon Robotics ]

Mortician’s first fight at the 2023 RoboGames event, up against the tough wedge robot Who’s Your Daddy Now? Did all of the work pay off, or was a solidly built wedge going to be too much for the new design?

[ Hardcore Robotics ]

This short film documents some of the most innovative projects that emerged from the work of NCCR Robotics, the Swiss-wide consortium coordinated from 2010 to 2022 by EPFL professor Dario Floreano and ETHZ professor Robert Riener, including other major research institutions across Switzerland. Shot over the course of six months in Lausanne, Geneva, Zurich, Wangen an der Aare, Leysin, Lugano, the documentary is a unique look at the state of the art of medical, educational and rescue robotics, and at the specific contributions that Swiss researchers have given to the field over the last decade.

[ NCCR ]

Lex Fridman interviews Boston Dynamics CEO Robert Playter.

[ Lex Fridman ]

AI is rapidly changing the speed and breadth of scientific discovery. In this discussion, Demis Hassabis, co-founder and CEO of DeepMind Technologies, shares his company’s efforts in this space, followed by a conversation with Fei-Fei Li, Denning co-director of the Stanford Institute for Human-Centered Artificial Intelligence, on the future of AI.

[ Stanford HAI ]



The space technology industry is defying gravity both literally and figuratively — driving innovation that pushes into new frontiers.

Keysight’s new in-depth survey of the latest trends in this fast-paced industry can help you identify:

  • Top satellite applications driving market growth
  • Top enabling technologies, technology trends, and novel solutions
  • What space technology leaders consider to be the top challenges and applications with the greatest future impact

The report provides industry analysis that adds essential context to the survey findings.

Register now to download this free whitepaper.



In terms of human features that robots are probably the most jealous of, fingers have to be right up there with eyeballs and brains. Our fleshy little digits have a crazy amount of dexterity relative to their size, and so many sensors packed into them that you can manipulate complex objects sight unseen. Obviously, these are capabilities that would be really nice to have in a robot, especially if we want them to be useful outside of factories and warehouses.

There are two parts to this problem: The first is having fingers that can perform like human fingers (or as close to human fingers as is reasonable to expect); the second is having the intelligence necessary to do something useful with those fingers.

“Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand.”
–Matei Ciocarlie, Columbia University

In a paper just accepted to the Robotics: Science and Systems 2023 conference, researchers from Columbia University have shown how to train robotic fingers to perform dexterous in-hand manipulation of complex objects without dropping them. What’s more, the manipulation is done entirely by touch—no vision required.

Robotic fingers manipulate random objects, a level of dexterity humans master by the time they’re toddlers.Columbia University

Those slightly chunky fingers have a lot going on inside of them to help make this kind of manipulation possible. Underneath the skin of each finger is a flexible reflective membrane, and under that membrane is an array of LEDs along with an array of photodiodes. Each LED is cycled on and off for a fraction of a millisecond, and the photodiodes record how the light from each LED reflects off of the inner membrane of the finger. The pattern of that reflection changes when the membrane flexes, which is what happens if the finger is contacting something. A trained model can correlate that light pattern with the location and amplitude of finger contacts.
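
Here is a minimal sketch of the kind of processing described above: cycle each LED, record every photodiode, stack the readings into one feature vector, and feed a trained model that predicts contact location and force. The array sizes and the linear regressor are illustrative assumptions, not the Columbia finger's actual hardware or code.

# Illustrative sketch of optical tactile sensing: for each LED, record the
# response of every photodiode, flatten into a feature vector, and map it to
# contact location/force with a trained regressor. Array sizes and the
# linear model are assumptions, not the Columbia finger's actual pipeline.

import numpy as np

N_LEDS, N_PHOTODIODES = 30, 30


def read_photodiodes(led_index: int) -> np.ndarray:
    """Stand-in for hardware: returns one reading per photodiode while
    only the given LED is lit."""
    rng = np.random.default_rng(led_index)
    return rng.normal(size=N_PHOTODIODES)


def capture_frame() -> np.ndarray:
    """Cycle every LED once and stack the photodiode readings."""
    frame = [read_photodiodes(i) for i in range(N_LEDS)]
    return np.concatenate(frame)             # shape: (N_LEDS * N_PHOTODIODES,)


class ContactModel:
    """Linear map from light pattern to (x, y, force); a real system would
    train a neural network on labeled contact data."""
    def __init__(self, rng):
        self.weights = rng.normal(size=(N_LEDS * N_PHOTODIODES, 3)) * 0.01

    def predict(self, frame: np.ndarray) -> np.ndarray:
        return frame @ self.weights           # [x_mm, y_mm, force_N]


if __name__ == "__main__":
    model = ContactModel(np.random.default_rng(42))
    x, y, force = model.predict(capture_frame())
    print(f"estimated contact at ({x:.1f}, {y:.1f}) mm, force {force:.2f} N")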

So now that you have fingers that know what they’re touching, they also need to know how to touch something in order to manipulate it the way you want them to without dropping it. There are some objects that are robot-friendly when it comes to manipulation, and some that are robot-hostile, like objects with complex shapes and concavities (L or U shapes, for example). And with a limited number of fingers, doing in-hand manipulation is often at odds with making sure that the object remains in a stable grip. This is a skill called “finger gaiting,” and it takes practice. Or, in this case, it takes reinforcement learning (which, I guess, is arguably the same thing). The trick that the researchers use is to combine sampling-based methods (which find trajectories between known start and end states) with reinforcement learning to develop a control policy trained on the entire state space.
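
The combination described above can be sketched roughly as follows: a sampling-based planner (here a toy random tree) explores the state space between grasp configurations, and the states it visits are reused as reset states for reinforcement-learning rollouts, so the policy gets trained across the whole explored space. This is a schematic of the general idea, not the authors' implementation.

# Schematic of sampling-based exploration feeding reinforcement learning:
# a toy random tree explores the state space, and its visited states are
# reused as reset states for RL rollouts. Not the RSS 2023 paper's code.

import random

random.seed(1)
STATE_DIM = 4


def random_tree_states(start, n_nodes=200, step=0.1):
    """Toy RRT-style exploration: grow a set of reachable states by taking
    small random steps from previously visited states."""
    states = [list(start)]
    for _ in range(n_nodes):
        parent = random.choice(states)
        child = [x + random.uniform(-step, step) for x in parent]
        states.append(child)
    return states


def rl_training(reset_states, episodes=50):
    """Placeholder RL loop: each episode starts from a state sampled from the
    planner's tree, so the policy is trained over the whole explored space."""
    returns = []
    for _ in range(episodes):
        state = random.choice(reset_states)           # reset to a sampled state
        # ... run the policy, collect reward, update parameters ...
        returns.append(-sum(abs(x) for x in state))   # dummy return signal
    return sum(returns) / len(returns)


if __name__ == "__main__":
    tree = random_tree_states(start=[0.0] * STATE_DIM)
    print("explored states:", len(tree))
    print("mean dummy return:", round(rl_training(tree), 3))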

While this method works well, the whole nonvision thing is somewhat of an artificial constraint. This isn’t to say that the ability to manipulate objects in darkness or clutter isn’t super important, it’s just that there’s even more potential with vision, says Columbia’s Matei Ciocarlie: “Once we also add visual feedback into the mix along with touch, we hope to be able to achieve even more dexterity, and one day start approaching the replication of the human hand.”

“Sampling-based Exploration for Reinforcement Learning of Dexterous Manipulation,” by Gagan Khandate, Siqi Shang, Eric T. Chang, Tristan Luca Saidi, Johnson Adams, and Matei Ciocarlie from Columbia University, is accepted to RSS 2023.



The parent company of robot vacuum maker Neato Robotics, Germany’s Vorwerk Group, announced late last week that a broad restructuring of its robotics division will result in the closure of Neato and the end of the Neato product line. Vorwerk is promising five years of parts and service availability, along with enough software support to keep current cloud services operational and secure, but it’s the end of an era for a seriously cool family of home robots that were (for a while) ahead of their time.

I’ve had a soft spot for Neato Robotics ever since I met their first robot at CES 2010. At the time, iRobot was dominant in the robot vacuum market (and still is), and Neato sought to challenge that by making a robot that could affordably do what Roombas could not: Map their environment to ensure that they could reliably vacuum your floor in one efficient pass. Back then, Roombas would pseudo-randomly bounce around, relying on a handful of behaviors and a long cleaning time to hit every part of your floor an average of three times. This got things clean, but it looked a little inept, which was a bigger deal than it probably should have been.

Neato’s approach involved using an actual 360-degree lidar to detect walls and furniture, generating remarkably accurate maps as part of its cleaning process. Putting a lidar onto a consumer robot for US $400 (that’s $400 in 2010, but still) was quite an achievement—the lidar hardware itself only cost about $25, and you can still buy affordable lidars based on the same operating principle. In addition to getting a detailed (and eventually interactive) map of where your robot cleaned, the lidar also allowed the Neato to get right up against walls and into corners, making use of its unique (for a while, at least) “D” shape.

The Neato XV-11 in 2010.Evan Ackerman

In May of 2010, back when I was still writing for BotJunkie.com, I got what I’m pretty sure was the first ever Neato XV-11 review unit—I had to pick it up in person from Neato’s headquarters and promise to return it 24 hours later. It was actually super impressive to see the Neato vacuum back and forth in straight lines, and it held its own against a Roomba 560 in a butter knife fight.

Since then, Neato Robotics has been improving its robots, but with the introduction of the iRobot Roomba 980 with VSLAM-based mapping in 2015, Neato lost a major differentiator. And over the last five years or so, the robot vacuum market has become saturated with low-cost competition, which tends not to perform nearly as reliably but does still get floors cleaner than they might be otherwise. In 2017, Neato was acquired by Vorwerk Group, a multibillion-dollar company that sells (among other things) the Thermomix, a magical kitchen appliance that is apparently huge in Europe but barely exists in the United States. Anyway, Vorwerk has decided that Neato Robotics has not been successful enough, and it’s sadly being restructured out of existence:

The consolidation of Vorwerk also affects the stake in the US company Neato Robotics, which has been 100 percent owned by the Vorwerk Group since 2017. Neato has brought valuable experience and innovations to Vorwerk’s product development in the field of cleaning robots over the past few years. However, Neato’s independent sales in e-commerce and brick-and-mortar retail with a focus on the USA has not been able to be successfully developed, so that the company has not achieved the economic goals it has set itself for several years.

As part of the consolidation, Neato will now be closed despite restructuring efforts, affecting 98 employees worldwide. A 14-strong team in Milan will be taken over by Vorwerk to ensure the security of the infrastructure for Neato’s cloud services for at least five years. The availability of spare and consumable parts and service for necessary repairs are also guaranteed for at least five years.

That last bit is welcome news for all current Neato owners I guess, but still, this is an abrupt and somewhat disappointing end to a company that did some seriously cool work on useful, affordable, and innovative home robots. Neato, you had a weird name, but I loved your robots and you will be missed.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Robotics Summit & Expo: 10–11 May 2023, BOSTON
ICRA 2023: 29 May–2 June 2023, LONDON
Energy Drone & Robotics Summit: 10–12 June 2023, HOUSTON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Looking to give robots a more nimble, human-like touch, MIT engineers have now developed a gripper that grasps by reflex. Rather than start from scratch after a failed attempt, the team’s robot adapts in the moment to reflexively roll, palm, or pinch an object to get a better hold.

[ MIT ]

Roboticists at the Max Planck Institute for Intelligent Systems in Stuttgart have developed a jellyfish-inspired underwater robot with which they hope one day to collect waste from the bottom of the ocean. The almost noise-free prototype can trap objects underneath its body without physical contact, thereby enabling safe interactions in delicate environments such as coral reefs. Jellyfish-Bot could become an important tool for environmental remediation.

[ Max Planck Institute ]

Excited to share our latest collaborative work on humanoid robot behaviors with Draco 3. We look forward to a day when these robots can help us at home and at work, performing dull and time-consuming tasks!

[ UT HCRL ]

This research focuses on the design of a novel hybrid gripper that enables versatile grasping and throwing manipulation with a single actuator. The gripper comprises a unique latching mechanism that drives two passive rigid fingers by elongating or releasing a coupled elastic strip. This arrangement provides the dual function of adapting to objects with different geometries and surface contact-force characteristics, and of storing energy in the form of elastic potential. The latching mechanism can swiftly shift from a quick release to a gradual release of the stored elastic potential, enabling high object acceleration during throwing and no acceleration while placing. As a result, an object can be delivered to a desired location even beyond the manipulator’s reachable workspace.
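To see why a quick release can deliver an object beyond the arm’s reach, here is a back-of-the-envelope sketch that treats the elastic strip as an ideal spring: the energy held by the latch becomes the object’s kinetic energy at release, and the launch speed sets the throw distance. All of the numbers below are assumptions for illustration, not values from the paper.

import math

# Rough, hypothetical numbers (not from the paper) showing the energy budget
# of an elastic-strip thrower: stored elastic energy -> launch speed -> range.
k = 800.0   # effective stiffness of the elastic strip, N/m (assumed)
x = 0.06    # elongation held by the latch, m (assumed)
m = 0.05    # object mass, kg (assumed)
g = 9.81    # gravitational acceleration, m/s^2

energy = 0.5 * k * x**2          # elastic potential stored by the latch, J
v = math.sqrt(2.0 * energy / m)  # launch speed if all energy goes to the object
range_45 = v**2 / g              # ideal projectile range at a 45-degree launch

print(f"{energy:.2f} J stored -> {v:.1f} m/s launch -> {range_45:.1f} m range")
# About 1.4 J -> 7.6 m/s -> 5.9 m, well beyond a typical arm's reach, whereas
# a gradual release of the same strip imparts essentially no launch speed.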

[ Paper ]

Thanks, Nagamanikandan!

Animals (or at least, many animals) are squishy for a reason: it helps to manage safe environmental contact. Let’s make all robots squishy!

[ Paper ]

Thanks, Pham!

This short video shows an actuator from Ed Habtour at the University of Washington, modeled after the vertebrae of sea birds and snakes.

[ UW ]

Thanks, Sarah!

This video presents results on autonomous exploration and visual inspection of a ballast tank inside an FPSO vessel. Specifically, RMF, a collision-tolerant aerial robot implementing multimodal SLAM and path-planning functionality, is deployed inside the ballast tanks of the vessel and performs autonomous inspection of three tank compartments without any prior knowledge of the environment other than a rough estimate of the geometric midpoint of each compartment. Such information is readily available and does not require hard-to-obtain CAD models of the ship. The mission takes less than 4 minutes and delivers both a geometric map of those compartments and their visual inspection with certain resolution guarantees.

[ ARL ]

A team from Los Alamos National Laboratory recently went to the Haughton Impact Crater on Devon Island, Canada, the largest uninhabited island in the world. Nina Lanza and her team tested autonomous drones in the frigid, Mars-like environment.

[ LANL ]

OK, once urban delivery drones can do this, maybe I’ll pay more attention to them.

[ HKUST ]

Founded in 2014, Verity delivers fully autonomous indoor drone systems that are trusted in environments where failure is not an option. Based in Zurich, Switzerland, with global operations, Verity’s system is used to complete thousands of fully autonomous inventory checks every day in warehouses everywhere.

[ Verity ]

In this video you will learn about the ACFR marine group and some of the research projects they are currently working on.

[ ACFR ]

I am including this video because growing tea is beautiful.

[ SUIND ]

In this video, we showcase a Husky-based robot equipped with a Franka Research 3 robotic arm. The Franka Research 3 by Franka Emika is a world-class, force-sensitive robot system that gives researchers easy-to-use robot features as well as low-level access to the robot’s control and learning capabilities. The robot is also outfitted with Clearpath’s IndoorNav Autonomy Software, which enables robust point-to-point autonomous navigation of mobile robots.

[ Clearpath ]

This Tartan Planning Series talk is from Sebastian Scherer, on “Informative Path Planning, Exploration, and Intent Prediction.”

[ Air Lab ]

This Stanford HAI Seminar is from Oussama Khatib, on “From Romeo and Juliet to OceanOneK: Deep-Sea Robotic Exploration.”

[ Stanford HAI ]



Without a lifetime of experience to build on like humans have (and totally take for granted), robots that want to learn a new skill often have to start from scratch. Reinforcement learning is a technique that lets robots learn new skills through trial and error, but it takes a lot of time, especially when learning end-to-end vision-based control policies, because the real world is a weirdly lit, friction-filled, obstacle-y mess that robots can’t understand without a frequently impractical amount of effort.

Roboticists at UC Berkeley have vastly sped up this process by doing the same kind of cheating that humans do—instead of starting from scratch, you start with some previous experience that helps get you going. By leveraging a “foundation model” that was pre-trained on robots driving themselves around, the researchers were able to get a small-scale robotic rally car to teach itself to race around indoor and outdoor tracks, matching human performance after just 20 minutes of practice.

That first pre-training stage happens at your leisure, by manually driving a robot (that isn’t necessarily the robot that will be doing the task that you care about) around different environments. The goal of doing this isn’t to teach the robot to drive fast around a course, but instead to teach it the basics of not running into stuff.

With that pre-trained “foundation model” in place, when you then move over to the little robotic rally car, it no longer has to start from scratch. Instead, you can plop it onto the course you want it to learn, drive it around once slowly to show it where you want it to go, and then let it go fully autonomous, training itself to drive faster and faster. With a low-resolution, front-facing camera and some basic state estimation, the robot attempts to reach the next checkpoint on the course as quickly as possible, leading to some interesting emergent behaviors:

The system learns the concept of a “racing line,” finding a smooth path through the lap and maximizing its speed through tight corners and chicanes. The robot learns to carry its speed into the apex, then brakes sharply to turn and accelerates out of the corner, to minimize the driving duration. With a low-friction surface, the policy learns to over-steer slightly when turning, drifting into the corner to achieve fast rotation without braking during the turn. In outdoor environments, the learned policy is also able to distinguish ground characteristics, preferring smooth, high-traction areas on and around concrete paths over areas with tall grass that impedes the robot’s motion.
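Roughly speaking, the learning signal behind all of this can be summarized as “make progress toward the next checkpoint as fast as possible.” Here is a deliberately simplified sketch of that idea; it is a paraphrase of the setup described above, not the authors’ actual reward function, and every constant in it is an assumption.

import math

# Simplified checkpoint-chasing reward, in the spirit of the setup described
# above. The real system learns from camera images and a more carefully
# designed objective; this only illustrates rewarding per-step progress
# toward the next checkpoint.
checkpoints = [(2.0, 0.0), (2.0, 2.0), (0.0, 2.0), (0.0, 0.0)]  # one demo lap
REACHED_RADIUS = 0.3  # meters; counts as "at the checkpoint" (assumed value)

def step_reward(prev_pos, pos, ckpt_idx):
    """Reward = reduction in distance to the current checkpoint this step."""
    cx, cy = checkpoints[ckpt_idx]
    d_prev = math.hypot(prev_pos[0] - cx, prev_pos[1] - cy)
    d_now = math.hypot(pos[0] - cx, pos[1] - cy)
    reward = d_prev - d_now              # positive when the car closes the gap
    if d_now < REACHED_RADIUS:           # advance to the next checkpoint
        ckpt_idx = (ckpt_idx + 1) % len(checkpoints)
    return reward, ckpt_idx

A faster policy closes more distance per step, so it collects more reward per unit time, which is what nudges the car toward racing-line behavior.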

The other clever bit here is the reset feature, which is necessary for real-world training. When training in simulation, it’s super easy to reset a robot that fails, but outside of simulation, a failure can (by definition) end the training if the robot gets itself stuck somehow. That’s not a big deal if you want to spend all your time minding the robot while it learns, but if you have something better to do, the robot needs to be able to train autonomously from start to finish. In this case, if the robot hasn’t moved at least 0.5 meters in the previous three seconds, it knows that it’s stuck, and it executes a simple recovery behavior of turning randomly, backing up, and then trying to drive forward again, which gets it unstuck eventually.
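Concretely, that stuck-and-recover logic boils down to something like the sketch below. The 0.5-meter and three-second thresholds come from the description above; the control rate and the robot.turn and robot.drive calls are hypothetical placeholders, not a real robot API.

import math
import random
from collections import deque

STUCK_DIST_M = 0.5   # movement threshold from the description above
WINDOW_S = 3.0       # look-back window, seconds
CONTROL_HZ = 10      # assumed control rate
history = deque(maxlen=int(WINDOW_S * CONTROL_HZ))  # recent (x, y) positions

def is_stuck(pos):
    """True if the robot has moved less than 0.5 m over the last 3 seconds."""
    history.append(pos)
    if len(history) < history.maxlen:
        return False  # not enough history yet
    (x0, y0), (x1, y1) = history[0], history[-1]
    return math.hypot(x1 - x0, y1 - y0) < STUCK_DIST_M

def recover(robot):
    """Scripted escape: random turn, back up, then try driving forward again."""
    robot.turn(random.uniform(-1.0, 1.0))  # placeholder call, not a real API
    robot.drive(speed=-0.5, duration=1.0)  # back up
    robot.drive(speed=0.5, duration=1.0)   # try driving forward again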

During indoor and outdoor experiments, the robot was able to learn aggressive driving comparable to a human expert after just 20 minutes of autonomous practice, which the researchers say “provides strong validation that deep reinforcement learning can indeed be a viable tool for learning real-world policies even from raw images, when combined with appropriate pre-training and implemented in the context of an autonomous training framework.” It’s going to take a lot more work to implement this sort of thing safely on a larger platform, but this little car is taking the first few laps in the right direction just as quickly as it possibly can.

“FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing,” by Kyle Stachowicz, Arjun Bhorkar, Dhruv Shah, Ilya Kostrikov, and Sergey Levine from UC Berkeley, is available on arXiv.
