Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Humans are social creatures and learn from each other, even from a young age. Infants keenly observe their parents, siblings, or caregivers. They watch, imitate, and replay what they see to learn skills and behaviors.

The way babies learn and explore their surroundings inspired researchers at Carnegie Mellon University and Meta AI to develop a new way to teach robots how to simultaneously learn multiple skills and leverage them to tackle unseen, everyday tasks. The researchers set out to develop a robotic AI agent with manipulation abilities equivalent to a 3-year-old child.

[ CMU ]

You’ll never be able to justify using a disposable coffee cup again thanks to a robot that does all the dishes for you.

[ Dino Robotics ]

While filming our new robot, this lovely curious cat became interested in the robot and drone. After a while, it started approaching the robot, and the following interaction ensued.

[ Zarrouk Lab ]

Robots are 100 percent more capable in slow motion with music.

[ MIT ]

Legged robots are heading to Mars!

[ JPL ]

I’m not sure how practical this is, but it’s sure fascinating to watch.

[ Somatic ]

Watch until the end, which is mildly NSFW.

Fun experiment aiming to gather data [for modeling] autonomous vehicles’ motion on ice. This is related to our vehicle dynamics work led by Dominic Baril, a Ph.D. student in our lab! Stay tuned for... paper preprint!

[ Norlab ]

Nauticus Robotics is working on something new.

[ Nauticus Robotics ]

The UBTECH humanoid robot Walker X can be used for smart SPS component sorting and intelligent aging testing in automated factory settings, which is another innovative step forward in the exploration of the commercial applications of humanoid robots.

With floors like those, why the heck wouldn’t you be using wheels, though?

[ UBTECH ]

BASF collaborates with ANYbotics to evaluate the potential of automated condition monitoring and digital documentation of operational data at their facilities. ANYmal X demonstrates its capabilities for extending robotic inspection into Ex-environments (Zone 1) that haven’t been accessible for this technology before.

[ ANYbotics ]

What meal is this robot kitting? My guess is some little tortillas, a single cherry tomato, guacamole, and walnuts. Grim.

[ Covariant ]

We present a mobile robot that provides an older adult with a handlebar located anywhere in space—“Handle Anywhere.” The robot consists of an omnidirectional mobile base attached to a repositionable handlebar.

[ MIT ]

The KUKA Innovation Award has been held annually since 2014 and is aimed at developers, graduates, and research teams from universities or companies. For this year’s award, the applicants were asked to use open interfaces in our newly introduced robot operating system iiQKA and to add their own hardware and software components.

Team Fashion & Robotics from the University of Art and Design Linz worked to create a way for small and medium-sized textile companies and designers to increase their production by setting up microfactories with collaborative robot systems, while simultaneously enabling more efficient sorting and finishing processes on an industrial scale.

[ KUKA ]

Dive into a world of cutting-edge innovation and robotic cuteness, as we take a peek into Misty’s unique personality and human expressions. From surprise to excitement, from curiosity to joy, from amusement to sadness–Misty is a canvas to create your very own fun and engaging social interactions!

[ Misty Robotics ]

We are thrilled to launch the ICCV 2023 SLAM Challenge. Navigate through complex and challenging environments with our TartanAir & SubT-MRS datasets, pursuing the robustness of your SLAM algorithms. Let’s redefine sim-to-real transfer together.

[ AirLab ]

Check out how the ZenRobotics Fast Picker was retrofitted into a Grundon MRF [materials recovery facility] in England. The Fast Picker has optimised their waste sorting process to pick higher-value products (HDPE and PET plastic) and increase efficiency.

[ ZenRobotics ]

Ants are highly capable in many behaviors relevant to robotics. Our recent work has focused on bridging the gap to understanding the neural circuits that underlie capacities such as visual orientation, path integration, and the combination of multiple cues. A new direction for this research is to investigate the manipulation capabilities of ants, which allow them to handle a wide diversity of arbitrary, unknown objects with a skill that goes well beyond current robotics.

[ Festo ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

NASA’s Curiosity rover recently made its most challenging climb on Mars. Curiosity faced a steep, slippery slope on its journey up Mount Sharp, so rover drivers had to come up with a creative detour.

[ JPL ]

Wheel knees for ANYmal! We should learn more about this at IROS 2023 this fall.

[ RSL ]

Hard vision and manipulation problem? Solve it by making it less hard!

[ Covariant ]

Oh good, drones are learning to open doors now.

[ ASL ]

If you look closely, you’ll see that Sanctuary’s robot has fingernails, a detail that I always appreciate on robotic hands.

[ Sanctuary AI ]

This summer, the University of Mary Washington (UMW) in Fredericksburg, Va. became the official home for Virginia’s SMART Community STEM Camp. The camp hosted over 30 local high school students for a full week to learn about cybersecurity, e-sports, [and] the drone industry—as well as [participating in] a hands-on flying experience.

[ Skydio ]

O_o

[ Pollen Robotics ]

Agility CEO and Co-Founder Damion Shelton talks with Pras Velagapudi, VP of Innovation and Chief Architect, about the best methods for robot control, comparing reinforcement learning to what we can now do using LLMs.

[ Agility Robotics ]

In this episode of The Robot Brains Podcast, Pieter speaks with John Schulman, co-founder of OpenAI.

[ Robot Brains ]

This week, Geordie Rose (CEO) and Suzanne Gildert (CTO) continue the discussion about their co-authored position paper, now that it has been published. Titled “Building and Testing a General Intelligence Embodied in a Humanoid Robot,” the paper touches on metrics of intelligence, robotics, machine learning, and more. They round off by answering more audience questions.

[ Sanctuary AI ]



With the aid of crystals known as perovskites, solar cells are increasingly breaking records in how well they convert sunlight to electricity. Now a new automated system could make those records fall even faster. North Carolina State University’s RoboMapper can analyze how well perovskites might perform in solar cells, using roughly one-tenth to one-fiftieth the time, cost, and energy of either manual labor or previous robotic platforms, its inventors say.

The most common solar cells use silicon to convert light to electricity. These devices are rapidly approaching their theoretical conversion efficiency limit of 29.4 percent; modern commercial silicon solar cells now reach efficiencies of more than 24 percent, and the best lab cell has an efficiency of 26.8 percent.

One strategy to boost a solar cell’s efficiency is by stacking two different light-absorbing materials together into one device. This tandem method increases the spectrum of sunlight the solar cell can harvest. A common approach with tandem cells is to use a top cell made of perovskites to absorb higher-energy visible light and a bottom cell made of silicon for lower-energy infrared rays. Last year scientists unveiled the first perovskite-silicon tandem solar cells to pass the 30 percent efficiency threshold, and last month another group reported the same milestone.
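As a rough illustration of that split (using typical bandgap values from the literature, not figures reported by the groups mentioned): a cell absorbs only photons more energetic than its bandgap, so each material has a cutoff wavelength.

```latex
\lambda_{c} = \frac{hc}{E_{g}} \approx \frac{1240~\text{nm}\cdot\text{eV}}{E_{g}},
\qquad
\lambda_{c}(\text{perovskite},\ E_{g}\approx 1.7~\text{eV}) \approx 730~\text{nm},
\qquad
\lambda_{c}(\text{silicon},\ E_{g}\approx 1.1~\text{eV}) \approx 1100~\text{nm}.
```

The perovskite top cell thus harvests the visible portion of the spectrum while passing longer wavelengths through to the silicon beneath it.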

Conventional materials research has scientists prepare a sample on a chip and then go through multiple steps to examine it using different instruments. Existing automation efforts “tend to emulate human workflows—we tend to process materials one parameter at a time,” says Aram Amassian, a materials scientist at North Carolina State University, in Raleigh.

However, modern genetic and pharmaceutical analyses often achieve high throughput by placing dozens of samples on each plate and examining them all at once. RoboMapper follows the same strategy, using printing techniques to miniaturize the material samples.

“We’ve benefited a lot from hardware interoperability with biology and chemistry, such as in liquid handling,” Amassian says. However, for RoboMapper, Amassian and his team had to develop new protocols for handling perovskite materials and characterization experiments different from what you’d find in chemistry automation. “One particular development we had to make is to make sure that characterization instruments can handle the high density of materials on a chip with automation. This required a little bit of engineering on both the hardware and software side.”

One key to saving time, energy, material, and money was to shrink the sample size by a factor of 1,000. “The print size is on the order of 50 to 150 [micrometers], while most other tools create samples on the order of centimeters,” Amassian says. “Typically, we print picoliter to nanoliter volumes while other platforms print or coat microliters.”
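A back-of-the-envelope check (our arithmetic, not the team’s) confirms those two figures are consistent: a spherical droplet 50 to 150 micrometers in diameter holds tens of picoliters to a couple of nanoliters.

```latex
V = \frac{\pi}{6}\,d^{3},
\qquad
V(d = 50~\mu\text{m}) \approx 65~\text{pL},
\qquad
V(d = 150~\mu\text{m}) \approx 1.8~\text{nL}.
```

A centimeter-scale coated sample, by contrast, consumes microliters of precursor ink, which is where the roughly thousandfold savings in material comes from.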

Perovskite Properties for Pennies

In the first tests of RoboMapper, the scientists analyzed 150 different perovskite compositions. In all, compared with other robotic platforms, RoboMapper was 12 percent the cost, nine times as fast, and 18 times as energy efficient. Compared with manual labor, it was 2 percent the cost, 14 times as fast, and 26 times as energy efficient.

“We set out to build a robot that can generate large material libraries so that we can build datasets for training AI models in the future,” Amassian says. Such an AI could then predict which perovskite structures will perform best.

The researchers focused on perovskites’ stability, which is a major challenge when it comes to tandem cells. Perovskites tend to degrade when exposed to light, losing the properties that made them desirable in the first place, Amassian explains.

The scientists analyzed perovskite structure, electronic properties, and stability in response to intense light using optical microscopy, microphotoluminescence spectroscopy mapping, and synchrotron-based wide-angle X-ray scattering mapping. This experimental data was then used to develop computational models that identified a specific composition that the researchers predicted would have the best combination of attributes.

“These models are now available for others to use,” Amassian says. He notes they are now in talks with leading tandem solar cell research groups.

Unexpectedly, the scientists found that RoboMapper’s greatest reduction in environmental impact came from improved energy efficiency during testing.

“We and others did not realize this, because electricity used by instruments in the lab is unseen, whereas materials and supplies are tangible,” Amassian says. “RoboMapper was designed in part to address this insidious problem by placing dozens of materials in the same measurement tools and significantly reducing the amount of time it needs to be powered on to collect data. We showed that a tenfold reduction in carbon footprint and other negative environmental impacts can be achieved.”

In the future, “we will continue to search for newer and better perovskites,” Amassian says. “We’re also actively looking at organic solar-cell materials to find compositions that are stable for solar-energy applications. The ability to test dozens of compositions under intense simulated sunlight helps save tremendous time and energy.”

The scientists detailed their findings online 25 July in the journal Matter.



When Marc Raibert founded Boston Dynamics in 1992, he wasn’t even sure it was going to be a robotics company—he thought it might become a modeling and simulation company instead. Now, of course, Boston Dynamics is the authority in legged robots, with its Atlas biped and Spot quadruped. But as the company focuses more on commercializing its technology, Raibert has become more interested in pursuing the long-term vision of what robotics can be.

To that end, Raibert founded the Boston Dynamics AI Institute in August of 2022. Funded by Hyundai (the company also acquired Boston Dynamics in 2020), the Institute’s first few projects will focus on making robots useful outside the lab by teaching them to better understand the world around them.

At the 2023 IEEE International Conference on Robotics and Automation (ICRA) in London this past May, Raibert gave a keynote talk that discussed some of his specific goals, with an emphasis on developing practical, helpful capabilities in robots. For example, Raibert hopes to teach robots to watch humans perform tasks, understand what they’re seeing, and then do it themselves—or know when they don’t understand something, and how to ask questions to fill in those gaps. Another of Raibert’s goals is to teach robots to inspect equipment to figure out whether something is working—and if it’s not, to determine what’s wrong with it and make repairs. Raibert showed concept art at ICRA that included robots working in domestic environments such as kitchens, living rooms, and laundry rooms, as well as industrial settings. “I look forward to having some demos of something like this happening at ICRA 2028 or 2029,” Raibert quipped.

Following his keynote, IEEE Spectrum spoke with Raibert, and he answered five questions about where he wants robotics to go next.

At the Institute, you’re starting to share your vision for the future of robotics more than you did at Boston Dynamics. Why is that?

Marc Raibert: At Boston Dynamics, I don’t think we talked about the vision. We just did the next thing, saw how it went, and then decided what to do after that. I was taught that when you wrote a paper or gave a presentation, you showed what you had accomplished. All that really mattered was the data in your paper. You could talk about what you want to do, but people talk about all kinds of things that way—the future is so cheap, and so variable. That’s not the same as showing what you did. And I took pride in showing what we actually did at Boston Dynamics.

But if you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision. So I’m starting to be a little more comfortable with doing that. Not to mention that at this point, we don’t have any actual results to show.

Right now, robots must be carefully trained to complete specific tasks. But Marc Raibert wants to give robots the ability to watch a human do a task, understand what’s happening, and then do the task themselves, whether it’s in a factory [top left and bottom] or in your home [top right and bottom]. Boston Dynamics AI Institute

The Institute will be putting a lot of effort into how robots can better manipulate objects. What’s the opportunity there?

Raibert: I think that for 50 years, people have been working on manipulation, and it hasn’t progressed enough. I’m not criticizing anybody, but I think that there’s been so much work on path planning, where path planning means how you move through open space. But that’s not where the action is. The action is when you’re in contact with things—we humans basically juggle with our hands when we’re manipulating, and I’ve seen very few things that look like that. It’s going to be hard, but maybe we can make progress on it. One idea is that going from static robot manipulation to dynamic can advance the field the way that going from static to dynamic advanced legged robots.

How are you going to make your vision happen?

Raibert: I don’t know any of the answers for how we’re going to do any of this! That’s the technical fearlessness—or maybe the technical foolishness. My long-term hope for the Institute is that most of the ideas don’t come from me, and that we succeed in hiring the kind of people who can have ideas that lead the field. We’re looking for people who are good at bracketing a problem, doing a quick pass at it (“quick” being maybe a year), seeing what sticks, and then taking another pass at it. And we’ll give them the resources they need to go after problems that way.

“If you’re going to make the Bell Labs of robotics, and you’re trying to do it quickly from scratch, you have to paint the vision.”

Are you concerned about how the public perception of robots, and especially of robots you have developed, is sometimes negative?

Raibert: The media can be over the top with stories about the fear of robots. I think that by and large, people really love robots. Or at least, a lot of people could love them, even though sometimes they’re afraid of them. But I think people just have to get to know robots, and at some point I’d like to open up an outreach center where people could interact with our robots in positive ways. We are actively working on that.

What do you find so interesting about dancing robots?

Raibert: I think there are a lot of opportunities for emotional expression by robots, and there’s a lot to be done that hasn’t been done. Right now, it’s labor-intensive to create these performances, and the robots are not perceiving anything. They’re just playing back the behaviors that we program. They should be listening to the music. They should be seeing who they’re dancing with, and coordinating with them. And I have to say, every time I think about that, I wonder if I’m getting soft because robots don’t have to be emotional, either on the giving side or on the receiving side. But somehow, it’s captivating.

Marc Raibert was a professor at Carnegie Mellon and MIT before founding Boston Dynamics in 1992. He now leads the Boston Dynamics AI Institute.

This article appears in the August 2023 print issue as “5 Questions for Marc Raibert.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Two interesting things about this video: First, the “where’s the button” poke at 2:20, and second, the custom Spot-friendly wrench.

[ Boston Dynamics ]

This is one of the more interesting drone designs that I’ve seen recently, since it’s modular, and you can clip on wings and props. And somehow it just flies.

[ AIR Lab ]

This soft robotic gripper is not only 3D printed in one print, it also doesn’t need any electronics to work. The researchers wanted to design a soft gripper that would be ready to use right as it comes off the 3D printer, equipped with built-in gravity and touch sensors. As a result, the gripper can pick up, hold, and release objects.

[ UCSD ]

Thanks, Daniel!

Through this powerful collaboration with the U.S. Agency for International Development (USAID), we are proud to donate cutting-edge Skydio drones, complemented by 3D Scan technology and comprehensive professional training. These resources will aid the Office of the Prosecutor General to document the more than 115,000 instances of destroyed civilian infrastructure, and evidence of human rights abuses on frontline communities and liberated territories.

[ Skydio ]

Grasping objects with limited or no prior knowledge about them is a highly relevant skill in assistive robotics. Still, in this general setting, it has remained an open problem. We present a deep-learning pipeline consisting of a shape-completion module based on a single depth image, followed by a grasp predictor based on the predicted object shape.

[ DLR RM ]

This is a video announcing the opening of MyoChallenge 2023, part of the challenge track of the NeurIPS 2023 conference. This competition merges physiologically realistic musculoskeletal models and AI with the goal of creating controllers for locomotion and manipulation.

[ MyoChallenge ]

Thanks, Guillaume!

The new DJI Air 3 has a transmission range of 20 kilometers and a flight time of 46 minutes. Consumer drones have made a lot of progress in a pretty short time, haven’t they?

[ DJI ]

With [human driving’s track] record of nearly 43,000 deaths and 2.5 million injuries in the U.S. alone in 2021, we believe autonomous driving technology has the potential to save lives and improve mobility options for millions of people. The data to-date indicates that the Waymo Driver is reducing traffic injuries and fatalities in the places where we operate, and we aim to continue safely designing and deploying our Driver to help more people in more places.

Humans are bad drivers for sure, but according to expert Missy Cummings, as quoted by AP, “autonomous vehicles from Waymo, a spinoff of Google, are four times more likely than humans to crash.”

Watch Tanner Lecturers Fei-Fei Li and Eric Horvitz discuss AI and human values.

[ Stanford HAI ]

Tin Lun Lam writes, “In the last two months, we have organized a Lecture Series on Multi-robot Systems and invited eight world-renowned scholars to share their wisdom to help promote knowledge sharing and technological advancement in this field.” Here are two of the lectures, and you can find the other six at the link below.

[ Freeform Robotics ]

Thanks, Tin Lun!



It’s been just two years since Hangzhou, China–based Unitree introduced the Go1, a US $2,700 quadruped robot. And since then, the Go1 has had a huge influence over the small quadruped-research market, due to its unique combination of performance, accessibility, and being (as legged robots go) supercheap.

Unitree has just announced the Go2, a new version that manages to both be significantly better and super-duper-cheap—it’s faster and more agile and now even includes a lidar, but somehow costs just $1,600.

Okay, yes, some of the word choice in that video is slightly odd. But who cares, because that’s some very impressive, dynamic mobility at a shockingly low cost. The $1,600 base model, the Go2 Air, includes a chin-mounted 360- by 90-degree hemispherical lidar, which has a minimum sensing range of 0.05 meters, for intelligent terrain navigation and obstacle avoidance. The Go2 can move at a brisk 2.5 meters per second with a 7-kilogram payload, and operates for up to 2 hours with an 8,000-milliamp-hour (mAh) battery. There’s even a graphical programming interface, if you have no idea what you’re doing but just want to mess around a little bit.

Like its predecessor, the Go2 is available in several different models. For $2,800, you get the Go2 Pro, with an additional kilo of payload capacity, an extra meter per second of speed, onboard compute, and 4G connectivity. It also comes with side-following, which is what will let the robot go for a jog alongside you. And if you need even more, the Go2 Edu (which you’ll have to contact Unitree about directly) boasts a peak speed of a blistering 5 m/s, has force sensors on its feet, and will run for up to 4 hours with a 15,000-mAh battery.

“Go2 was a huge project with many difficulties we had to overcome,” Unitree founder and CEO Xingxing Wang told IEEE Spectrum. “We have researched and developed almost every mechanical part and circuit board. Through continuously improving the design, we tried hard to improve its performance and quality as well as reduce costs, which required a lot of work and effort.”

We also asked Wang what has impressed him the most about how other people have used his robots. “We are very happy that many global institutions and companies use our quadruped robot in meaningful and innovative development,” he says. He points to a couple of his favorite examples, including CSIC using a Go1 as a robot guide dog that he hopes will have significant benefits for the visually impaired, and a recent paper in Science Robotics that uses a brain-inspired multimodal hybrid-neural network running on a Go1 for place recognition.

Lastly, we wanted to know whether all of this new footage of Go2 balancing on two legs means that Unitree might be taking an interest in bipeds sometime soon. “I think it’s cool that a quadruped robot can realize bipedal locomotion,” says Wang. “We may try to make a bipedal robot on the basis of a quadruped robot.” Yeah, sign us up for that.



As the automotive industry navigates a new era of self-driving cars, every second matters. Information from sensors and electronics must reach the main CPU as quickly as possible, but faster data rates degrade signal integrity. Coping with data loss is imperative for safety. Validating receiver operation in a car’s noisy environment, in both ideal and stressed conditions, improves in-vehicle network (IVN) performance. Delve into the automotive trends driving the focus on receiver testing, understand the implications of not testing, and learn how to prepare for receiver testing and validate performance at the physical layer in this white paper.

Download this free white paper now!



Everybody likes watching robots fall over. We get it, it’s funny. And we here at IEEE Spectrum are as guilty as anyone of making it a thing: Our compilation of robots falling down at the DARPA Robotics Challenge eight years ago has several million views on YouTube. But a couple of months ago, Agility Robotics shared a video of one of its Digit robots collapsing while stacking boxes during the ProMat trade show, which went nuts across Twitter, TikTok, and Instagram. Agility eventually issued a statement to the Associated Press clarifying that Digit didn’t deactivate itself out of frustration with the work, which is how some viewers had read the viral clip.

Agility isn’t the only robotics company to share its failures with an online audience. Boston Dynamics, developer of the Spot and Atlas robots, may have been the first company to be accused of “robot abuse” because of its videos, and the company frequently includes footage of its research robots being unsuccessful as well as successful on YouTube. And now that there are 1,100 Spots out in the world being useful, falls happen both more frequently and more visibly.

Even though falling robots aren’t a new thing, what may be a new(ish) thing are some technological advances that have changed the nature of falling. First, both Boston Dynamics and Agility Robotics have human-scale bipedal robots for which not falling seems pretty normal. This is a relatively recent development. Although a number of companies are working on humanoids, the Agility and Boston Dynamics humanoids are (as far as we are aware) the only ones that can routinely handle untethered dynamic walking.

“Sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do.”
—Jonathan Hurst, Agility Robotics

The other important advance is that these humanoid robots are usually able to fall without destroying themselves. During the DARPA Robotics Challenge in 2015, falling generally meant doom for the competitors, with one exception: Carnegie Mellon University’s CHIMP, which was built like a literal tank. Since then, roboticists have tried adding things like armor and airbags to keep a falling robot in one piece. But now, these robots can fall with minimal drama and get back up again. If they do suffer damage, they can be easily fixed.

And yet, even though falling has become much less of a big deal for the roboticists, it’s still a big deal for the general public, as these viral videos of robots falling down prove. We recently spoke with Agility Robotics’ Chief Robot Officer Jonathan Hurst and Head of Customer Experience Bambi Brewer, as well as Boston Dynamics CTO Aaron Saunders to understand why that is, and whether they think things are likely to change anytime soon.

Why do you think people react so strongly to seeing robots fall over, especially bipedal robots?

Jonathan Hurst: People post funny videos of pets or kids, making some expression or having a reaction that you can identify with. It’s even funnier when it’s a robot that wouldn’t typically do that. And so when Digit [at ProMat] seems to be just like, “I’m so tired of doing this work” and falls down, people are like, “I understand you, robot!” But [seeing robots behave that way] is going to become more common, and when people see this and it becomes just a regular part of their experience, the novelty will wear off.

Bambi Brewer: People who make robots spend a lot of time trying to present them at their best. The way robots move does seem very repetitive, very scripted. I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.

“People perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.”
—Aaron Saunders, Boston Dynamics

How different is falling for robots than for humans?

Hurst: The way I think about the robot right now is like a two-and-a-half-year-old child. They fall more often than adults do, and it’s not terribly concerning. Sometimes they skin their knee. And sometimes the robot is going to break something when it falls. But it’s learning, and eventually I think these robots will fall even less often than people do. Physics is still true, though, and so it’s probably going to be on the same order of magnitude as how often people fall. It won’t be rare.

When you think about this ‘physics is true’ thing—that’s actually where robots will be able to have superhuman capabilities. A robot is going to be close to human strength and close to human speed, but you can take much bigger risks with a robot because you don’t really care that much if you break something.

Fundamentally, I don’t care if the robot breaks. I mean, I care a little bit, but I care a lot if any of our employees were to fall.

Do you think that humanoid robots falling in nonhuman ways might be part of why people react so strongly to these videos?

Aaron Saunders: We have a massive metal frame around the front of Atlas. It’s okay if it face-plants. It tucks its limbs in to protect them and other parts of the robot. A human would do the opposite—we put our limbs out and try to protect our heads. Robots can handle certain types of impacts and forces better than humans can. We have a lot of conversations around how people perceive machines based on how they perceive themselves. Falling on its face is a good example of something that looks bad for a robot but might not actually be bad.

“I can see why it’s very interesting when something goes wrong, because the public usually doesn’t see what that looks like, and they’re not used to those moments yet.”
—Bambi Brewer, Agility Robotics

How normal is it for your robot to fall?

Saunders: Almost everything we do on Atlas is about pushing some limit. We don’t shy away from falling, because staying in a safe place means leaving a lot on the table in terms of understanding the performance of the machine and how to solve problems. In our development work, it falls all the time, both because we’re pushing it and because there’s very little risk or hazard—we’re not delivering Atlas out into the world.

On a long flat sidewalk, I don’t think Atlas would fall in a statistically relevant way. People think back to the video of robots falling all over the place at the DARPA Robotics Challenge, and that’s not the type of falling we worry about now.

For Spot, falling can be more of a risk, because it is out in the world. On a weekly basis, our internal fleet of Spots are walking about 2,000 kilometers, and we also have them in these test cells where they’re walking on rocks, on grates, over obstacles, and on slippery floors. We want to robustly test all of this stuff and try to drive those cases of falling down to their minimums.

“If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around it when it falls.”
—Jonathan Hurst, Agility Robotics

How big of a deal is it for your robot to fall?

Hurst: Digit was designed to fall. That’s one of the reasons that it has arms—to be able to survive a fall. When we were first designing the robot, we said, okay, at some point the robot’s going to fall, how can we protect it? We calculated how much padding we would need to minimize the acceleration on the electronic components. It turned out that we would have needed several inches of padding, and Digit would have ended up looking like the Michelin Man.

The only realistic way to have Digit safely decelerate was to have an appendage that’s going to stick out and absorb that fall. And where is the best place to locate that appendage? You get the same answer as you do when you think about inertial actuation and bimanual manipulation. Digit’s arms are where they are not because we’re trying to build a humanoid, but because we’re trying to solve locomotion challenges, manipulation challenges, and making sure that we can catch the robot when it falls.
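The padding estimate Hurst mentions follows from basic impact mechanics; here is a sketch with assumed numbers (ours, not Agility’s). A component falling from height h hits the ground at v = √(2gh), and keeping its deceleration below a_max requires a stopping distance of at least

```latex
v = \sqrt{2gh},
\qquad
d \;\ge\; \frac{v^{2}}{2\,a_{\max}} \;=\; \frac{g}{a_{\max}}\,h .
```

With h ≈ 1 meter and electronics limited to roughly 20 g, that works out to about 5 centimeters of ideal crush distance; since real padding absorbs energy at well under 100 percent efficiency, the required thickness roughly doubles, landing in the several-inch, Michelin Man territory Hurst describes.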

Was there a point during the development of your robot where falling went from normal to unusual?

Saunders: The thing that really took us from worrying about normal walking to feeling pretty good about normal walking is when we pushed aggressively into things that went way beyond walking.

To jump and land successfully, we needed to develop control algorithms that could accommodate all of the mass and the dynamics of the robot. It was no longer about carefully picking where you put your foot for each step, it was about coordinating all of that moving mass in a really robust way. So when Atlas started jumping and doing parkour, it made walking easier too. A few weeks ago, we had a new team member go back and apply some of the latest control algorithms that we’re using for parkour to our standing algorithm. With those new algorithms we saw big improvements in the robot’s ability to handle disturbances from a stand—if somebody were to shove the robot, this new controller is able to think and reason about all of its dynamics, resulting in massive gains in how Atlas reacts.

“We need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.”
—Bambi Brewer, Agility Robotics

At this point, how much is falling just an “oops,” and how much is it a learning opportunity?

Hurst: We’re always looking for bugs that you can iron out. Digit’s collapse at ProMat was one. In this scenario, there really should not have been an emergency stop.

Brewer: Falls are points at which somebody is filing a bug card, or looking through the logs. They’re trying to figure out what happened, and how to make sure it doesn’t happen again. At ProMat, there was something wrong with an encoder in the arm. It’s been updated now. It was a bug that hadn’t occurred before. Now if that happens, the robot’s arm will freeze, but the robot will remain upright.

Saunders: On Spot, I think there are relatively few learning opportunities these days. We know pretty well what Spot’s capable of, in what situations a fall might occur, what the robot is likely to do in those situations, and how it’s going to recover. We designed Spot to be able to fall robustly and not break, and to get up from falls. Obviously, there are some extreme cases—one of our industrial customers had a need for Spot to cross a soapy floor, which is about as close as you can get to walking on ice, a challenge for anything with legs. So our control team set up a slippery environment in our lab, using cooking oil on plastic, and then just started “robustifying.” They figured out how to detect slips and adapt the gait of the robot, and went from a situation where falling was regular to one where falling was infrequent.

For Atlas, generally the falling state happens after the part that we care about. What we’re learning there is what went wrong right before the fall. If we’re working on one of Atlas’s aerial tricks—say, something that we’ve never landed before—then of course we’re doing a ton of work to figure out why falls happen. But if we’re just walking around the lab, and there was some misstep, I don’t think people stress out too much, and we just stand it back up and reset it and go again.

“Robots should be able to fall. We should give them a break when they do.”
—Aaron Saunders, Boston Dynamics

We’re not afraid of a fall—we’re not treating the robots like they’re going to break all the time. Our robot falls a lot, and we decided a long time ago that we needed to build robots that can fall without breaking. If you can go through that cycle of pushing your robot to failure, studying the failure, and fixing it, you can make progress to where it’s not falling. But if you build a machine or a control system or a culture around never falling, then you’ll never learn what you need to learn to make your robot not fall. We celebrate falls, even the falls that break the robot.

If a robot knows that it’s about to fall, what can it do to protect itself, and protect people around it?

Hurst: There are strategies when you know you’re about to fall. If a person is carrying a baby and falls down some stairs, they have this intuition and natural ability to save the baby, even if it means injuring themselves. We can design our robots to do the same kind of thing to protect the people around it when it falls.

Brewer: In addition to the robot falling safely, we need to give a very clear signal to people to tell them not to try and help—just step back and let the robot fall. It’ll be fine.

Hurst: The other thing is to try to fall sooner rather than later. If you’re not sure whether you can stay balanced, you might end up taking a step to try to correct, and then another step, and then maybe you’re moving in a direction that’s not all that controlled. So when it starts to lose its balance, we can tell the robot, “Just fall. You’ll get back up.”

Saunders: We have these detections inside of our control system that trigger when the robot starts doing something that the controller didn’t ask it to do. Maybe the velocity is starting to do something, or the robot is at some angle that it isn’t supposed to be. If that makes us think that a fall might be happening, we’ll run a different controller to try to stop it from falling—Atlas might decide to swing its arms, or move its upper body, or throw its leg out. And if that fails, there’s another control layer for when the robot is really falling. That last layer is about putting the robot in a state that sets its pose and joint stiffnesses to basically ensure that it will do minimal damage to itself and the world. How exactly we do this is different for each robot and for each type of fall. If you comb through videos of Atlas, you might see the robot tucking itself up into a little bit of a ball—that’s a shape and a set of joint stiffnesses that help it mitigate impacts, and also help protect things around it.

Sometimes, though, these falls happen because the robot catastrophically breaks. With Atlas, we definitely have instances where we break the foot off. And at that point, I don’t have good answers.
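Here is a minimal sketch of the layered scheme Saunders describes, with illustrative thresholds and state variables (our toy example, not Boston Dynamics code): a monitor watches for motion the controller didn’t command and escalates from a recovery controller to a protective-fall controller.

```python
from dataclasses import dataclass

NOMINAL, RECOVER, PROTECT = "nominal", "recover", "protect"

@dataclass
class BalanceError:
    tilt_deg: float   # commanded-vs-measured body tilt mismatch
    vel_mps: float    # commanded-vs-measured velocity mismatch

def fall_manager(err: BalanceError, mode: str) -> str:
    """Escalate through control layers as uncommanded motion grows."""
    if mode == NOMINAL and (err.tilt_deg > 10 or err.vel_mps > 0.5):
        return RECOVER   # swing arms, move upper body, throw a leg out
    if mode == RECOVER:
        if err.tilt_deg > 30 or err.vel_mps > 1.5:
            return PROTECT   # tuck up, set joint stiffnesses for impact
        if err.tilt_deg < 5 and err.vel_mps < 0.2:
            return NOMINAL   # recovery succeeded
    return mode

# Example: a shove grows faster than the recovery controller can absorb.
mode = NOMINAL
for tilt, vel in [(12, 0.6), (22, 1.0), (35, 1.8)]:
    mode = fall_manager(BalanceError(tilt, vel), mode)
    print(mode)   # recover, recover, protect
```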

The next time a video of a humanoid robot falling over goes viral, whether it’s your robot or someone else’s, what is one thing you’d like people watching that video to know?

Hurst: If Digit falls, I think it’d be great for people to know that the reaction from the engineers who built that robot would not be, “our robot fell over and we didn’t expect that!” It would just be a shrug.

Brewer: I’d like people to know that when a robot is actually out in the world doing real things, unexpected things are going to happen. You’re going to see some falls, but that’s part of learning to run a really long time in real-world environments. It’s expected, and it’s a sign that you’re not staging things.

Saunders: I think people should recognize that it’s normal for equipment to sometimes fail. Equipment can be fixed, equipment can be improved, and over time, equipment gets more and more reliable. And so, when people see these failures, it may be a situation that the robot has never experienced. They should know that we are gathering all that information and that we’re continuously improving and iterating, and that what they’re seeing now doesn’t represent the end state. It just represents where the technology is today.

I also think that there has to be some balance between our expectations for what robots can do, and the process for getting them to do it. People will come to me and they’ll want a robot that can do amazing things that robots don’t do yet, but they’re very nervous if a robot fails. If we want our robots to do amazing things and enrich our lives and be our tools in the workforce, we’re going to need to build those capabilities over time, because this is emerging technology, not established technology.

Robots should be able to fall. We should give them a break when they do. It’s okay if we laugh at them. But we should also work hard to make our products safe and reliable and things that we can trust, because if we don’t trust our robots, we won’t use them.


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

In this paper, we introduce ROSE, a novel soft gripper that can embrace the object and squeeze it by buckling a soft, funnel-like, thin-walled membrane around the object by simple rotation of the base. Thanks to this design, the ROSE hand can adapt to a wide range of objects that can fall within the funnel, and handle them with pleasant gripping force.

[ Paper ]

Thanks, Van!

Legged robots are designed to perform highly dynamic motions. However, it remains challenging for users to retarget expressive motions onto these complex systems. In this paper, we present a Differentiable Optimal Control (DOC) framework that facilitates the transfer of rich motions from either animals or animations onto these robots.

[ Disney Research ]

We present a team of legged robots for scientific exploration missions in challenging planetary analog environments. The paper was published in Science Robotics, and we deployed this approach at the ESA / ESRIC Space Resources Challenge.

[ ETHZ RSL ]

I physically cringed watching this happen.

[ NORLab ]

At Agility, we make robots that are made for work. Our robot Digit works alongside us in spaces designed for people. Digit handles the tedious and repetitive tasks meant for a machine, allowing companies and their people to focus on the work that requires the human element.

[ Agility Robotics ]

Admit it—you’ve done this with your robot.

[ AeroVironment ]

This looks like a fun game: Can you keep a simulated humanoid from falling over by walking for it?

[ RoboDesign Lab ]

NSFW.

[ Hardcore Robotics ]

I am including this because it’s Scotland, and you deserve to see it.

[ DJI ]

Team RoMeLa’s ARTEMIS vs. RoboCup Champions Team NimbRo. This is an exhibition game generously offered by Team NimbRo after an unfortunate incident of an illegal game controller that interfered with Team RoMeLa’s last official game.

[ RoMeLa ]

Two leading robotics pioneers share how they are driving disruption in wildly different industries: health care and construction. We hear how these innovations will impact businesses and society, and what it takes to be a robotics entrepreneur in today’s economic climate. The speakers are Vivian Chu, cofounder and CTO of Diligent Robotics, and Tessa Lau, founder and CEO of Dusty Robotics.

[ Fortune ]

Sanctuary AI spends an hour and 20 minutes answering six questions from social media.

[ Sanctuary AI ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Fourier Intelligence, a global technology company specialising in rehabilitation robotics and artificial intelligence, unveiled its first-generation humanoid robot GR-1 at the 2023 World Artificial Intelligence Conference (WAIC) in Shanghai.

Standing 1.65 metres tall and weighing 55 kilograms, GR-1 has 40 degrees of freedom (actuators) all over its body. With a peak torque of 300 N·m generated by a joint module installed at the hip, the robot is able to walk at 5 kilometres per hour and carry a load of 50 kilograms.

According to the company, mass production of GR-1 is slated to begin around the end of this year.

[ Fourier Intelligence ]

Here it is, the RoboCup 2023 Middle-Size League final match between Tech United and Falcons.

Tech United has also put together a couple of extra videos from RoboCup talking about some new tech that they’re working on.

[ Tech United ]

RoboCup 2023 Humanoid AdultSize Final: NimbRo vs. HERoEHS.

[ NimbRo ]

So it turns out that RoMeLa’s ARTEMIS has a Turbo Mode?

[ RoMeLa ]

What if animals were substituted with biohybrid robots? The replacement of pets with bioinspired robots has long existed within technological imaginaries and HRI research. Addressing developments of bioengineering and biohybrid robots, we depart from such replacement to study futures inhabited by animal-robot hybrids. In this paper, we introduce a speculative concept of assembling and eating biohybrid robots.

[ Paper ]

With so much coverage of littlish electric quadrupeds, you kind of forget that big hydraulic monsters exist, too.

[ IIT ]

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, swim—all in response to a simple, easy-to-apply magnetic field.

[ MIT ]

Huh, that’s an interesting way of getting a quadrotor to translate without rolling.

[ MAVLab ]

With this system developed at EPFL, surgeons can now perform four-handed surgical interventions using two robot arms controlled by haptic interface pedals. This unprecedented advancement in the field of laparoscopic surgery aims to reduce the workload of surgeons while improving precision and safety. A single practitioner can accomplish tasks typically carried out by two or three individuals, thereby enhancing accuracy and coordination. Clinical trials are currently underway in Geneva.

[ EPFL ]

With a robot arm, X20 has capabilities of opening doors, picking up objects, and flipping switches and valves; or maybe a fist pump, clinking glasses, and shaking hands.

[ DeepRobotics ]

The real reason why people become roboticists.

[ Kawasaki ]

Can the multicellular robot, Loopy, move around in its environment? Yes, this video shows it can. Can it move efficiently and naturally? That’s what we are working on...

[ WVUIRL ]

Together, Daimler Truck and Torc Robotics are creating the world’s first great generation of robotic semi trucks. These trucks are currently in development, with the end goal of producing a highly efficient, safe, and sustainable product for fleet owners, owner operators, and all levels of highway user. Built with level 4 autonomy, these driverless trucks are driving the future of freight.

[ Torc Robotics ]

Okay, this is kind of a long interview about Pollen Robotics’ Reachy, so if you don’t want to sit through all of it, just skip ahead to 6:10, because I lol’d.

Don’t worry, Reachy recovers at 7:28.

[ Pollen Robotics ]

The Robotics: Science and Systems 2023 livestream archive is now online; here’s day 1, and you’ll find the other days on the YouTube channel.

[ RSS 2023 ]



This is a sponsored article brought to you by Robotnik.

In today’s ever-evolving world, ensuring the safety and security of our surroundings has become an utmost priority. Traditional methods of surveillance and security often fall short when it comes to precision, reliability, and adaptability. Recognizing the need for a smarter solution, Robotnik, a robotics company fully committed to precision engineering and to shaping the future with its advancements, has developed the RB-WATCHER. It is a collaborative mobile robot designed specifically for surveillance and security tasks. With its advanced features and cutting-edge technology, RB-WATCHER is set to revolutionize the way we approach surveillance in various environments.

Intelligence, precision, functions and reliability

RB-WATCHER’s intelligence lies not only in its ability to navigate autonomously but also in its suite of intelligent functions. Whether it is detecting human presence, monitoring a designated area, identifying intruders, collecting crucial data, or spotting potential fire outbreaks, RB-WATCHER’s advanced algorithms and sensors ensure unparalleled precision. This mobile robot autonomously performs a wide range of surveillance tasks with exceptional accuracy.

Surveillance and security tasks demand utmost precision and reliability to detect, prevent and overcome potential hazards and risks. RB-WATCHER has been meticulously engineered to meet these requirements, delivering unparalleled performance in diverse operating environments.

RB-WATCHER: Autonomous Mobile Robot for Surveillance & Security

Specifications that unleash the power of autonomous capabilities

This surveillance and security robot stands out for its autonomous capabilities, allowing it to operate efficiently even in challenging and dynamic environments. Equipped with state-of-the-art inspection and navigation sensorization, this robotic platform combines multiple technologies to ensure seamless performance. Among its impressive array of sensors are the bi-spectral camera, front camera, RTK GPS, and microphone, all working in harmony to provide comprehensive surveillance coverage. Let’s take a deeper look at the RB-WATCHER’s specifications.

RB-WATCHER’s inspection sensors play a pivotal role in its surveillance capabilities, setting it apart from its competitors. The bi-spectral Pan-Tilt-Zoom camera captures high-resolution images, while the microphone provides real-time audio information, boosting situational awareness and threat detection capabilities.

Connectivity is seamless with RB-WATCHER, as it comes equipped with a 4G router for real-time data transmission. Additionally, optional 5G router and Smart Radio support ensure compatibility with the latest communication technologies, enabling RB-WATCHER to remain at the forefront of connectivity advancements.

Navigation and localization are critical aspects of RB-WATCHER’s performance. Equipped with a front depth camera, Inertial Measurement Unit (IMU), 3D LIDAR, and advanced 3D SLAM and 2D SLAM technologies, RB-WATCHER achieves precise movement and accurate mapping of its surroundings. The integration of Real-Time Kinematic (RTK) GPS further enhances its position awareness, making it an ideal choice for large outdoor spaces.
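As a generic illustration of why an RTK GPS fix sharpens a SLAM position estimate outdoors (a simple variance-weighted blend with made-up numbers; our sketch, not anything Robotnik discloses):

```python
import numpy as np

def fuse_position(slam_xy, gps_xy, slam_var=0.25, gps_var=0.01):
    """Blend a SLAM pose with an RTK GPS fix, weighting by variance.

    Lower variance means more trust; RTK fixes are centimeter-level,
    so the fused estimate is pulled strongly toward the GPS reading.
    """
    w = slam_var / (slam_var + gps_var)
    return (1 - w) * np.asarray(slam_xy) + w * np.asarray(gps_xy)

# Example: the SLAM estimate has drifted ~30 cm from the RTK fix.
print(fuse_position([10.0, 5.2], [10.3, 5.0]))  # ~[10.29, 5.01]
```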

RB-WATCHER also offers a host of impressive technical specifications that cement its position as an industry leader. With dimensions of 790 × 614 × 856 mm and a weight of 62 kg, RB-WATCHER strikes a balance between compactness and power. Its payload capacity of up to 65 kg ensures it can handle various surveillance tasks effectively, while its top speed of 2.5 m/s allows for swift and efficient coverage of designated areas. This versatility makes RB-WATCHER suitable for both indoor and outdoor environments, further expanding its range of applications.

Built to withstand challenging conditions, RB-WATCHER boasts an IP53 enclosure class, offering protection against dust and water spray. Its temperature range of −10 °C to +45 °C enables reliable operation in a wide array of environments. Additionally, the robot’s ability to navigate slopes of up to 80 percent showcases its exceptional adaptability and ensures comprehensive surveillance coverage.

Surveillance & Security at its finest

One of RB-WATCHER’s core strengths lies in its ability to execute surveillance and security tasks across various industries, making it a vital component of the service portfolio offered by private security and surveillance companies. The RB-WATCHER provides a powerful solution that enables them to enhance their offerings and benefit their clients in several ways. The autonomous mobile robot excels in patrolling predetermined areas, detecting objects and individuals, and identifying potential fire hazards. Its versatility and reliability make the RB-WATCHER an ideal choice for private security and surveillance companies, empowering them to deliver comprehensive and advanced services to their clients.

Reliability: A Hallmark of RB-WATCHER

When it comes to security and surveillance robots, reliability is non-negotiable. Robotnik’s RB-WATCHER instills confidence through its robust construction and resilient performance. Built to withstand demanding environments, RB-WATCHER operates flawlessly, consistently delivering accurate data for effective decision-making. With this reliable companion by your side, you can rest assured that your surveillance and security tasks are in capable hands.

Free Navigation for unrestricted surveillance

RB-WATCHER breaks free from the limitations of conventional surveillance methods. With its free navigation capabilities, this collaborative mobile robot efficiently traverses various terrains, overcoming obstacles effortlessly. By dynamically adapting to changing environments, RB-WATCHER guarantees comprehensive surveillance coverage, leaving no stone unturned in its quest for security.

Conclusion

Robotnik’s RB-WATCHER sets a new standard in the realm of security and surveillance robots. By combining precision, reliability and autonomous capabilities, RB-WATCHER redefines the way we approach surveillance tasks in a rapidly changing world. With its impressive sensorization, versatility and intelligent functions, RB-WATCHER stands as a testament to Robotnik’s commitment to innovation and safety. As we continue to embrace technological advancements, RB-WATCHER paves the way for a safer, more secure future.



It never gets any easier to watch: a control room full of engineers, waiting anxiously as the robotic probe they’ve worked on for years nears the surface of the moon. Telemetry from the spacecraft says everything is working; landing is moments away. But then the vehicle goes silent, and the control room does too, until, after an agonizing wait, the project leader keys a microphone to say the landing appears to have failed.

The last time this happened was in April, in this case to a privately funded Japanese mission called Hakuto-R. It was in many ways similar to crashes by Israel’s Beresheet and India’s Chandrayaan-2 in 2019. All three landers seemed fine until final approach. Since the 1970s, only China has successfully put any uncrewed ships on the moon (most recently in 2020); Russia’s last landing was in 1976, and the United States hasn’t tried since 1972. Why, half a century after the technological triumph of Apollo, have the odds of a safe lunar landing actually gone down?

The question has some urgency because five more landing attempts, by companies or government agencies from four different countries, could well be made before the end of 2023; the next, Chandrayaan-3 from India, is scheduled for launch as early as this week. NASA’s administrator, Bill Nelson, has called this a “golden age” of spaceflight, culminating in the landing of Artemis astronauts on the moon late in 2025. But every setback is watched uneasily by others in the space community.

2023 Possible Lunar Landings

India: Chandrayaan-3, from the Indian Space Research Organization, with a hoped-for launch in mid-July and, if that succeeds, a landing in August.

Chandrayaan-3 could be heading to the moon soon. Credit: ISRO

Russia: Luna-25, from the Roscosmos space agency, which currently says it plans an August launch.

United States: Nova-C IM-1, from a private Houston-based company, Intuitive Machines, currently targeted for launch in the third quarter of 2023.

United States: Peregrine Mission 1, from the Pittsburgh-based company Astrobotic Technology, is waiting for modifications to its Vulcan Centaur launch vehicle. A launch date of 4 May was put off; a new one has not been set.

Japan: SLIM (Smart Lander for Investigating Moon), from the JAXA space agency. An August launch date has been put off.

Intuitive Machines hopes to launch the Nova-C IM-1 this season. Credit: Intuitive Machines

Each of these missions is behind schedule, in some cases by years, and several could slip into 2024 or later.

The Fate of Hakuto-R Mission 1

A day after Hakuto-R went silent, an American spacecraft, Lunar Reconnaissance Orbiter, passed over the landing site; its imagery, compared with previous shots of the area, showed clearly that there had been a crash. The company running Hakuto-R, ispace, did an analysis of the crash and concluded that its software had perhaps been too clever for its own good.

According to ispace, the lander’s onboard sensors indicated a sharp rise in altitude when the craft passed over a 3-kilometer-high cliff, later determined to be the rim of a crater. But the onboard computer had not been programmed for any cliff that high; it had been programmed so that, in the event of a large discrepancy between its expected position and the sensor reading, it would assume something was wrong with the ship’s radar altimeter and disregard its input. The computer, said ispace, therefore behaved as if the ship were near touchdown when it was actually 5 km above the surface. It kept firing its engines, descending ever so gently, until its fuel ran out. “At that time, the controlled descent of the lander ceased, and it is believed to have free-fallen to the moon’s surface,” ispace said in a press release.
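The failure mode is easier to see in pseudocode. Here is a minimal sketch of the kind of sensor-gating logic ispace describes; the names and the threshold are illustrative assumptions, not the actual flight software.

```python
# Hypothetical sketch of the altitude-gating logic described above.
# Threshold and names are assumptions for illustration, not ispace's code.

MAX_PLAUSIBLE_DISCREPANCY_M = 1_000.0  # assumed rejection threshold

def fused_altitude(estimated_alt_m: float, radar_alt_m: float) -> float:
    """Return the altitude reading the guidance loop should trust."""
    if abs(estimated_alt_m - radar_alt_m) > MAX_PLAUSIBLE_DISCREPANCY_M:
        # Assume the radar altimeter is faulty and fall back on the
        # propagated estimate. This is the assumption that failed over
        # the unmodeled 3-km crater rim: the radar was right, and the
        # lander "touched down" while still 5 km above the surface.
        return estimated_alt_m
    return radar_alt_m
```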

The crash site of the privately mounted Japanese Hakuto-R Mission 1 lunar lander, imaged by NASA’s Lunar Reconnaissance Orbiter. Credit: NASA/Goddard Space Flight Center/Arizona State University

Takeshi Hakamada, the CEO of ispace, put a brave face on it. “We acquired actual flight data during the landing phase,” he said. “That is a great achievement for future missions.”

Will this failure be helpful to other teams trying to make landings? Only to a limited extent, they say. As the so-called new space economy expands to include startup companies and more countries, there are many collaborative efforts, but there is also heightened competition, so there’s less willingness to share data.

Better Technology, Tighter Budgets

“Our biggest challenges are that we are doing this as a private company,” says John Thornton, the CEO of Astrobotic, whose Peregrine lander is waiting to go. “Only three nations have landed on the moon, and they’ve all been superpowers with gigantic, unlimited budgets compared to what we’re dealing with. We’re landing on the moon for on the order of $100 million. So it’s a very different ballgame for us.”

To put US $100 million in perspective: Between 1966 and 1968, NASA surprised itself by safely landing five of its seven Surveyor spacecraft on the moon as scouts for Apollo. The cost at the time was $469 million. That number today, after inflation, would be about $4.4 billion.
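As a quick back-of-the-envelope check on that conversion (the multiplier below is an approximate cumulative inflation factor, not an official figure):

```python
# Rough check of the inflation-adjusted Surveyor program cost.
surveyor_cost_1968_usd = 469e6
cpi_multiplier_1968_to_2023 = 9.4  # assumed cumulative inflation factor

adjusted_usd = surveyor_cost_1968_usd * cpi_multiplier_1968_to_2023
print(f"~${adjusted_usd / 1e9:.1f} billion")  # ~$4.4 billion
```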

Surveyor’s principal way of determining its altitude during landing was radar, a mature but sometimes imprecise technology. Swati Mohan, the guidance and navigation lead for NASA’s Perseverance rover landing on Mars in 2021, likened radar to “closing your eyes and holding your hands out in front of you.” So Astrobotic, for instance, has turned to Doppler lidar—laser ranging—which has about 10 times better resolution. It also uses terrain-relative navigation, or TRN, a visually based system that takes rapid-fire images of the approaching ground and compares them to an onboard database of terrain images. Some TRN imagery comes from the same Lunar Reconnaissance Orbiter that spotted Hakuto-R.
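To give a feel for how TRN works, here is a deliberately naive sketch: slide the descent-camera frame across the onboard map and keep the offset with the best cross-correlation. Flight systems use far more robust matching and outlier rejection; every name here is hypothetical.

```python
# Naive terrain-relative navigation sketch: brute-force cross-correlation
# of a descent-camera frame against an onboard orthophoto map.
import numpy as np

def trn_fix(frame: np.ndarray, onboard_map: np.ndarray) -> tuple:
    """Return the (row, col) offset in the map where the frame matches best."""
    fh, fw = frame.shape
    mh, mw = onboard_map.shape
    f = frame - frame.mean()  # zero-mean the frame once
    best_score, best_offset = -np.inf, (0, 0)
    for r in range(mh - fh + 1):
        for c in range(mw - fw + 1):
            patch = onboard_map[r:r + fh, c:c + fw]
            score = float(np.sum(f * (patch - patch.mean())))
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset  # compared with the predicted offset to correct drift
```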

“Our folks are feeling good, and I think we’ve done as much as we possibly can to make sure that it’s successful,” says Thornton. But, he adds, “it’s an unforgiving environment where everything has to work.”



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

To carry out complex missions, scientists are increasingly experimenting with drones that can do more than just fly. Now, instead of needing one autonomous robot to fly, another to drive on land, and a third to navigate on water, a new hybrid drone can do all three.

The idea for a drone capable of navigating land, air, and sea came when researchers at New York University Abu Dhabi’s Arabian Center for Climate and Environmental Sciences (ACCESS) noted they would like a drone “capable of flying out to potentially remote locations and sampling bodies of water,” says study lead author Dimitrios Chaikalis, a doctoral candidate at NYU Abu Dhabi.

Environmental research often “relies on sample collections from hard-to-reach areas,” Chaikalis says. “Flying vehicles can easily navigate to such areas, while being capable of landing on water and navigating on the surface allows for sampling for long hours with minimal energy consumption before flying back to its base.”

The new autonomous vehicle is a tricopter with three pairs of rotors for flight, three wheels for roaming on land, and two thrusters to help it move on water. The rubber wheels were 3D-printed directly around the body of the main wheel frame, eliminating the need for metal screws and ball bearings, which would run the risk of rust after exposure to water. The entire machine weighs less than 10 kilograms, in order to comply with drone regulations.

A buoyant, machine-cut Styrofoam body was placed between the top of the machine, which holds the rotors, and its bottom, which holds the wheels and thrusters. This flotation device served as the machine’s hull in the water, and was shaped like a trefoil to leave room for the airflow of the rotors.

“The resulting vehicle is capable of traversing every available medium—air, water, ground—meaning you can eventually deploy autonomous vehicles capable of overcoming ever-increasing difficulties and obstacles,” Chaikalis says.

The drone possesses two open-source PX4 autopilot systems: one for the air, and the other for navigating both land and water. “Aerial navigation differs heavily from ground or water surface navigation, which actually bear a lot of similarities with each other,” Chaikalis says. “So we designed the ground and water surface navigation to both work with the same autopilot, changing only the motor output for each case.”

An Intel NUC computer served as the command module. The computer can switch between the two autopilots as needed, as well as interface with a radio transceiver and GPS. All these electronics were secured within a waterproof plastic casing.
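The two paragraphs above suggest a simple supervisory pattern: a command computer that arms one autopilot at a time and remaps motor outputs per medium. The sketch below illustrates that pattern only; the mode names and switching interface are invented, not the team’s software.

```python
# Illustrative supervisor for the dual-autopilot layout described above:
# one autopilot for flight, one shared by ground and water, with only the
# motor outputs changing between the two surface modes.
from enum import Enum, auto

class Mode(Enum):
    AIR = auto()
    GROUND = auto()
    WATER = auto()

AUTOPILOT = {Mode.AIR: "px4_air", Mode.GROUND: "px4_surface",
             Mode.WATER: "px4_surface"}
OUTPUTS = {Mode.AIR: "rotors", Mode.GROUND: "wheels", Mode.WATER: "thrusters"}

def switch_mode(current: Mode, target: Mode) -> list:
    """Return the actions a command computer would take on a mode change."""
    actions = []
    if AUTOPILOT[current] != AUTOPILOT[target]:
        actions += [f"disarm {AUTOPILOT[current]}", f"arm {AUTOPILOT[target]}"]
    actions.append(f"route commands to {OUTPUTS[target]}")
    return actions

print(switch_mode(Mode.AIR, Mode.WATER))
# ['disarm px4_air', 'arm px4_surface', 'route commands to thrusters']
```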

“Of course, you also have to get waterproof motors for the ground-vehicle wheels, since they’ll be fully submerged when on water,” Chaikalis says. “Such motors proved difficult to interface with commercial autopilot units, so we ended up also designing custom hardware and firmware for interfacing such communications.”

The drone can operate under radio control or autonomously on preprogrammed missions. Its lithium polymer batteries give it a flight time of 18 minutes.

In experiments, the Styrofoam hull absorbed water during floating, increasing its weight by 20 percent within 30 minutes. The Styrofoam did release this water during flight, albeit slowly, with a 20 percent weight loss after 100 minutes. The scientists note this significant variation in weight needs to be accounted for in the autopilot design, or they could add a water-resistant coating, although that would permanently increase the overall weight.
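A first-order way to fold that variation into the autopilot is a time-based mass estimate. The linear rates below are read off the reported endpoints and are an assumption, not the authors’ model.

```python
# Hypothetical mass estimator from the reported endpoints: about 20 percent
# of vehicle weight absorbed over 30 minutes afloat, then released over
# roughly 100 minutes of flight. Linearity is an assumption.
VEHICLE_MASS_KG = 10.0  # upper bound implied by the drone-regulation limit

def estimated_mass(minutes_afloat: float, minutes_flying: float) -> float:
    absorbed = 0.20 * VEHICLE_MASS_KG * min(minutes_afloat / 30.0, 1.0)
    released = absorbed * min(minutes_flying / 100.0, 1.0)
    return VEHICLE_MASS_KG + absorbed - released

print(estimated_mass(30, 0))    # 12.0 kg right after a long float
print(estimated_mass(30, 100))  # 10.0 kg once the water has drained
```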

In addition, “although waterproof against splashes and light submersion, this is not yet a fully submersible design, meaning a failure of the flotation device could potentially be catastrophic,” Chaikalis says.

In the future, the researchers note they could optimize the hull to make it strong enough to withstand complex maneuvers and to minimize air drag during flight. They would also like to make the drone fully modular so they can easily change its capabilities by attaching or detaching modules from it.

“We imagine being capable of, for example, selecting to drop the ground mechanism behind if necessary to save power, then returning to it later to land,” Chaikalis says. “Or allow the water module to navigate on water, while the [unmanned aerial vehicle] returns to a nearby base for recharge and picking it up again later.”

A patent application is pending on the new drone. The scientists detailed their findings 9 June at the 2023 International Conference on Unmanned Aircraft Systems in Warsaw.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

Here are a couple of highlight videos from the Tech United RoboCup team, competing in the Middle-Size League at RoboCup 2023 in Bordeaux. There’s an especially impressive goal toward the end of the second video—as a soccer-playing human, I would have been proud to score something like that.

[ Tech United ]

How to infuriate a NAO for 2 minutes and 38 seconds.

[ Team B-Human ]

Goalie behavior testing for Robot ARTEMIS 2, Team RoMeLa UCLA.

[ RoMeLa ]

We fabricated an array of six inflatable actuators, driven by a single pressure line, that play Beethoven’s Ode to Joy. Our actuators are buckled shells that snap through to a second configuration when the inner pressure reaches a threshold. The actuators are designed to play a piano key each time they snap, and we developed an algorithm that, given a desired sequence of notes, outputs the geometrical parameters of each actuator; thus part of the control architecture is encoded in their mechanics.

“Nonlinear inflatable actuators for distributed control in soft robots” was recently published in Nature Advanced Materials.

[ Nature ] via [ SoftLab ]

Thanks, Edoardo!

Always nice to be reminded that you’ve conquered things that used to slip you up.

[ Boston Dynamics ]

Sure, let’s make winged scorpion robots!

[ GRVC ]

Introducing the TIAGo Pro, a revolutionary robot with fully torque-controlled arms and optimal arm mounting. This enhances its manipulation capabilities and enables state-of-the-art human-robot interaction. Designed for agile manufacturing and future healthcare applications.

Equipped with major hardware upgrades like torque-controllable arms, an EtherCAT communication bus at 1 kHz, and increased computational power, the TIAGo Pro boosts productivity and efficiency with machine learning algorithms.

[ PAL Robotics ]

Visual-inertial odometry (VIO) is the most common approach for estimating the state of autonomous micro aerial vehicles using only onboard sensors. Existing methods improve VIO performance by including a dynamics model in the estimation pipeline. However, such methods degrade in the presence of low-fidelity vehicle models and continuous external disturbances, such as wind. Our hybrid dynamics model uses a history of thrust and IMU measurements to predict vehicle dynamics. To demonstrate the performance of our method, we present results on both public and novel drone dynamics datasets and show real-world experiments of a quadrotor flying in strong winds up to 25 km/h.

To be presented at RSS 2023 in Daegu, South Korea.

[ UZH RPG ]

This cute little robotic dude is how I want all my packages delivered.

[ RoboDesign Lab ]

Telexistence raises a US $170 million Series B and announces new partnerships with SoftBank Robotics Group and Foxconn, accelerating its business expansion in North America and its operational capabilities in mass production.

[ Telexistence ]

The qb SoftClaw demonstrating that the only good tomato is a squished-to-death tomato.

[ qb Robotics ]

You see multi-colored balls; we see pills and tablets, garments and fabrics, ripe berries and unripe berries. One simple concept–how to sort items–can be applied to endless work scenarios.

[ Sanctuary ]

Some highlights from Kuka’s booth at Automatica in Germany. Two moments that jumped out at me: the miniature automotive assembly line made of LEGO Technic, and the mobile robot with a big “NO RIDING” sticker on it. C’mon, Kuka, let us have some fun!

Also at Automatica was the final round of the Kuka Innovation Award. It was a really interesting competition this year, although one of the judges seems like a real dork.

[ Kuka ]

I don’t know how Pieter pulled this off, but here’s a clip from the Robot Brains podcast, where he talks with astronaut Woody Hoburg up on the International Space Station.

[ Robot Brains ]



Robotics engineers often look to how animals get around for inspiration for more effective and efficient artificial limbs, joints, and muscles. One particularly fruitful source of inspiration comes from studying creatures that use their limbs for different kinds of mobility—think amphibians that both walk and swim, or birds that both walk and fly. Such inspiration has led to the SPIDAR that crawls and flies, the LEO that skateboards and slacklines, and robots that can switch between bipedal and quadrupedal modes.

Now engineers at Caltech and Northeastern University (in Boston) have developed a multimodal robot that can navigate in not two or three but eight different ways—including walking, crawling, rolling, tumbling, and even flying. That said, the Multi-Modal Mobility Morphobot (M4) looks more like a sleek little cart than anything out of a bestiary. M4 is 70 centimeters long and 35 cm high, with four legs, each with two joints. It also has two ducted fans at the end of each leg, which can function as legs, propeller thrusters, or wheels. The robot is surprisingly light—around 6 kilograms—considering that this includes its onboard computers, sensors, communication devices, joint actuators, propulsion motors, power electronics, and battery. It is capable of autonomous, self-contained operations.

The details of M4 were published in Nature Communications on 27 June.

The bio-inspired ‘transformer’ that crawls, rolls and flies. Credit: Nature Video

Integrating so many modes of transport on a single platform is a first, says Alireza Ramezani, a robotics engineer from Northeastern University and one of the lead investigators. The task called for challenging design considerations: “In multimodal robot design, as the number of modes [of locomotion] increases, each mode introduces its own design requirements,” he says. To integrate all these design requirements, the researchers had to play with various trade-offs.

“When you design aerial robots, you want your systems to be extremely light,” says Ramezani. “But if you want to achieve legged locomotion, you need bulky actuators that can produce torque in the joints for dynamic interactions with the ground surface. These bulky components can negatively affect aerial mobility. And this is just for a system with two modes of mobility.” M4 can walk on rough terrain, climb steep slopes, tumble over bulky objects, crawl under low-ceiling obstacles, and fly.

The researchers took their design inspiration from the locomotion plasticity seen in nature. Morteza Gharib, an aeronautics professor from Caltech and a colead for the project, says that nature was “an open book of design for us,” especially in the way that it repurposes systems in order to deliver functionality. “The unique aspect of [this] robot is that it has the highest number of functionalities with the minimum number of components, and also is capable of making decisions on which one to use for different challenges.”

Repurposing was the key to making the design of M4 scalable—that is, increasing its payload capacity without compromising its mobility. Focusing on how the robot could reuse its existing appendages for different kinds of locomotion without adding mass freed up payload capacity for computers, sensors, and so on. This scalability was achieved by manipulating redundancy: one set of appendages serves many modes.

M4 may look like a simple box on wheels, but it is the first robot capable of eight different ways of getting around town. Credit: Northeastern University/Nature Communications

In other words, M4 can use its four appendages to roll like a ground vehicle or crawl as a quadruped, but it can also stand up on two appendages. While standing, the robot has a higher vantage point and more dynamic locomotion, but as a quadruped, it has four contact points with the environment and is therefore more stable. “This is [a matter of] finding a balance between the trade-offs introduced by each mode of mobility, and the mechanism to go from one mode to another is through manipulating these redundancies,” says Gharib.

The research team carried out experiments to put M4 through its paces—wheeled and quadrupedal locomotion, unmanned ground as well as aerial locomotion, and more. They report that M4 showed full autonomy in multimodal path planning between traveling on the ground and flying.

Using its onboard sensors and computers, M4 was able to negotiate an unstructured environment by switching from rolling to flying, but the researchers want more. “The next step for us is to have all of [M4’s] modes of mobility being used by the robot in a completely autonomous fashion based on the sensory information that it gathers from the environment,” Ramezani says.

Funding for the M4 project came from NASA’s Jet Propulsion Laboratory and the National Science Foundation. The researchers expect multimodal robots to play a big role in future space explorations. Recently, NASA integrated an aerial vehicle, the tiny helicopter Ingenuity, into the Mars rover Perseverance to act as a scout for the larger vehicle, and it was a great success, Ramezani points out.

Space exploration aside, the researchers also see potential for search-and-rescue operations, package handling and delivery, environmental applications, and digital agriculture, among others. The system’s ability to change its shape and form gives it many advantages over robots with fixed geometry, Gharib says.

The researchers are still looking to improve M4. “There is no end to what you would like to see from a robot like this,” Ramezani says. “For instance…it doesn’t take much to extend the existing capabilities of M4 to underwater locomotion using its quad copters.” Meanwhile, they also continue to work on making M4’s existing mobility modes more efficient.



In July of 2010, I traveled to Singapore to take care of my then 6-year-old son Henry while his mother attended an academic conference. But I was really there for the robots.

IEEE Spectrum’s digital product manager, Erico Guizzo, was our robotics editor at the time. We had just combined forces with robot blogger par excellence and now Spectrum senior editor Evan “BotJunkie” Ackerman to supercharge our first and most successful blog, Automaton. When I told Guizzo I was going to be in Singapore, he told me that RoboCup, an international robot soccer competition, was going on at the same time. So of course we wrangled a press pass for me and my plus one.

I brought Henry and a video camera to capture the bustling bots and their handlers. Guizzo told me that videos of robots flailing at balls would do boffo Web traffic, so I was as excited as my first grader (okay, more excited) to be in a convention center filled with robots and teams of engineers toiling away on the sidelines to make adjustments and repairs and talk with each other and us about their creations.

Even better than the large humanoid robots lurching around like zombies and the smaller, wheeled bots scurrying to and fro were the engineers who tended to them. They exuded the kind of joy that comes with working together to build cool stuff, and it was infectious. On page 40 of this issue, Peter Stone—past president of the RoboCup Federation, professor in the computer science department of the University of Texas at Austin, and executive director of Sony AI America—captures some of that unbridled enthusiasm and gives us the history of the event. To go along with his story, we include action shots taken at various RoboCups throughout the 25 years of the event. You can check out this year’s RoboCup competitions going on 6–9 July at the University of Bordeaux, in Nouvelle-Aquitaine, France.

Earlier in 2010, the same year as my first RoboCup, Apple introduced what was in part pitched as the future of magazines: the iPad. Guizzo and photography director Randi Klett instantly grokked the possibilities of the format and the new sort of tactile interactivity (ah, the swipe!) to showcase the coolest robots they could find. Channeling the same spirit I experienced in Singapore, Guizzo, Klett, and app-maker Tendigi launched the Robots app in 2012. It was an instant hit, with more than 1.3 million downloads.

To reach new audiences on other devices beyond the iOS platform, we ported Robots from appworld to the Web. With the help of founding sponsors—including the IEEE Robotics and Automation Society and Walt Disney Imagineering—and the support of the IEEE Foundation, the Robots site launched in 2018 and quickly found a following among STEM educators, students, roboticists, and the general public.

By 2022 it was clear that the site, whose basic design had not changed in years, needed a reboot. We gave it a new name and URL to make it easy for more people to find: RobotsGuide.com. And with the help of Pentagram, the design consultancy that reimagined Spectrum’s print magazine and website in 2021, in collaboration with Standard, a design and technology studio, we built the site as a modern, fully responsive Web app.

Featuring almost 250 of the world’s most advanced and influential robots, hundreds of photos and videos, detailed specs, 360-degree interactives, games, user ratings, educational content, and robot news from around the world, the Robots Guide helps everyone learn more about robotics.

So grab your phone, tablet, or computer and delve into the wondrous world of robots. It will be time—likely a lot of it—well spent.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEXAS

Enjoy today’s videos!

Humanoid robot ARTEMIS training for RoboCup. Fully autonomous soccer playing outdoors.

[ RoMeLa ]

Imperial College London and Empa researchers have built a drone that can withstand high enough temperatures to enter burning buildings. The prototype drone, called FireDrone, could be sent into burning buildings or woodland to assess hazards and provide crucial first-hand data from danger zones. The data would then be sent to first responders to help inform their emergency response.

[ Imperial ]

We integrated Stable Diffusion to give Ameca the power to imagine drawings. One of the big challenges here was converting the image to vectors (lines) that Ameca could draw. The focus was on making fast sketches that are fun to watch. Ameca always signs their artwork.

I just don’t understand art.

[ Engineered Arts ]

Oregon State Professor Heather Knight and Agility’s Head of Customer Experience Bambi Brewer get together to talk about human-robot interaction.

[ Agility ]

Quadrupeds are great, but they have way more degrees of freedom than it’s comfortable to control. Maybe motion capture can fix that?

[ Leeds ]

The only thing I know for sure about this video is that Skydio has no idea what’s going on here.

[ Ugo ]

We are very sad to share the passing of Joanne Pransky. Robin Murphy shares a retrospective.

[ Robotics Through Science Fiction ]

ICRA 2023 was kind of bonkers. This video doesn’t do it justice, of course, but there were a staggering 6,000 people in attendance. And next year is going to be even bigger!

[ ICRA 2023 ]

India Flying Labs recently engaged more than 350 girls and boys in a two-day STEM workshop with locally-made drones.

[ WeRobotics ]

This paper proposes the application of a very lightweight (3.2 kg) anthropomorphic dual-arm system capable of rolling along linear infrastructure such as power lines, to perform dexterous bimanual manipulation tasks, like installing clip-type bird flight diverters, or to conduct contact-based inspection of pipelines to detect corrosion or leaks.

[ GRVC ]

In collaboration with Trimble, we are announcing a proof-of-concept to enable robots and machines to follow humans and other machines in industrial applications. Together, we have integrated a patent-pending PFF follow™ smart-following module prototype developed by Piaggio Fast Forward onto a Boston Dynamics’ Spot® robot platform controlled by Trimble’s advanced positioning technology.

[ PFF ]

The X20 tunnel-inspection quadruped robot can accurately detect and upload in real time faults such as cable surface discharge, corona discharge, internal discharge, and temperature abnormalities. It can also adapt to inspection tasks in rugged terrain.

[ DeepRobotics ]

If you’re wondering why the heck anyone would try to build a robot arm out of stained glass, well, that’s an excellent thing to wonder.

[ Simone Giertz ]



From 2019 to 2022, I had the privilege of serving as president of the RoboCup Federation. RoboCup is an annual international competitive event that merges visionary thinking about how AI and robotics will change the world with practical robot design. Participants spend months solving diverse technical problems to enable their robots to autonomously play soccer, do household chores, or search for disaster victims. And their efforts are in turn enabling fundamental advances in a range of fields, including machine learning, multiagent systems, and human-robot interaction.

Dr. Hiroaki Kitano, Japanese artificial intelligence researcher, holding two members of his miniature robot football team. Credit: Peter Menzel/Science Source

RoboCup’s original goal, as defined by founding president Hiroaki Kitano, was to enable a team of fully autonomous humanoid robots to beat the best human soccer team in the world on a real, outdoor field by the year 2050. Since the first RoboCup competition in 1997, which featured three leagues—small-size wheeled robots, middle-size wheeled robots, and simulation—the event has expanded to include humanoid robot soccer leagues, as well as other leagues devoted to robots with more immediate practicality. The next RoboCup event takes place in July in Bordeaux, France, where 2,500 humans (and 2,000 robots) from 45 countries are expected to compete.

The Beginning

A history of RoboCup from 1997 to 2011, by Peter Stone and Manuela Veloso.

The first RoboCup, which I attended as a student, was held in 1997 in a small exhibit room at the International Joint Conference on AI (IJCAI) in Nagoya, Japan. The level of competition was, by today’s standards, not very high. However, it’s important to remember that many “roboticists” back then didn’t work with real robots. RoboCup was unusual during its early years in that it forced people to build complete, integrated working systems that could sense, decide, and act.

Small-Size League


Over the years, RoboCup has seen huge improvements in the level of play, often following a pattern of one team making a discovery and dominating the competition for a year or two and then being supplanted by another. For example, in the Small-Size League, in which the robots use a golf ball and external perception and computing, Team FU-Fighters from Freie Universität Berlin introduced some innovations in the early 2000s. They began controlling the ball using a device that spins it backward toward the robot to “dribble.” A second device propelled the ball quickly forward to shoot it. As the first team to come up with this strategy, the FU-Fighters had a big advantage, but other teams soon followed suit.


The Small-Size League final between TIGERs Mannheim and ER-Force at RoboCup 2022 in Bangkok, Thailand.

Standard Platform League

Austin Villa, UT Austin’s RoboCup team, describing the RoboCup Standard Platform League in 2008.

While many RoboCup leagues include a hardware design component, some teams prefer to focus more on software. In the Standard Platform League, each team is provided with identical robots, and thus the best combination of algorithms and software engineering wins. The first standard platform for RoboCup was the Aibo, a small robot dog made by Sony. Ultimately, though, the goal of RoboCup is to achieve human-level performance on a bipedal robot, and so the Standard Platform League now uses a small humanoid robot called Nao, made by SoftBank. Rugged and capable, Nao is able to fall over and quickly get up again, a critical skill for soccer-playing humans and robots alike.


The Standard Platform League final between HTWK Robots and B-Human at RoboCup 2022 in Bangkok, Thailand.

Middle-Size League

The Middle-Size League final between Tech United and the Falcons at RoboCup 2022 in Bangkok, Thailand.

The Middle-Size League uses a full-size soccer ball and onboard perception on waist-high wheeled robots. Over the years, it has showcased an enormous amount of progress toward human-speed, human-scale soccer. In recent competitions, the robots have moved briskly around a large field, autonomously developing offensive and defensive strategies and coordinating passes and shots on goal. The typical middle-size robot has the skill of a competent primary-school human, although in this league the robots don’t have to worry about legs. And in some ways, the middle-size robots have advantages over human players—the robots have omnidirectional sensing, wireless communication, and the ability to consistently place very accurate shots thanks to a mechanical ball-launching system.


Humanoid League

The RoboCup Humanoid League, launched in 2002, is critical to meeting our objective of fielding a team of highly skilled humanoid robots by 2050. Bipedal robots are an ongoing challenge, especially when those robots have to interact with full-size soccer balls, balancing on one leg to kick with the other. The humanoids must have humanlike proportions and sensor configurations akin to human perception—which means, among other things, no omnidirectional sensing.


The Adult-Size Humanoid League final between NimbRo and the HERoEHS at RoboCup 2022 in Bangkok, Thailand.

Simulation

Even in the Standard Platform League, hardware can be frustrating, so the RoboCup Simulation League allows teams to work entirely in software. This enables more rapid progress using cutting-edge techniques. My own team, UT Austin Villa, started using hierarchical machine learning to develop skills such as walking and kicking in the Simulation League in 2014, which allowed us to dominate the competition. But in 2022, FC Portugal and Magma Offenburg were able to surpass us with deep reinforcement learning methods.

Other Leagues

While soccer is the ultimate goal of RoboCup, and it motivates research on fundamental topics such as robot vision and mobility, it can be hard to see the practicality in a game. Other RoboCup leagues thus focus on more immediate applications. RoboCup@Home features robots for domestic environments, RoboCup Rescue is for search-and-rescue robots for disaster response, and RoboCup@Work develops robots for industry and logistics tasks.


Robots vs. Humans

European RoboCup 2022 Middle-Size winning team Tech United plays an exhibition game against professional women’s team Vitória SC in Guimarães, Portugal. The women intentionally tied against the robots to end the game in penalties.

At the conclusion of a RoboCup event, there has been a tradition since 2011 of the trustees of the RoboCup Federation playing a friendly game against the winning team of the Middle-Size League. In recent years, the middle-size robots have become surprisingly competitive, able to keep possession of the ball, dribble around the opposing team, and string together passes across the field. The robots may not be ready to take on the world champions quite yet, but the progress has been impressive—in 2022, Tech United Eindhoven played a friendly match against a Portuguese professional women’s team, Vitória SC, and the robots managed to score several goals (after the women took it easy on them).

RoboCup’s Legacy

Compared to 25 years ago, there are now many more robotics competitions to choose from, and applications of AI and robotics are much more widespread. RoboCup inspired many of the other competitions, and it remains the largest such event. Our community is determined to keep pushing the state of the art. The event draws teams from research labs specializing in mechanical engineering, medical robotics, human-robot interaction, learning from demonstration, and many other fields, and there’s no better way to train new students than to encourage them to immerse themselves in RoboCup.

The importance of RoboCup can also be measured beyond the competition itself. One of the most notable successes, stemming from the early years of the competition, was the technology spun off from the Small-Size League to form the basis of Kiva Systems. The hardware of Kiva’s robot was designed by Cornell’s RoboCup team, led by Raffaello D’Andrea. After helming his team to Small-Size League victories in 1999, 2000, 2002, and 2003, D’Andrea went on to cofound Kiva Systems. The company, which developed warehouse robots that moved shelves of goods, was acquired by Amazon in 2012 for US $775 million to become the core of Amazon’s warehouse robotics program.

Future of RoboCup

At this point, you may be wondering what the prospects are for achieving RoboCup’s founding goal—enabling a team of autonomous humanoid robots to beat the world’s best human team at a game of soccer on a real, outdoor field by the year 2050. Will soccer go the way of chess, checkers, poker, Gran Turismo, “Jeopardy!”, and other human endeavors and be conquered by AI? Or will the requirements for real-world perception and humanlike speed and agility keep soccer out of reach for robots? This question remains a source of uncertainty and debate within the RoboCup community. Although 27 years is a very long time in technological terms, physical automation tends to be significantly harder and take much longer than purely software-oriented tasks do.

Ultimately, if the community is going to achieve its goals, we will need to address two challenges: building hardware that can move as quickly and easily as people do, and creating software that can outsmart the best human players in real time. Some experts point to existing state-of-the-art humanoid robots as evidence that sufficiently capable hardware is already available. As impressive as they are, however, I don’t think these robots can match the capabilities of the most skilled human athletes just yet. I haven’t seen any evidence that even the best humanoid robots today can dribble a soccer ball and deftly change directions at high speed in the way that a professional soccer player can—especially when factoring in the requirement that for professional players to agree to get on the field with robots, the robots will need to be not too heavy or powerful: They will need to be both skilled and eminently safe.

Regardless of how it turns out, there is no question in my mind that RoboCup is an enduring grand challenge for AI and robotics, as well as a great training ground for the next generation of roboticists. The RoboCup community is thriving, generating new ideas and new engineers and scientists. I’ve been proud to have led the RoboCup organization, and look forward to seeing where it will go from here.



The Mechanical Turk was a fraud. The chess-playing automaton, dressed in a turban and elaborate Ottoman robes, toured Europe in the closing decades of the 18th century accompanied by its inventor Wolfgang von Kempelen. The Turk wowed Austrian empress Maria Theresa, French emperor Napoleon Bonaparte, and Prussian king Frederick the Great as it defeated some of the great chess players of its day. In reality, though, the automaton was controlled by a human concealed within its cabinetry.

What was the first chess-playing automaton?

Torres Quevedo made his mark in a number of fields, including funiculars, dirigibles, and remote controls, before turning to “thinking” machines. Credit: Alamy

A century and a half after von Kempelen’s charade, Spanish engineer Leonardo Torres Quevedo debuted El Ajedrecista (The Chessplayer), a true chess-playing automaton. The machine played a modified endgame against a human opponent. It featured a vertical chessboard with pegs for the chess pieces; a mechanical arm moved the pegs.

Torres Quevedo invented his electromechanical device in 1912 and publicly debuted it at the University of Paris two years later. Although clunky in appearance, the experimental model still managed to create a stir worldwide, including a brief write-up in 1915 in Scientific American.

In El Ajedrecista’s endgame, the machine (white) played a king and a rook against a human’s lone king (black). The program required a fixed starting position for the machine’s king and rook, but the opposing king could be placed on any square in the first six ranks (the horizontal rows, that is) that wouldn’t put the king in danger. The program assumed that the two kings would be on opposite sides of the rank controlled by the rook. Torres Quevedo’s algorithm allowed for 63 moves without capturing the king, well beyond the usual 50-move rule that results in a draw. With these restrictions in place, El Ajedrecista was guaranteed a win.
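For flavor, here is a toy decision procedure in the same spirit: a few fixed rules that always produce a move for the king-and-rook side. It is emphatically not Torres Quevedo’s actual rule set (his published procedure partitions the board into zones), just a sketch of how an endgame can be reduced to mechanical condition checking.

```python
# Toy fixed-rule mover for the king-and-rook vs. king endgame, in the
# spirit (not the letter) of El Ajedrecista. White herds the black king
# toward the top rank, using the rook as a moving fence. Squares are
# (file, rank), both 0-7; no general move-legality checking is attempted.

def white_move(wk, wr, bk):
    """Return ('K' or 'R', destination) chosen by fixed rules."""
    # Rule 1: if the black king attacks the rook, slide the rook to the
    # far side of the board along its current rank.
    if max(abs(bk[0] - wr[0]), abs(bk[1] - wr[1])) <= 1:
        return ("R", (0 if wr[0] >= 4 else 7, wr[1]))
    # Rule 2: keep the rook one rank below the black king, fencing it in.
    if wr[1] < bk[1] - 1:
        return ("R", (wr[0], bk[1] - 1))
    # Rule 3: otherwise advance the white king toward the black king's
    # file, staying behind the rook's rank.
    df = (bk[0] > wk[0]) - (bk[0] < wk[0])  # -1, 0, or +1
    dr = 1 if wk[1] < wr[1] - 1 else 0
    return ("K", (wk[0] + df, wk[1] + dr))

print(white_move(wk=(4, 0), wr=(0, 1), bk=(4, 3)))  # ('R', (0, 2))
```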

In 1920, Torres Quevedo upgraded the appearance and mechanics of his automaton [pictured at top], although not its programming. The new version moved its pieces by way of electromagnets concealed below an ordinary chessboard. A gramophone recording announced jaque al rey (Spanish for “check”) or mate (checkmate). If the human attempted an illegal move, a lightbulb gave a warning signal; after three illegal attempts, the game would shut down.

Building a machine that thinks

The first version of the chess automaton, from 1912, featured a vertical chessboard and a mechanical arm to move the pieces. Credit: Leonardo Torres Quevedo Museum/Polytechnic University of Madrid

Unlike Wolfgang von Kempelen, Torres Quevedo did not create his chess-playing automaton for the entertainment of the elite or to make money as a showman. The Spanish engineer was interested in building a machine that “thinks”—or at least makes choices from a relatively complex set of relational possibilities. Torres Quevedo wanted to reframe what we mean by thinking. As the 1915 Scientific American article about the chess automaton notes, “There is, of course, no claim that it will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought.”

In 1914, Torres Quevedo laid out his ideas in an article, “Ensayos sobre automática. Su definición. Extensión teórica de sus aplicaciones” (“Essays on Automatics. Its Definition. Theoretical Extent of Its Applications”). In the article, he updated Charles Babbage’s ideas for the analytical engine with the currency of the day: electricity. He proposed machines doing arithmetic using switching circuits and relays, as well as automated machines equipped with sensors that would be able to adjust to their surroundings and carry out tasks. Automatons with feelings were the future, in Torres Quevedo’s view.

How far could human collaboration with machines go? Torres Quevedo built his chess player to find out, as he explained in his 1917 book Mis inventos y otras páginas de vulgarización (My inventions and other popular writings). By entrusting machines with tasks previously reserved for human intelligence, he believed that he was freeing humans from a type of servitude or bondage. He was also redefining what was categorized as thought.

Claude Shannon, the information-theory pioneer, later picked up this theme in a 1950 article, “A Chess-Playing Machine,” in Scientific American on whether electronic computers could be said to think. From a behavioral perspective, Shannon argued, a chess-playing computer mimics the thinking process. On the other hand, the machine does only what it has been programmed to do, clearly not thinking outside its set parameters. Torres Quevedo hoped his chess player would shed some light on the matter, but I think he just opened a Pandora’s box of questions.

Why isn’t Leonardo Torres Quevedo known outside Spain?

Despite Torres Quevedo’s clear position in the early history of computing—picking up from Babbage and laying a foundation for artificial intelligence—his name has often been omitted from narratives of the development of the field (at least outside of Spain), much to the dismay of the historians and engineers familiar with his work.

That’s not to say he wasn’t known and respected in his own time. Torres Quevedo was elected a member of the Spanish Royal Academy of Sciences in 1901 and became an associate member of the French Academy of Sciences in 1927. He was also a member of the Spanish Society of Physics and Chemists and the Spanish Royal Academy of Language and an honorary member of the Geneva Society of Physics and Natural History. Plus El Ajedrecista has always had a fan base among chess enthusiasts. Even after Torres Quevedo’s death in 1936, the machine continued to garner attention among the cybernetic set, such as when it defeated Norbert Wiener at an influential conference in Paris in 1951. (To be fair, it defeated everyone, and Wiener was known to be a terrible player.)

One reason Torres Quevedo’s efforts in computing aren’t more widely known might be because the experiments came later in his life, after a very successful career in other engineering fields. In a short biography for Proceedings of the IEEE, Antonio Pérez Yuste and Magdalena Salazar Palma outlined three areas that Torres Quevedo contributed to before his work on the automatons.

Torres Quevedo’s design for the Whirlpool Aero Car, which offers a thrilling ride over the Niagara River, debuted in 1916. Credit: Wolfgang Kaehler/LightRocket/Getty Images

First came his work, beginning in the 1880s, on funiculars, the most famous of which is the Whirlpool Aero Car. The cable car is suspended over a dramatic gorge on the Niagara River on six interlocking steel cables, connecting two points along the shore half a kilometer apart. It is still in operation today.

His second area of expertise was aeronautics, in which he held patents on a semirigid frame system for dirigible balloons based on an internal frame of flexible cables.

And finally, he invented the Telekine, an early remote control device, which he developed as a way to safely test his airships without risking human life. He started by controlling a simple tricycle using a wireless telegraph transmitter. He then successfully used his Telekine to control boats in the Bilbao estuary. But he abandoned these efforts after the Spanish government denied his request for funding. The Telekine was marked with an IEEE Milestone in 2007.

If you’d like to explore Torres Quevedo’s various inventions, including the second chess-playing automaton, consider visiting the Museo Torres Quevedo, located in the School of Civil Engineering at the Polytechnic University of Madrid. The museum has also developed online exhibits in both Spanish and English.

A more cynical view of why Torres Quevedo’s computer prowess is not widely known may be because he saw no need to commercialize his chess player. Nick Montfort, a professor of digital media at MIT, argues in his book Twisty Little Passages (MIT Press, 2005) that El Ajedrecista was the first computer game, although he concedes that people might not recognize it as such because it predated general-purpose digital computing by decades. Of course, for Torres Quevedo, the chess player existed as a physical manifestation of his ideas and techniques. And no matter how visionary he may have been, he did not foresee the multibillion-dollar computer gaming industry.

The upshot is that, for decades, the English-speaking world mostly overlooked Torres Quevedo, and his work had little direct effect on the development of the modern computer. We are left to imagine an alternate history of how things might have unfolded if his work had been considered more central. Fortunately, a number of scholars are working to tell a more international, and more complete, history of computing. Leonardo Torres Quevedo’s is a name worth inserting back into the historical narrative.

References

I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021). In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982).

Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915).

My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims by other secondary sources. There is clearly an opportunity for someone with better language skills than I to do justice to this pioneer in computer history.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.”


