Feed aggregator



It never gets any easier to watch: a control room full of engineers, waiting anxiously as the robotic probe they’ve worked on for years nears the surface of the moon. Telemetry from the spacecraft says everything is working; landing is moments away. But then the vehicle goes silent, and the control room does too, until, after an agonizing wait, the project leader keys a microphone to say the landing appears to have failed.

The last time this happened was in April, in this case to a privately funded Japanese mission called Hakuto-R. It was in many ways similar to crashes by Israel’s Beresheet and India’s Chandrayaan-2 in 2019. All three landers seemed fine until final approach. Since the 1970s, only China has successfully put any uncrewed ships on the moon (most recently in 2020); Russia’s last landing was in 1976, and the United States hasn’t tried since 1972. Why, half a century after the technological triumph of Apollo, have the odds of a safe lunar landing actually gone down?

The question has some urgency because five more landing attempts, by companies or government agencies from four different countries, could well be made before the end of 2023; the next, Chandrayaan-3 from India, is scheduled for launch as early as this week. NASA’s administrator, Bill Nelson, has called this a “golden age” of spaceflight, culminating in the landing of Artemis astronauts on the moon late in 2025. But every setback is watched uneasily by others in the space community.

2023 Possible Lunar Landings

India: Chandrayaan-3, from the Indian Space Research Organization, with a hoped-for launch in mid-July and, if that succeeds, a landing in August.

Chandrayaan-3 could be heading to the moon soon. Credit: ISRO

Russia: Luna-25, from the Roscosmos space agency, which currently says it plans an August launch.

United States: Nova-C IM-1, from a private Houston-based company, Intuitive Machines, currently targeted for launch in the third quarter of 2023.

United States: Peregrine Mission 1, from the Pittsburgh-based company Astrobotic Technology, is waiting for modifications to its Vulcan Centaur launch vehicle. A launch date of 4 May was put off; a new one has not been set.

Japan: SLIM (Smart Lander for Investigating Moon), from the JAXA space agency. An August launch date has been put off.

Intuitive Machines hopes to launch the Nova-C IM-1 this season. Credit: Intuitive Machines

Each of these missions is behind schedule, in some cases by years, and several could slip into 2024 or later.

The Fate of Hakuto-R Mission 1

A day after Hakuto-R went silent, an American spacecraft, Lunar Reconnaissance Orbiter, passed over the landing site; its imagery, compared with previous shots of the area, showed clearly that there had been a crash. The company running Hakuto-R, ispace, did an analysis of the crash and concluded that its software had perhaps been too clever for its own good.

According to ispace, the lander’s onboard sensors indicated a sharp rise in altitude when the craft passed over a 3-kilometer-high cliff, later determined to be the rim of a crater. But the onboard computer had not been programmed to expect a cliff that high; it had been instructed that, if the measured altitude diverged sharply from the expected one, it should assume something was wrong with the ship’s radar altimeter and disregard its input. The computer, said ispace, therefore behaved as if the ship were near touchdown when it was actually 5 km above the surface. It kept firing its engines, descending ever so gently, until its fuel ran out. “At that time, the controlled descent of the lander ceased, and it is believed to have free-fallen to the moon’s surface,” ispace said in a press release.
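The failure logic ispace describes can be sketched in a few lines. This is a toy reconstruction based only on the description above, not ispace’s flight software; the function name and the 1,000-meter threshold are illustrative:

```python
# Toy sketch (not ispace's actual flight code) of the sensor-rejection
# logic described above: if the radar altimeter disagrees too much with
# the internal estimate, the altimeter is assumed faulty and ignored.

def fuse_altitude(estimated_alt_m: float, radar_alt_m: float,
                  max_discrepancy_m: float = 1000.0) -> float:
    """Return the altitude the guidance software will act on."""
    if abs(radar_alt_m - estimated_alt_m) > max_discrepancy_m:
        # Large mismatch: trust the internal estimate, discard the sensor.
        return estimated_alt_m
    return radar_alt_m

# Passing over a 3-km crater rim, the radar suddenly reads far higher
# than expected, trips the check, and the true reading is thrown away.
estimate = 2000.0   # what the lander believes (meters above terrain)
radar = 5000.0      # what the altimeter actually measures
print(fuse_altitude(estimate, radar))  # -> 2000.0, the stale estimate wins
```

With the true altitude rejected, the guidance loop keeps acting on an estimate that drifts ever further from reality, which is exactly the behavior ispace reported.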

The crash site of the privately mounted Japanese Hakuto-R Mission 1 lunar lander, imaged by NASA’s Lunar Reconnaissance Orbiter. Credit: NASA/Goddard Space Flight Center/Arizona State University

Takeshi Hakamada, the CEO of ispace, put a brave face on it. “We acquired actual flight data during the landing phase,” he said. “That is a great achievement for future missions.”

Will this failure be helpful to other teams trying to make landings? Only to a limited extent, they say. As the so-called new space economy expands to include startup companies and more countries, there are many collaborative efforts, but there is also heightened competition, so there’s less willingness to share data.

Better Technology, Tighter Budgets

“Our biggest challenges are that we are doing this as a private company,” says John Thornton, the CEO of Astrobotic, whose Peregrine lander is waiting to go. “Only three nations have landed on the moon, and they’ve all been superpowers with gigantic, unlimited budgets compared to what we’re dealing with. We’re landing on the moon for on the order of $100 million. So it’s a very different ballgame for us.”

To put US $100 million in perspective: Between 1966 and 1968, NASA surprised itself by safely landing five of its seven Surveyor spacecraft on the moon as scouts for Apollo. The cost at the time was $469 million. That number today, after inflation, would be about $4.4 billion.
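As a rough check on that figure, the conversion is a simple consumer-price-index ratio (the CPI values below are approximate annual averages, not from the article):

```python
# Rough inflation check using approximate US CPI annual averages
# (1967 ~ 33.4, 2023 ~ 304.7 on the 1982-84 = 100 scale).
cpi_1967 = 33.4
cpi_2023 = 304.7
surveyor_cost_then = 469e6                 # dollars, 1966-68 program cost
cost_today = surveyor_cost_then * cpi_2023 / cpi_1967
print(f"${cost_today / 1e9:.1f} billion")  # -> roughly $4.3 billion
```

The exact result depends on which price index and base years are used, which is why estimates in the low-to-mid $4 billion range are all defensible.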

Surveyor’s principal means of gauging its distance to the surface was radar, a mature but sometimes imprecise technology. Swati Mohan, the guidance and navigation lead for NASA’s Perseverance rover landing on Mars in 2021, likened radar to “closing your eyes and holding your hands out in front of you.” So Astrobotic, for instance, has turned to Doppler lidar—laser ranging—which has about 10 times better resolution. It also uses terrain-relative navigation, or TRN, a visually based system that takes rapid-fire images of the approaching ground and compares them to an onboard database of terrain images. Some TRN imagery comes from the same Lunar Reconnaissance Orbiter that spotted Hakuto-R.
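The core idea of TRN can be illustrated with a toy template match: slide the descent-camera view across an onboard map and keep the best-scoring offset. This sketch assumes nothing about Astrobotic’s actual implementation, which fuses many more cues:

```python
import numpy as np

# Toy terrain-relative navigation: brute-force template matching of a
# camera image against an onboard orbital map (illustrative only).

def locate(camera: np.ndarray, onboard_map: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) in the map where the camera image fits best."""
    ch, cw = camera.shape
    mh, mw = onboard_map.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(mh - ch + 1):
        for c in range(mw - cw + 1):
            patch = onboard_map[r:r + ch, c:c + cw]
            # Sum of squared differences; lower is better, so negate.
            score = -np.sum((patch - camera) ** 2)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
terrain = rng.random((40, 40))       # stand-in for an LRO-derived map
view = terrain[12:20, 25:33].copy()  # what the descent camera "sees"
print(locate(view, terrain))         # -> (12, 25)
```

Knowing where the camera view sits in the map fixes the lander’s horizontal position, independent of radar or lidar altitude measurements.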

“Our folks are feeling good, and I think we’ve done as much as we possibly can to make sure that it’s successful,” says Thornton. But, he adds, “it’s an unforgiving environment where everything has to work.”



This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Instead of one autonomous robot to fly, another to drive on land, and a third to navigate on water, a new hybrid drone can do all three. To carry out complex missions, scientists are increasingly experimenting with drones that can do more than just fly.

The idea for a drone capable of navigating land, air, and sea came when researchers at New York University Abu Dhabi’s Arabian Center for Climate and Environmental Sciences (ACCESS) noted they would like a drone “capable of flying out to potentially remote locations and sampling bodies of water,” says study lead author Dimitrios Chaikalis, a doctoral candidate at NYU Abu Dhabi.

Environmental research often “relies on sample collections from hard-to-reach areas,” Chaikalis says. “Flying vehicles can easily navigate to such areas, while being capable of landing on water and navigating on the surface allows for sampling for long hours with minimal energy consumption before flying back to its base.”

The new autonomous vehicle is a tricopter with three pairs of rotors for flight, three wheels for roaming on land, and two thrusters to help it move on water. The rubber wheels were 3D-printed directly around the body of the main wheel frame, eliminating the need for metal screws and ball bearings, which would run the risk of rust after exposure to water. The entire machine weighs less than 10 kilograms, in order to comply with drone regulations.

A buoyant, machine-cut Styrofoam body was placed between the top of the machine, which holds the rotors, and its bottom, which holds the wheels and thrusters. This flotation device served as the machine’s hull in the water, and was shaped like a trefoil to leave room for the airflow of the rotors.

“The resulting vehicle is capable of traversing every available medium—air, water, ground—meaning you can eventually deploy autonomous vehicles capable of overcoming ever-increasing difficulties and obstacles,” Chaikalis says.

The drone possesses two open-source PX4 autopilot systems: one for the air, and the other for navigating both land and water. “Aerial navigation differs heavily from ground or water surface navigation, which actually bear a lot of similarities with each other,” Chaikalis says. “So we designed the ground and water surface navigation to both work with the same autopilot, changing only the motor output for each case.”

An Intel NUC computer served as the command module. The computer can switch between the two autopilots as needed, as well as interface with a radio transceiver and GPS. All these electronics were secured within a waterproof plastic casing.
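The switching scheme Chaikalis describes—one flight autopilot, plus one surface autopilot whose only difference is which motors it drives—might be sketched like this (class and function names are hypothetical, not from the team’s code):

```python
# Hedged sketch of the two-autopilot architecture described above.
# Names are illustrative; the real system uses PX4 autopilots and an
# Intel NUC as the command module.

class SurfaceAutopilot:
    """Shared ground/water controller; only the motor outputs differ."""
    def __init__(self, medium: str):
        assert medium in ("ground", "water")
        self.motors = "wheels" if medium == "ground" else "thrusters"

    def command(self, forward: float, turn: float) -> dict:
        return {"actuators": self.motors, "forward": forward, "turn": turn}

class FlightAutopilot:
    def command(self, forward: float, turn: float) -> dict:
        return {"actuators": "rotors", "forward": forward, "turn": turn}

def select_autopilot(medium: str):
    """Command-module logic: pick the controller for the current medium."""
    if medium == "air":
        return FlightAutopilot()
    return SurfaceAutopilot(medium)

print(select_autopilot("water").command(0.5, 0.0)["actuators"])  # thrusters
```

The appeal of this split is that the navigation logic for ground and water is written once, with only the actuator mapping changing between the two media.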

“Of course, you also have to get waterproof motors for the ground-vehicle wheels, since they’ll be fully submerged when on water,” Chaikalis says. “Such motors proved difficult to interface with commercial autopilot units, so we ended up also designing custom hardware and firmware for interfacing such communications.”

The drone can operate under radio control or autonomously on preprogrammed missions. Its lithium polymer batteries give it a flight time of 18 minutes.

In experiments, the Styrofoam hull absorbed water during floating, increasing its weight by 20 percent within 30 minutes. The Styrofoam did release this water during flight, albeit slowly, with a 20 percent weight loss after 100 minutes. The scientists note this significant variation in weight needs to be accounted for in the autopilot design, or they could add a water-resistant coating, although that would permanently increase the overall weight.
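To see why the changing mass matters to the autopilot, consider hover thrust, which scales directly with mass. A toy calculation, assuming for illustration that the absorbed water adds 20 percent to the vehicle’s total mass (the paper reports the gain for the hull):

```python
# Toy illustration (not the paper's controller) of the effect of
# absorbed water on required hover thrust: thrust = mass * g.
G = 9.81  # gravitational acceleration, m/s^2

def hover_thrust_n(dry_mass_kg: float, absorbed_fraction: float) -> float:
    """Thrust in newtons needed to hover with extra absorbed-water mass."""
    return dry_mass_kg * (1.0 + absorbed_fraction) * G

dry = hover_thrust_n(10.0, 0.0)    # thrust needed when dry
wet = hover_thrust_n(10.0, 0.20)   # thrust needed after 30 min afloat
print(f"{(wet - dry) / dry:.0%} more thrust needed")  # -> 20% more thrust needed
```

An autopilot tuned for the dry mass would therefore sag or oscillate after a long soak unless it estimates the extra weight online.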

In addition, “although waterproof against splashes and light submersion, this is not yet a fully submersible design, meaning a failure of the flotation device could potentially be catastrophic,” Chaikalis says.

In the future, the researchers note they could optimize the hull to make it strong enough to withstand complex maneuvers and to minimize air drag during flight. They would also like to make the drone fully modular so they can easily change its capabilities by attaching or detaching modules from it.

“We imagine being capable of, for example, selecting to drop the ground mechanism behind if necessary to save power, then returning to it later to land,” Chaikalis says. “Or allow the water module to navigate on water, while the [unmanned aerial vehicle] returns to a nearby base for recharge and picking it up again later.”

A patent application is pending on the new drone. The scientists detailed their findings 9 June at the 2023 International Conference on Unmanned Aircraft Systems in Warsaw.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL

Enjoy today’s videos!

Here are a couple of highlight videos from the Tech United RoboCup team, competing in the Middle-Size League at RoboCup 2023 in Bordeaux. There’s an especially impressive goal toward the end of the second video—as a soccer-playing human, I would have been proud to score something like that.

[ Tech United ]

How to infuriate a NAO for 2 minutes and 38 seconds.

[ Team B-Human ]

Goalie behavior testing for Robot ARTEMIS 2, Team RoMeLa UCLA.

[ RoMeLa ]

We fabricated an array of six inflatable actuators driven by a single pressure line that play Beethoven’s Ode to Joy. Our actuators are buckled shells that snap through in a second configuration when the inner pressure reaches a threshold. The actuators are designed to play a piano key each time they snap and we developed an algorithm that based on a desired sequence of notes gives as output the geometrical parameters of each actuator, thus part of the control architecture is encoded in their mechanics.

“Nonlinear inflatable actuators for distributed control in soft robots” was recently published in Nature Advanced Materials.

[ Nature ] via [ SoftLab ]

Thanks, Edoardo!

Always nice to be reminded that you’ve conquered things that used to slip you up.

[ Boston Dynamics ]

Sure, let’s make winged scorpion robots!

[ GRVC ]

Introducing the TIAGo Pro, a revolutionary robot with fully torque-controlled arms with optimal arm mounting. This enhances the manipulation capabilities and enables state-of-the-art Human-Robot Interaction. Designed for agile manufacturing and future healthcare applications.

Equipped with major hardware upgrades like torque-controllable arms, an EtherCAT communication bus at 1 kHz, and increased computational power, the TIAGo Pro boosts productivity and efficiency with machine learning algorithms.

[ PAL Robotics ]

Visual-inertial odometry (VIO) is the most common approach for estimating the state of autonomous micro aerial vehicles using only onboard sensors. Existing methods improve VIO performance by including a dynamics model in the estimation pipeline. However, such methods degrade in the presence of low-fidelity vehicle models and continuous external disturbances, such as wind. Our hybrid dynamics model uses a history of thrust and IMU measurements to predict vehicle dynamics. To demonstrate the performance of our method, we present results on both public and novel drone dynamics datasets and show real-world experiments of a quadrotor flying in strong winds up to 25 km/h.

To be presented at RSS 2023 in Daegu, South Korea.

[ UZH RPG ]

This cute little robotic dude is how I want all my packages delivered.

[ RoboDesign Lab ]

Telexistence raises USD 170M Series B, Announces new partnerships with SoftBank Robotics Group and Foxconn, accelerating its business expansion in North America and operational capabilities in mass production.

[ Telexistence ]

The qb SoftClaw demonstrating that the only good tomato is a squished-to-death tomato.

[ qb Robotics ]

You see multi-colored balls, we see pills and tablets, garments and fabrics, ripe berries and unripe berries. Learning one simple concept–how to sort items–can be applied to endless work scenarios.

[ Sanctuary ]

Some highlights from Kuka’s booth at Automatica in Germany. Two moments jumped out at me: the miniature automotive assembly line made of LEGO Technic, and the mobile robot with a big “NO RIDING” sticker on it. C’mon, Kuka, let us have some fun!

Also at Automatica was the final round of the Kuka Innovation Award. It was a really interesting competition this year, although one of the judges seems like a real dork.

[ Kuka ]

I don’t know how Pieter pulled this off, but here’s a clip from the Robot Brains podcast, where he talks with astronaut Woody Hoburg up on the International Space Station.

[ Robot Brains ]



Over the years, efforts in bioinspired soft robotics have led to mobile systems that emulate features of natural animal locomotion. This includes combining mechanisms from multiple organisms to further improve movement. In this work, we seek to improve locomotion in soft, amphibious robots by combining two independent mechanisms: sea star locomotion gait and gecko adhesion. Specifically, we present a sea star-inspired robot with a gecko-inspired adhesive surface that is able to crawl on a variety of surfaces. It is composed of soft and stretchable elastomer and has five limbs that are powered with pneumatic actuation. The gecko-inspired adhesion provides additional grip on wet and dry surfaces, thus enabling the robot to climb on 25° slopes and hold on statically to 51° slopes.



Robotics engineers often look to how animals get around for inspiration for more effective and efficient artificial limbs, joints, and muscles. One particularly fruitful source of inspiration comes from studying creatures that use their limbs for different kinds of mobility—think amphibians that both walk and swim, or birds that both walk and fly. Such inspiration has led to the SPIDAR that crawls and flies, the LEO that skateboards and slacklines, and robots that can switch between bipedal and quadrupedal modes.

Now engineers at Caltech and Northeastern University (in Boston) have developed a multimodal robot that can navigate in not two or three but eight different ways—including walking, crawling, rolling, tumbling, and even flying. That said, the Multi-Modal Mobility Morphobot (M4) looks more like a sleek little cart than anything out of a bestiary. M4 is 70 centimeters long and 35 cm high, with four legs, each with two joints. It also has two ducted fans at the ends of each leg, which can function as legs, propeller thrusters, or wheels. The robot is surprisingly light—around 6 kilograms—considering that this includes its onboard computers, sensors, communication devices, joint actuators, propulsion motors, power electronics, and battery. It is capable of autonomous, self-contained operations.

The details of M4 were published in Nature Communications on 27 June.

The bio-inspired ‘transformer’ that crawls, rolls and flies. Nature Video

Integrating so many modes of transport on a single platform is a first, says Alireza Ramezani, a robotics engineer from Northeastern University and one of the lead investigators. The task called for challenging design considerations: “In multimodal robot design, as the number of modes [of locomotion] increases, each mode introduces its own design requirements,” he says. To integrate all these design requirements, the researchers had to play with various trade-offs.

“When you design aerial robots, you want your systems to be extremely light,” says Ramezani. “But if you want to achieve legged locomotion, you need bulky actuators that can produce torque in the joints for dynamic interactions with the ground surface. These bulky components can negatively affect aerial mobility. And this is just for a system with two modes of mobility.” M4 can walk on rough terrain, climb steep slopes, tumble over bulky objects, crawl under low-ceiling obstacles, and fly.

The researchers took their design inspiration from the locomotion plasticity seen in nature. Morteza Gharib, an aeronautics professor from Caltech and a colead for the project, says that nature was “an open book of design for us,” especially in the way that it repurposes systems in order to deliver functionality. “The unique aspect of [this] robot is that it has the highest number of functionalities with the minimum number of components, and also is capable of making decisions on which one to use for different challenges.”

Repurposing was the key to making the design of M4 scalable—that is, increasing its payload capacity without compromising its mobility. Focusing on how the robot could reuse its existing appendages for different modes of locomotion without adding mass freed up payload capacity for computers, sensors, and so on. The team achieved this scalability through what it calls redundancy manipulation.

M4 may look like a simple box on wheels, but it is the first robot capable of eight different ways of getting around town. Northeastern University/Nature Communications

In other words, M4 can use its four appendages to roll like a ground vehicle or crawl as a quadruped, but it can also stand up on two appendages. While standing, the robot has a higher vantage point and more dynamic locomotion, but as a quadruped, it has four contact points with the environment and is therefore more stable. “This is [a matter of] finding a balance between the trade-offs introduced by each mode of mobility, and the mechanism to go from one mode to another is through manipulating these redundancies,” says Gharib.

The research team carried out experiments to put M4 through its paces—wheeled and quadrupedal locomotion, ground as well as aerial operation, and more. They report that M4 demonstrated fully autonomous multimodal path planning, switching between traveling on the ground and flying.

Using its onboard sensors and computers, M4 was able to negotiate an unstructured environment by switching from rolling to flying, but the researchers want more. “The next step for us is to have all of [M4’s] modes of mobility being used by the robot in a completely autonomous fashion based on the sensory information that it gathers from the environment,” Ramezani says.

Funding for the M4 project came from NASA’s Jet Propulsion Laboratory and the National Science Foundation. The researchers expect multimodal robots to play a big role in future space exploration. Recently, NASA sent an aerial vehicle, the tiny helicopter Ingenuity, to Mars aboard the rover Perseverance to act as a scout for the larger vehicle, and it was a great success, Ramezani points out.

Space exploration aside, the researchers also see potential for search-and-rescue operations, package handling and delivery, environmental applications, and digital agriculture, among others. The system’s ability to change its shape and form gives it many advantages over robots with fixed geometry, Gharib says.

The researchers are still looking to improve M4. “There is no end to what you would like to see from a robot like this,” Ramezani says. “For instance…it doesn’t take much to extend the existing capabilities of M4 to underwater locomotion using its quad copters.” Meanwhile, they also continue to work on making M4’s existing mobility modes more efficient.



In recent years, various service robots have been deployed in stores as recommendation systems. Previous studies have sought to increase the influence of these robots by enhancing their social acceptance and trust. However, when such service robots recommend a product to customers in real environments, the effect on the customers is shaped not only by the robot itself but also by the social influence of surrounding people, such as store clerks. Leveraging the social influence of the clerks may therefore increase the influence of the robots on the customers. Hence, we compared the influence of robots with and without collaborative customer service between the robots and clerks in two bakery stores. The experimental results showed that collaborative customer service increased the purchase rate of the recommended bread and improved customers’ impressions of the robot and of the store experience. Because the results also showed that the workload required for the clerks to collaborate with the robot was not high, this study suggests that stores with service robots could benefit from introducing collaborative customer service.



In July of 2010, I traveled to Singapore to take care of my then 6-year-old son Henry while his mother attended an academic conference. But I was really there for the robots.

IEEE Spectrum’s digital product manager, Erico Guizzo, was our robotics editor at the time. We had just combined forces with robot blogger par excellence and now Spectrum senior editor Evan “BotJunkie” Ackerman to supercharge our first and most successful blog, Automaton. When I told Guizzo I was going to be in Singapore, he told me that RoboCup, an international robot soccer competition, was going on at the same time. So of course we wrangled a press pass for me and my plus one.

I brought Henry and a video camera to capture the bustling bots and their handlers. Guizzo told me that videos of robots flailing at balls would do boffo Web traffic, so I was as excited as my first grader (okay, more excited) to be in a convention center filled with robots and teams of engineers toiling away on the sidelines to make adjustments and repairs and talk with each other and us about their creations.

Even better than the large humanoid robots lurching around like zombies and the smaller, wheeled bots scurrying to and fro were the engineers who tended to them. They exuded the kind of joy that comes with working together to build cool stuff, and it was infectious. On page 40 of this issue, Peter Stone—past president of the RoboCup Federation, professor in the computer science department of the University of Texas at Austin, and executive director of Sony AI America—captures some of that unbridled enthusiasm and gives us the history of the event. To go along with his story, we include action shots taken at various RoboCups throughout the 25 years of the event. You can check out this year’s RoboCup competitions going on 6–9 July at the University of Bordeaux, in Nouvelle-Aquitaine, France.

Earlier in 2010, the same year as my first RoboCup, Apple introduced what was in part pitched as the future of magazines: the iPad. Guizzo and photography director Randi Klett instantly grokked the possibilities of the format and the new sort of tactile interactivity (ah, the swipe!) to showcase the coolest robots they could find. Channeling the same spirit I experienced in Singapore, Guizzo, Klett, and app-maker Tendigi launched the Robots app in 2012. It was an instant hit, with more than 1.3 million downloads.

To reach new audiences on devices beyond the iOS platform, we ported Robots from the app world to the Web. With the help of founding sponsors—including the IEEE Robotics and Automation Society and Walt Disney Imagineering—and the support of the IEEE Foundation, the Robots site launched in 2018 and quickly found a following among STEM educators, students, roboticists, and the general public.

By 2022 it was clear that the site, whose basic design had not changed in years, needed a reboot. We gave it a new name and URL to make it easy for more people to find: RobotsGuide.com. And with the help of Pentagram, the design consultancy that reimagined Spectrum’s print magazine and website in 2021, in collaboration with Standard, a design and technology studio, we built the site as a modern, fully responsive Web app.

Featuring almost 250 of the world’s most advanced and influential robots, hundreds of photos and videos, detailed specs, 360-degree interactives, games, user ratings, educational content, and robot news from around the world, the Robots Guide helps everyone learn more about robotics.

So grab your phone, tablet, or computer and delve into the wondrous world of robots. It will be time—likely a lot of it—well spent.





Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, SOUTH KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, SOUTH KOREA
IROS 2023: 1–5 October 2023, DETROIT
CLAWAR 2023: 2–4 October 2023, FLORIANOPOLIS, BRAZIL
Humanoids 2023: 12–14 December 2023, AUSTIN, TEX.

Enjoy today’s videos!

Humanoid robot ARTEMIS training for RoboCup. Fully autonomous soccer playing outdoors.

[ RoMeLa ]

Imperial College London and Empa researchers have built a drone that can withstand high enough temperatures to enter burning buildings. The prototype drone, called FireDrone, could be sent into burning buildings or woodland to assess hazards and provide crucial first-hand data from danger zones. The data would then be sent to first responders to help inform their emergency response.

[ Imperial ]

We integrated Stable Diffusion to give Ameca the power to imagine drawings. One of the big challenges here was converting the image to vectors (lines) that Ameca could draw. The focus was on making fast sketches that are fun to watch. Ameca always signs their artwork.

I just don’t understand art.

[ Engineered Arts ]

Oregon State Professor Heather Knight and Agility’s Head of Customer Experience Bambi Brewer get together to talk about human-robot interaction.

[ Agility ]

Quadrupeds are great, but they have way more degrees of freedom than is comfortable to control. Maybe motion capture can fix that?

[ Leeds ]

The only thing I know for sure about this video is that Skydio has no idea what’s going on here.

[ Ugo ]

We are very sad to share the passing of Joanne Pransky. Robin Murphy shares a retrospective.

[ Robotics Through Science Fiction ]

ICRA 2023 was kind of bonkers. This video doesn’t do it justice, of course, but there were a staggering 6,000 people in attendance. And next year is going to be even bigger!

[ ICRA 2023 ]

India Flying Labs recently engaged more than 350 girls and boys in a two-day STEM workshop with locally-made drones.

[ WeRobotics ]

This paper proposes a very lightweight (3.2 kg) anthropomorphic dual-arm system capable of rolling along linear infrastructure such as power lines to perform dexterous, bimanual manipulation tasks, like installing clip-type bird flight diverters, or to conduct contact-based inspection of pipelines to detect corrosion or leaks.

[ GRVC ]

In collaboration with Trimble, we are announcing a proof-of-concept to enable robots and machines to follow humans and other machines in industrial applications. Together, we have integrated a patent-pending PFF follow™ smart-following module prototype developed by Piaggio Fast Forward onto a Boston Dynamics’ Spot® robot platform controlled by Trimble’s advanced positioning technology.

[ PFF ]

The X20 tunnel-inspection quadruped robot can accurately detect faults such as cable surface discharge, corona discharge, internal discharge, and temperature abnormalities, and upload them in real time. It can also adapt to inspection tasks in rugged terrain.

[ DeepRobotics ]

If you’re wondering why the heck anyone would try to build a robot arm out of stained glass, well, that’s an excellent thing to wonder.

[ Simone Giertz ]





From 2019 to 2022, I had the privilege of serving as president of the RoboCup Federation. RoboCup is an annual international competitive event that merges visionary thinking about how AI and robotics will change the world with practical robot design. Participants spend months solving diverse technical problems to enable their robots to autonomously play soccer, do household chores, or search for disaster victims. And their efforts are in turn enabling fundamental advances in a range of fields, including machine learning, multiagent systems, and human-robot interaction.

Dr. Hiroaki Kitano, Japanese artificial intelligence researcher, holding two members of his miniature robot football team. Peter Menzel/Science Source

RoboCup’s original goal, as defined by founding president Hiroaki Kitano, was to enable a team of fully autonomous humanoid robots to beat the best human soccer team in the world on a real, outdoor field by the year 2050. Since the first RoboCup competition in 1997, which featured three leagues—small-size wheeled robots, middle-size wheeled robots, and simulation—the event has expanded to include humanoid robot soccer leagues, as well as other leagues devoted to robots with more immediate practicality. The next RoboCup event takes place in July in Bordeaux, France, where 2,500 humans (and 2,000 robots) from 45 countries are expected to compete.

The Beginning

A history of RoboCup from 1997 to 2011, by Peter Stone and Manuela Veloso. www.youtube.com

The first RoboCup, which I attended as a student, was held in 1997 in a small exhibit room at the International Joint Conference on AI (IJCAI) in Nagoya, Japan. The level of competition was, by today’s standards, not very high. However, it’s important to remember that many “roboticists” back then didn’t work with real robots. RoboCup was unusual during its early years in that it forced people to build complete, integrated working systems that could sense, decide, and act.

Small-Size League


Over the years, RoboCup has seen huge improvements in the level of play, often following a pattern of one team making a discovery and dominating the competition for a year or two and then being supplanted by another. For example, in the Small-Size League, in which the robots use a golf ball and external perception and computing, Team FU-Fighters from Freie Universität Berlin introduced some innovations in the early 2000s. They began controlling the ball using a device that spins it backward toward the robot to “dribble.” A second device propelled the ball quickly forward to shoot it. As the first team to come up with this strategy, the FU-Fighters had a big advantage, but other teams soon followed suit.


The Small-Size League final between TIGERs Mannheim and ER-Force at RoboCup 2022 in Bangkok, Thailand.

Standard Platform League

Austin Villa, UT Austin’s RoboCup team, describing the RoboCup Standard Platform League in 2008.

While many RoboCup leagues include a hardware design component, some teams prefer to focus more on software. In the Standard Platform League, each team is provided with identical robots, and thus the best combination of algorithms and software engineering wins. The first standard platform for RoboCup was the Aibo, a small robot dog made by Sony. Ultimately, though, the goal of RoboCup is to achieve human-level performance on a bipedal robot, and so the Standard Platform League now uses a small humanoid robot called Nao, made by SoftBank. Rugged and capable, Nao is able to fall over and quickly get up again, a critical skill for soccer-playing humans and robots alike.


The Standard Platform League final between HTWK Robots and B-Human at RoboCup 2022 in Bangkok, Thailand.

Middle-Size League

The Middle-Size League final between Tech United and the Falcons at RoboCup 2022 in Bangkok, Thailand.

The Middle-Size League uses a full-size soccer ball and onboard perception on waist-high wheeled robots. Over the years, it has showcased an enormous amount of progress toward human-speed, human-scale soccer. In recent competitions, the robots have moved briskly around a large field, autonomously developing offensive and defensive strategies and coordinating passes and shots on goal. The typical middle-size robot has the skill of a competent primary-school human, although in this league the robots don’t have to worry about legs. And in some ways, the middle-size robots have advantages over human players—the robots have omnidirectional sensing, wireless communication, and the ability to consistently place very accurate shots thanks to a mechanical ball-launching system.


Humanoid League

The RoboCup Humanoid League, launched in 2002, is critical to meeting our objective of fielding a team of highly skilled humanoid robots by 2050. Bipedal robots are an ongoing challenge, especially when those robots have to interact with full-size soccer balls, balancing on one leg to kick with the other. The humanoids must have humanlike proportions and sensor configurations akin to human perception—which means, among other things, no omnidirectional sensing.


RoboCup 2022 Humanoid AdultSize Soccer Final: NimbRo (Germany) vs. HERoEHS (Korea)

The Adult-Size Humanoid League final between NimbRo and the HERoEHS at RoboCup 2022 in Bangkok, Thailand.

Simulation

Even in the Standard Platform League, hardware can be frustrating, so the RoboCup Simulation League allows teams to work entirely in software. This enables more rapid progress using cutting-edge techniques. My own team, UT Austin Villa, started using hierarchical machine learning to develop skills such as walking and kicking in the Simulation League in 2014, which allowed us to dominate the competition. But in 2022, FC Portugal and Magma Offenburg were able to surpass us with deep reinforcement learning methods.

Other Leagues

While soccer is the ultimate goal of RoboCup, and it motivates research on fundamental topics such as robot vision and mobility, it can be hard to see the practicality in a game. Other RoboCup leagues thus focus on more immediate applications. RoboCup@Home features robots for domestic environments, RoboCup Rescue is for search-and-rescue robots for disaster response, and RoboCup@Work develops robots for industry and logistics tasks.


Robots vs. Humans

European RoboCup 2022 Middle-Size winning team Tech United plays an exhibition game against professional women’s team Vitória SC in Guimarães, Portugal. The women intentionally tied against the robots to end the game in penalties.

At the conclusion of a RoboCup event, there has been a tradition since 2011 of the trustees of the RoboCup Federation playing a friendly game against the winning team of the Middle-Size League. In recent years, the middle-size robots have become surprisingly competitive, able to keep possession of the ball, dribble around the opposing team, and string together passes across the field. The robots may not be ready to take on the world champions quite yet, but the progress has been impressive—in 2022, Tech United Eindhoven played a friendly match against a Portuguese professional women’s team, Vitória SC, and the robots managed to score several goals (after the women took it easy on them).

RoboCup’s Legacy

Compared to 25 years ago, there are now many more robotics competitions to choose from, and applications of AI and robotics are much more widespread. RoboCup inspired many of the other competitions, and it remains the largest such event. Our community is determined to keep pushing the state of the art. The event draws teams from research labs specializing in mechanical engineering, medical robotics, human-robot interaction, learning from demonstration, and many other fields, and there’s no better way to train new students than to encourage them to immerse themselves in RoboCup.

The importance of RoboCup can also be measured beyond the competition itself. One of the most notable successes, stemming from the early years of the competition, was the technology spun off from the small-size league to form the basis of Kiva Systems. The hardware of Kiva’s robot was designed by Cornell’s RoboCup team, led by Raffaello D’Andrea. After helming his team to Small-Size League victories in 1999, 2000, 2002, and 2003 , D’Andrea went on to cofound Kiva Systems. The company, which developed warehouse robots that moved shelves of goods, was acquired by Amazon in 2012 for US $775 million to become the core of Amazon’s warehouse robotics program.

Future of RoboCup

At this point, you may be wondering what the prospects are for achieving RoboCup’s founding goal—enabling a team of autonomous humanoid robots to beat the world’s best human team at a game of soccer on a real, outdoor field by the year 2050. Will soccer go the way of chess, checkers, poker, Gran Turismo, “Jeopardy!”, and other human endeavors and be conquered by AI? Or will the requirements for real-world perception and humanlike speed and agility keep soccer out of reach for robots? This question remains a source of uncertainty and debate within the RoboCup community. Although 27 years is a very long time in technological terms, physical automation tends to be significantly harder and take much longer than purely software-oriented tasks do.

Ultimately, if the community is going to achieve its goals, we will need to address two challenges: building hardware that can move as quickly and easily as people do, and creating software that can outsmart the best human players in real time. Some experts point to existing state-of-the-art humanoid robots as evidence that sufficiently capable hardware is already available. As impressive as they are, however, I don’t think these robots can match the capabilities of the most skilled human athletes just yet. I haven’t seen any evidence that even the best humanoid robots today can dribble a soccer ball and deftly change directions at high speed in the way that a professional soccer player can—especially when factoring in the requirement that for professional players to agree to get on the field with robots, the robots will need to be not too heavy or powerful: They will need to be both skilled and eminently safe.

Regardless of how it turns out, there is no question in my mind that RoboCup is an enduring grand challenge for AI and robotics, as well as a great training ground for the next generation of roboticists. The RoboCup community is thriving, generating new ideas and new engineers and scientists. I’ve been proud to have led the RoboCup organization, and look forward to seeing where it will go from here.



From 2019 to 2022, I had the privilege of serving as president of the RoboCup Federation. RoboCup is an annual international competitive event that merges visionary thinking about how AI and robotics will change the world with practical robot design. Participants spend months solving diverse technical problems to enable their robots to autonomously play soccer, do household chores, or search for disaster victims. And their efforts are in turn enabling fundamental advances in a range of fields, including machine learning, multiagent systems, and human-robot interaction.

Dr Hiroaki Kitano, Japanese artificial intelligence researcher, holding two members of his miniature robot football team.Peter Menzel/Science Source

RoboCup’s original goal, as defined by founding president Hiroaki Kitano, was to enable a team of fully autonomous humanoid robots to beat the best human soccer team in the world on a real, outdoor field by the year 2050. Since the first RoboCup competition in 1997, which featured three leagues—small-size wheeled robots, middle-size wheeled robots, and simulation—the event has expanded to include humanoid robot soccer leagues, as well as other leagues devoted to robots with more immediate practicality. The next RoboCup event takes place in July in Bordeaux, France, where 2,500 humans (and 2,000 robots) from 45 countries are expected to compete.

The Beginning

A history of RoboCup from 1997 to 2011, by Peter Stone and Manuela Veloso.

The first RoboCup, which I attended as a student, was held in 1997 in a small exhibit room at the International Joint Conference on AI (IJCAI) in Nagoya, Japan. The level of competition was, by today’s standards, not very high. However, it’s important to remember that many “roboticists” back then didn’t work with real robots. RoboCup was unusual during its early years in that it forced people to build complete, integrated working systems that could sense, decide, and act.

Small-Size League


Over the years, RoboCup has seen huge improvements in the level of play, often following a pattern of one team making a discovery and dominating the competition for a year or two and then being supplanted by another. For example, in the Small-Size League, in which the robots use a golf ball and external perception and computing, Team FU-Fighters from Freie Universität Berlin introduced some innovations in the early 2000s. They began controlling the ball using a device that spins it backward toward the robot to “dribble.” A second device propelled the ball quickly forward to shoot it. As the first team to come up with this strategy, the FU-Fighters had a big advantage, but other teams soon followed suit.


The Small-Size League final between TIGERs Mannheim and ER-Force at RoboCup 2022 in Bangkok, Thailand.

Standard Platform League

UT Austin Villa, UT Austin’s RoboCup team, describing the RoboCup Standard Platform League in 2008.

While many RoboCup leagues include a hardware design component, some teams prefer to focus more on software. In the Standard Platform League, each team is provided with identical robots, and thus the best combination of algorithms and software engineering wins. The first standard platform for RoboCup was the Aibo, a small robot dog made by Sony. Ultimately, though, the goal of RoboCup is to achieve human-level performance on a bipedal robot, and so the Standard Platform League now uses a small humanoid robot called Nao, made by SoftBank. Rugged and capable, Nao is able to fall over and quickly get up again, a critical skill for soccer-playing humans and robots alike.


The Standard Platform League final between HTWK Robots and B-Human at RoboCup 2022 in Bangkok, Thailand.

Middle-Size League

The Middle-Size League final between Tech United and the Falcons at RoboCup 2022 in Bangkok, Thailand.

The Middle-Size League uses a full-size soccer ball and onboard perception on waist-high wheeled robots. Over the years, it has showcased an enormous amount of progress toward human-speed, human-scale soccer. In recent competitions, the robots have moved briskly around a large field, autonomously developing offensive and defensive strategies and coordinating passes and shots on goal. The typical middle-size robot has the skill of a competent primary-school human, although in this league the robots don’t have to worry about legs. And in some ways, the middle-size robots have advantages over human players—the robots have omnidirectional sensing, wireless communication, and the ability to consistently place very accurate shots thanks to a mechanical ball-launching system.


Humanoid League

The RoboCup Humanoid League, launched in 2002, is critical to meeting our objective of fielding a team of highly skilled humanoid robots by 2050. Bipedal robots are an ongoing challenge, especially when those robots have to interact with full-size soccer balls, balancing on one leg to kick with the other. The humanoids must have humanlike proportions and sensor configurations akin to human perception—which means, among other things, no omnidirectional sensing.


The Adult-Size Humanoid League final between NimbRo and the HERoEHS at RoboCup 2022 in Bangkok, Thailand.

Simulation

Even in the Standard Platform League, hardware can be frustrating, so the RoboCup Simulation League allows teams to work entirely in software. This enables more rapid progress using cutting-edge techniques. My own team, UT Austin Villa, started using hierarchical machine learning to develop skills such as walking and kicking in the Simulation League in 2014, which allowed us to dominate the competition. But in 2022, FC Portugal and Magma Offenburg were able to surpass us with deep reinforcement learning methods.
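The hierarchical idea can be sketched loosely as a high-level policy that selects among separately trained low-level skills. All names and thresholds below are illustrative, not UT Austin Villa’s actual code:

```python
# Hypothetical sketch of hierarchical skill selection in simulated soccer:
# low-level skills (e.g., learned walking or kicking controllers) are trained
# separately; a high-level policy picks one each decision cycle.

SKILLS = {
    "walk_to_ball": lambda state: ("move_toward", state["ball"]),
    "kick_at_goal": lambda state: ("kick_toward", state["goal"]),
}

def high_level_policy(state):
    # Stand-in for a learned selector: kick when the ball is within reach.
    return "kick_at_goal" if state["dist_to_ball"] < 0.5 else "walk_to_ball"

def act(state):
    skill = high_level_policy(state)
    return SKILLS[skill](state)
```

In the real systems both layers are learned, but the structure is the same: the skills decompose the problem so each layer can be trained on a tractable piece.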

Other Leagues

While soccer is the ultimate goal of RoboCup, and it motivates research on fundamental topics such as robot vision and mobility, it can be hard to see the practicality in a game. Other RoboCup leagues thus focus on more immediate applications. RoboCup@Home features robots for domestic environments, RoboCup Rescue is for search-and-rescue robots for disaster response, and RoboCup@Work develops robots for industry and logistics tasks.


Robots vs. Humans

European RoboCup 2022 Middle-Size winning team Tech United plays an exhibition game against professional women’s team Vitória SC in Guimarães, Portugal. The women intentionally played to a tie so the game would be decided on penalties.

At the conclusion of a RoboCup event, there has been a tradition since 2011 of the trustees of the RoboCup Federation playing a friendly game against the winning team of the Middle-Size League. In recent years, the middle-size robots have become surprisingly competitive, able to keep possession of the ball, dribble around the opposing team, and string together passes across the field. The robots may not be ready to take on the world champions quite yet, but the progress has been impressive—in 2022, Tech United Eindhoven played a friendly match against a Portuguese professional women’s team, Vitória SC, and the robots managed to score several goals (after the women took it easy on them).

RoboCup’s Legacy

Compared to 25 years ago, there are now many more robotics competitions to choose from, and applications of AI and robotics are much more widespread. RoboCup inspired many of the other competitions, and it remains the largest such event. Our community is determined to keep pushing the state of the art. The event draws teams from research labs specializing in mechanical engineering, medical robotics, human-robot interaction, learning from demonstration, and many other fields, and there’s no better way to train new students than to encourage them to immerse themselves in RoboCup.

The importance of RoboCup can also be measured beyond the competition itself. One of the most notable successes, stemming from the early years of the competition, was the technology spun off from the Small-Size League to form the basis of Kiva Systems. The hardware of Kiva’s robot was designed by Cornell’s RoboCup team, led by Raffaello D’Andrea. After helming his team to Small-Size League victories in 1999, 2000, 2002, and 2003, D’Andrea went on to cofound Kiva Systems. The company, which developed warehouse robots that moved shelves of goods, was acquired by Amazon in 2012 for US $775 million to become the core of Amazon’s warehouse robotics program.

Future of RoboCup

At this point, you may be wondering what the prospects are for achieving RoboCup’s founding goal—enabling a team of autonomous humanoid robots to beat the world’s best human team at a game of soccer on a real, outdoor field by the year 2050. Will soccer go the way of chess, checkers, poker, Gran Turismo, “Jeopardy!”, and other human endeavors and be conquered by AI? Or will the requirements for real-world perception and humanlike speed and agility keep soccer out of reach for robots? This question remains a source of uncertainty and debate within the RoboCup community. Although 27 years is a very long time in technological terms, physical automation tends to be significantly harder and take much longer than purely software-oriented tasks do.

Ultimately, if the community is going to achieve its goals, we will need to address two challenges: building hardware that can move as quickly and easily as people do, and creating software that can outsmart the best human players in real time. Some experts point to existing state-of-the-art humanoid robots as evidence that sufficiently capable hardware is already available. As impressive as they are, however, I don’t think these robots can match the capabilities of the most skilled human athletes just yet. I haven’t seen any evidence that even the best humanoid robots today can dribble a soccer ball and deftly change directions at high speed the way a professional soccer player can. And for professional players to agree to share a field with robots, the robots cannot be too heavy or powerful: they will need to be both skilled and eminently safe.

Regardless of how it turns out, there is no question in my mind that RoboCup is an enduring grand challenge for AI and robotics, as well as a great training ground for the next generation of roboticists. The RoboCup community is thriving, generating new ideas and new engineers and scientists. I’ve been proud to have led the RoboCup organization, and look forward to seeing where it will go from here.



The Mechanical Turk was a fraud. The chess-playing automaton, dressed in a turban and elaborate Ottoman robes, toured Europe in the closing decades of the 18th century accompanied by its inventor Wolfgang von Kempelen. The Turk wowed Austrian empress Maria Theresa, French emperor Napoleon Bonaparte, and Prussian king Frederick the Great as it defeated some of the great chess players of its day. In reality, though, the automaton was controlled by a human concealed within its cabinetry.

What was the first chess-playing automaton?

Torres Quevedo made his mark in a number of fields, including funiculars, dirigibles, and remote controls, before turning to “thinking” machines.Alamy

A century and a half after von Kempelen’s charade, Spanish engineer Leonardo Torres Quevedo debuted El Ajedrecista (The Chessplayer), a true chess-playing automaton. The machine played a modified endgame against a human opponent. It featured a vertical chessboard with pegs for the chess pieces; a mechanical arm moved the pegs.

Torres Quevedo invented his electromechanical device in 1912 and publicly debuted it at the University of Paris two years later. Although clunky in appearance, the experimental model still managed to create a stir worldwide, including a brief write-up in 1915 in Scientific American.

In El Ajedrecista’s endgame, the machine (white) played a king and a rook against a human’s lone king (black). The program required a fixed starting position for the machine’s king and rook, but the opposing king could be placed on any square in the first six ranks (the horizontal rows, that is) that wouldn’t put the king in danger. The program assumed that the two kings would be on opposite sides of the rank controlled by the rook. Torres Quevedo’s algorithm allowed for 63 moves without capturing the king, well beyond the usual 50-move rule that results in a draw. With these restrictions in place, El Ajedrecista was guaranteed a win.
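Torres Quevedo’s exact rule set isn’t reproduced here, but the flavor of such a fixed rule cascade—mechanical decisions keyed entirely off piece coordinates—can be sketched. This is a deliberately simplified, hypothetical version, not the automaton’s actual algorithm:

```python
# Hypothetical sketch of a fixed rule cascade for the king-and-rook vs. king
# endgame. Squares are (file, rank), each 0-7; the machine (white) uses its
# rook as a barrier while its king advances to force the black king back.
def machine_move(wk, wr, bk):
    """Return the machine's move as (piece, destination_square)."""
    wkf, wkr = wk
    wrf, wrr = wr
    bkf, bkr = bk
    # Rule 1: if the black king attacks the rook, slide the rook away
    # along its rank, to the far side of the board.
    if abs(bkf - wrf) <= 1 and abs(bkr - wrr) <= 1:
        new_file = 0 if bkf > 3 else 7
        return ("R", (new_file, wrr))
    # Rule 2: if the kings are still far apart, advance the white king.
    if abs(wkr - bkr) > 2:
        step = 1 if bkr > wkr else -1
        return ("K", (wkf, wkr + step))
    # Rule 3: otherwise, check with the rook to push the black king back.
    return ("R", (wrf, bkr))
```

The point of the sketch is that no search or lookahead is needed: a handful of coordinate comparisons, realizable in 1912-era electromechanics, suffices to grind out the win.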

In 1920, Torres Quevedo upgraded the appearance and mechanics of his automaton [pictured at top], although not its programming. The new version moved its pieces by way of electromagnets concealed below an ordinary chessboard. A gramophone recording announced jaque al rey (Spanish for “check”) or mate (checkmate). If the human attempted an illegal move, a lightbulb gave a warning signal; after three illegal attempts, the game would shut down.
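The illegal-move protocol described above is itself a tiny state machine, which can be sketched as follows (whether the counter reset after a legal move is my assumption, not documented behavior):

```python
# Toy model of the 1920 automaton's illegal-move protocol: a warning signal
# on each illegal attempt, and a shutdown after the third.
class IllegalMoveGuard:
    def __init__(self, limit=3):
        self.limit = limit
        self.illegal_count = 0
        self.running = True

    def attempt(self, legal):
        if not self.running:
            return "off"
        if legal:
            self.illegal_count = 0  # assumption: legal play clears the count
            return "ok"
        self.illegal_count += 1
        if self.illegal_count >= self.limit:
            self.running = False
            return "shutdown"
        return "warning"
```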

Building a machine that thinks

The first version of the chess automaton, from 1912, featured a vertical chessboard and a mechanical arm to move the pieces.Leonardo Torres Quevedo Museum/Polytechnic University of Madrid

Unlike Wolfgang von Kempelen, Torres Quevedo did not create his chess-playing automaton for the entertainment of the elite or to make money as a showman. The Spanish engineer was interested in building a machine that “thinks”—or at least makes choices from a relatively complex set of relational possibilities. Torres Quevedo wanted to reframe what we mean by thinking. As the 1915 Scientific American article about the chess automaton notes, “There is, of course, no claim that it will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought.”

In 1914, Torres Quevedo laid out his ideas in an article, “Ensayos sobre automática. Su definición. Extensión teórica de sus aplicaciones” (“Essays on Automatics. Its Definition. Theoretical Extent of Its Applications”). In the article, he updated Charles Babbage’s ideas for the analytical engine with the currency of the day: electricity. He proposed machines doing arithmetic using switching circuits and relays, as well as automated machines equipped with sensors that would be able to adjust to their surroundings and carry out tasks. Automatons with feelings were the future, in Torres Quevedo’s view.
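As a modern illustration of the idea—not Torres Quevedo’s design—arithmetic built from switching elements can be sketched by treating each “relay” as a boolean function and chaining full adders into a ripple-carry adder:

```python
# Illustrative sketch: arithmetic from switching logic. Each boolean
# expression stands in for a relay network; chaining full adders gives
# multi-bit addition, as in later relay computers.
def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in
    carry_out = (a and b) or (carry_in and (a ^ b))
    return s, carry_out

def add_bits(x_bits, y_bits):
    """Add two equal-length little-endian bit lists with a ripple carry."""
    carry = False
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out
```

For example, adding 3 (`[True, True]`) and 1 (`[True, False]`) yields `[False, False, True]`, the little-endian bits of 4.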

How far could human collaboration with machines go? Torres Quevedo built his chess player to find out, as he explained in his 1917 book Mis inventos y otras páginas de vulgarización (My inventions and other popular writings). By entrusting machines with tasks previously reserved for human intelligence, he believed that he was freeing humans from a type of servitude or bondage. He was also redefining what was categorized as thought.

Claude Shannon, the information-theory pioneer, later picked up this theme in a 1950 article, “A Chess-Playing Machine,” in Scientific American on whether electronic computers could be said to think. From a behavioral perspective, Shannon argued, a chess-playing computer mimics the thinking process. On the other hand, the machine does only what it has been programmed to do, clearly not thinking outside its set parameters. Torres Quevedo hoped his chess player would shed some light on the matter, but I think he just opened a Pandora’s box of questions.

Why isn’t Leonardo Torres Quevedo known outside Spain?

Despite Torres Quevedo’s clear position in the early history of computing—picking up from Babbage and laying a foundation for artificial intelligence—his name has often been omitted from narratives of the development of the field (at least outside of Spain), much to the dismay of the historians and engineers familiar with his work.

That’s not to say he wasn’t known and respected in his own time. Torres Quevedo was elected a member of the Spanish Royal Academy of Sciences in 1901 and became an associate member of the French Academy of Sciences in 1927. He was also a member of the Spanish Society of Physics and Chemists and the Spanish Royal Academy of Language and an honorary member of the Geneva Society of Physics and Natural History. Plus El Ajedrecista has always had a fan base among chess enthusiasts. Even after Torres Quevedo’s death in 1936, the machine continued to garner attention among the cybernetic set, such as when it defeated Norbert Wiener at an influential conference in Paris in 1951. (To be fair, it defeated everyone, and Wiener was known to be a terrible player.)

One reason Torres Quevedo’s efforts in computing aren’t more widely known might be because the experiments came later in his life, after a very successful career in other engineering fields. In a short biography for Proceedings of the IEEE, Antonio Pérez Yuste and Magdalena Salazar Palma outlined three areas that Torres Quevedo contributed to before his work on the automatons.

Torres Quevedo’s design for the Whirlpool Aero Car, which offers a thrilling ride over Niagara River, debuted in 1916.Wolfgang Kaehler/LightRocket/Getty Images

First came his work, beginning in the 1880s, on funiculars, the most famous of which is the Whirlpool Aero Car. The cable car is suspended over a dramatic gorge on the Niagara River on six interlocking steel cables, connecting two points along the shore half a kilometer apart. It is still in operation today.

His second area of expertise was aeronautics, in which he held patents on a semirigid frame system for dirigible balloons based on an internal frame of flexible cables.

And finally, he invented the Telekine, an early remote control device, which he developed as a way to safely test his airships without risking human life. He started by controlling a simple tricycle using a wireless telegraph transmitter. He then successfully used his Telekine to control boats in the Bilbao estuary. But he abandoned these efforts after the Spanish government denied his request for funding. The Telekine was marked with an IEEE Milestone in 2007.

If you’d like to explore Torres Quevedo’s various inventions, including the second chess-playing automaton, consider visiting the Museo Torres Quevedo, located in the School of Civil Engineering at the Polytechnic University of Madrid. The museum has also developed online exhibits in both Spanish and English.

A more cynical view of why Torres Quevedo’s computer prowess is not widely known may be because he saw no need to commercialize his chess player. Nick Montfort, a professor of digital media at MIT, argues in his book Twisty Little Passages (MIT Press, 2005) that El Ajedrecista was the first computer game, although he concedes that people might not recognize it as such because it predated general-purpose digital computing by decades. Of course, for Torres Quevedo, the chess player existed as a physical manifestation of his ideas and techniques. And no matter how visionary he may have been, he did not foresee the multibillion-dollar computer gaming industry.

The upshot is that, for decades, the English-speaking world mostly overlooked Torres Quevedo, and his work had little direct effect on the development of the modern computer. We are left to imagine an alternate history of how things might have unfolded if his work had been considered more central. Fortunately, a number of scholars are working to tell a more international, and more complete, history of computing. Leonardo Torres Quevedo’s is a name worth inserting back into the historical narrative.

References

I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021). In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982).

Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915).

My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims made by other secondary sources. There is clearly an opportunity for someone with better language skills than mine to do justice to this pioneer in computer history.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.”




References

I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021). In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982).

Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915).

My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims by other secondary sources. There is clearly an opportunity for someone with better language skills than I to do justice to this pioneer in computer history.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.”



The Mechanical Turk was a fraud. The chess-playing automaton, dressed in a turban and elaborate Ottoman robes, toured Europe in the closing decades of the 18th century accompanied by its inventor Wolfgang von Kempelen. The Turk wowed Austrian empress Maria Theresa, French emperor Napoleon Bonaparte, and Prussian king Frederick the Great as it defeated some of the great chess players of its day. In reality, though, the automaton was controlled by a human concealed within its cabinetry.

What was the first chess-playing automaton?

Torres Quevedo made his mark in a number of fields, including funiculars, dirigibles, and remote controls, before turning to “thinking” machines. Alamy

A century and a half after von Kempelen’s charade, Spanish engineer Leonardo Torres Quevedo debuted El Ajedrecista (The Chessplayer), a true chess-playing automaton. The machine played a modified endgame against a human opponent. It featured a vertical chessboard with pegs for the chess pieces; a mechanical arm moved the pegs.

Torres Quevedo invented his electromechanical device in 1912 and publicly debuted it at the University of Paris two years later. Although clunky in appearance, the experimental model still managed to create a stir worldwide, including a brief write-up in 1915 in Scientific American.

In El Ajedrecista’s endgame, the machine (white) played a king and a rook against a human’s lone king (black). The program required a fixed starting position for the machine’s king and rook, but the opposing king could be placed on any square in the first six ranks (the horizontal rows, that is) that wouldn’t put the king in immediate danger. The program assumed that the two kings would be on opposite sides of the rank controlled by the rook. Torres Quevedo’s algorithm could take as many as 63 moves to force checkmate, well beyond the usual 50-move rule that results in a draw. With these restrictions in place, El Ajedrecista was guaranteed a win.
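The setup constraint described above—black’s king anywhere in the first six ranks, but never placed in danger—can be sketched as a small legality check. Note that the white king and rook squares below are illustrative assumptions; the article does not give the machine’s historical fixed starting positions.

```python
# Sketch of El Ajedrecista's starting-position rule: the black king may be
# placed on any square in ranks 1-6 that is not attacked by the rook and not
# adjacent to the white king. The white starting squares are ASSUMED for
# illustration, chosen so the rook's rank (7) separates the two kings, as
# the article describes. Rook blockers can be ignored: only the two kings
# share the board with it, and neither sits on its lines here.

WHITE_KING = ("e", 8)   # assumed
WHITE_ROOK = ("a", 7)   # assumed; the rook fences ranks 1-6 off from rank 8

FILES = "abcdefgh"

def _coords(square):
    file, rank = square
    return FILES.index(file), rank - 1

def attacked_by_rook(square, rook=WHITE_ROOK):
    """A rook attacks along its file and its rank."""
    fx, ry = _coords(square)
    rfx, rry = _coords(rook)
    return (fx == rfx or ry == rry) and square != rook

def adjacent_to_king(square, king=WHITE_KING):
    """Kings may never stand on touching squares."""
    fx, ry = _coords(square)
    kfx, kry = _coords(king)
    return max(abs(fx - kfx), abs(ry - kry)) <= 1 and square != king

def legal_black_king_start(square):
    """True if the black king may start the endgame on this square."""
    _, rank = square
    if not (1 <= rank <= 6):
        return False
    if square in (WHITE_KING, WHITE_ROOK):
        return False
    return not attacked_by_rook(square) and not adjacent_to_king(square)
```

With these assumed white squares, a square like e3 is a legal start, while anything on the rook’s file or above rank 6 is rejected.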

In 1920, Torres Quevedo upgraded the appearance and mechanics of his automaton [pictured at top], although not its programming. The new version moved its pieces by way of electromagnets concealed below an ordinary chessboard. A gramophone recording announced jaque al rey (Spanish for “check”) or mate (checkmate). If the human attempted an illegal move, a lightbulb gave a warning signal; after three illegal attempts, the game would shut down.
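The warning-lamp behavior amounts to a simple retry counter. A minimal sketch, with the move-legality test abstracted to a callback since the machine’s full rules aren’t reproduced here (whether the counter reset after a legal move is not stated in the article; that detail is assumed):

```python
# Sketch of the 1920 machine's illegal-move handling: a lamp warns on each
# illegal attempt, and the game shuts down after three of them.

class Ajedrecista:
    MAX_ILLEGAL = 3

    def __init__(self, is_legal):
        self.is_legal = is_legal   # callback: move -> bool (rules not modeled here)
        self.illegal_count = 0
        self.running = True

    def human_move(self, move):
        """Returns 'accepted', 'lamp' (warning signal), or 'shutdown'."""
        if not self.running:
            return "shutdown"
        if self.is_legal(move):
            self.illegal_count = 0   # assumption: counter resets on a legal move
            return "accepted"
        self.illegal_count += 1
        if self.illegal_count >= self.MAX_ILLEGAL:
            self.running = False
            return "shutdown"
        return "lamp"
```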

Building a machine that thinks

The first version of the chess automaton, from 1912, featured a vertical chessboard and a mechanical arm to move the pieces. Leonardo Torres Quevedo Museum/Polytechnic University of Madrid

Unlike Wolfgang von Kempelen, Torres Quevedo did not create his chess-playing automaton for the entertainment of the elite or to make money as a showman. The Spanish engineer was interested in building a machine that “thinks”—or at least makes choices from a relatively complex set of relational possibilities. Torres Quevedo wanted to reframe what we mean by thinking. As the 1915 Scientific American article about the chess automaton notes, “There is, of course, no claim that it will think or accomplish things where thought is necessary, but its inventor claims that the limits within which thought is really necessary need to be better defined, and that the automaton can do many things that are popularly classed with thought.”

In 1914, Torres Quevedo laid out his ideas in the article “Ensayos sobre automática. Su definición. Extensión teórica de sus aplicaciones” (“Essays on Automatics: Its Definition and the Theoretical Extent of Its Applications”). In the article, he updated Charles Babbage’s ideas for the analytical engine with the currency of the day: electricity. He proposed machines that would do arithmetic using switching circuits and relays, as well as automated machines equipped with sensors that would be able to adjust to their surroundings and carry out tasks. Automatons with feelings were the future, in Torres Quevedo’s view.
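Arithmetic from switching circuits is easy to illustrate: in relay terms, an AND is two contacts in series, an OR is contacts in parallel, and an XOR is a changeover pair, which together can add binary digits. A minimal sketch of that idea—not Torres Quevedo’s own circuit—in boolean operations:

```python
# Sketch of arithmetic built from switching logic, the approach Torres
# Quevedo proposed with electromechanical relays. Each boolean operator
# stands in for a relay contact arrangement; chaining full adders yields
# a ripple-carry adder.

def full_adder(a, b, carry_in):
    """Add three bits; returns (sum_bit, carry_out)."""
    s = a ^ b ^ carry_in                      # XOR: changeover contacts
    carry = (a & b) | (carry_in & (a ^ b))    # AND/OR: series/parallel contacts
    return s, carry

def add_bits(x_bits, y_bits):
    """Ripple-carry addition of two equal-length little-endian bit lists."""
    carry = 0
    out = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 3 ([1, 1, 0] little-endian) + 5 ([1, 0, 1]) = 8 ([0, 0, 0, 1])
```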

How far could human collaboration with machines go? Torres Quevedo built his chess player to find out, as he explained in his 1917 book Mis inventos y otras páginas de vulgarización (My inventions and other popular writings). By entrusting machines with tasks previously reserved for human intelligence, he believed that he was freeing humans from a type of servitude or bondage. He was also redefining what was categorized as thought.

Claude Shannon, the information-theory pioneer, later picked up this theme in a 1950 article, “A Chess-Playing Machine,” in Scientific American on whether electronic computers could be said to think. From a behavioral perspective, Shannon argued, a chess-playing computer mimics the thinking process. On the other hand, the machine does only what it has been programmed to do, clearly not thinking outside its set parameters. Torres Quevedo hoped his chess player would shed some light on the matter, but I think he just opened a Pandora’s box of questions.

Why isn’t Leonardo Torres Quevedo known outside Spain?

Despite Torres Quevedo’s clear position in the early history of computing—picking up from Babbage and laying a foundation for artificial intelligence—his name has often been omitted from narratives of the development of the field (at least outside of Spain), much to the dismay of the historians and engineers familiar with his work.

That’s not to say he wasn’t known and respected in his own time. Torres Quevedo was elected a member of the Spanish Royal Academy of Sciences in 1901 and became an associate member of the French Academy of Sciences in 1927. He was also a member of the Spanish Society of Physics and Chemistry and the Spanish Royal Academy of Language and an honorary member of the Geneva Society of Physics and Natural History. Plus, El Ajedrecista has always had a fan base among chess enthusiasts. Even after Torres Quevedo’s death in 1936, the machine continued to garner attention among the cybernetic set, such as when it defeated Norbert Wiener at an influential conference in Paris in 1951. (To be fair, it defeated everyone, and Wiener was known to be a terrible player.)

One reason Torres Quevedo’s efforts in computing aren’t more widely known may be that they came later in his life, after a very successful career in other engineering fields. In a short biography for Proceedings of the IEEE, Antonio Pérez Yuste and Magdalena Salazar Palma outlined three areas that Torres Quevedo contributed to before his work on the automatons.

Torres Quevedo’s design for the Whirlpool Aero Car, which offers a thrilling ride over the Niagara River, debuted in 1916. Wolfgang Kaehler/LightRocket/Getty Images

First came his work, beginning in the 1880s, on funiculars, the most famous of which is the Whirlpool Aero Car. The cable car hangs from six interlocking steel cables over a dramatic gorge on the Niagara River, connecting two points along the shore half a kilometer apart. It is still in operation today.

His second area of expertise was aeronautics, in which he held patents on a semirigid system for dirigible balloons based on an internal frame of flexible cables.

And finally, he invented the Telekine, an early remote control device, which he developed as a way to safely test his airships without risking human life. He started by controlling a simple tricycle using a wireless telegraph transmitter. He then successfully used his Telekine to control boats in the Bilbao estuary. But he abandoned these efforts after the Spanish government denied his request for funding. The Telekine was marked with an IEEE Milestone in 2007.

If you’d like to explore Torres Quevedo’s various inventions, including the second chess-playing automaton, consider visiting the Museo Torres Quevedo, located in the School of Civil Engineering at the Polytechnic University of Madrid. The museum has also developed online exhibits in both Spanish and English.

A more cynical explanation for why Torres Quevedo’s computing prowess is not widely known is that he saw no need to commercialize his chess player. Nick Montfort, a professor of digital media at MIT, argues in his book Twisty Little Passages (MIT Press, 2005) that El Ajedrecista was the first computer game, although he concedes that people might not recognize it as such because it predated general-purpose digital computing by decades. Of course, for Torres Quevedo, the chess player existed as a physical manifestation of his ideas and techniques. And no matter how visionary he may have been, he did not foresee the multibillion-dollar computer gaming industry.

The upshot is that, for decades, the English-speaking world mostly overlooked Torres Quevedo, and his work had little direct effect on the development of the modern computer. We are left to imagine an alternate history of how things might have unfolded if his work had been considered more central. Fortunately, a number of scholars are working to tell a more international, and more complete, history of computing. Leonardo Torres Quevedo’s is a name worth inserting back into the historical narrative.

References

I first learned about El Ajedrecista while reading the article “Leonardo Torres Quevedo: Pioneer of Computing, Automatics, and Artificial Intelligence” by Francisco González de Posada, Francisco A. González Redondo, and Alfonso Hernando González (IEEE Annals of the History of Computing, July-September 2021). In their introduction, the authors note the minimal English-language scholarship on Torres Quevedo, with the notable exception of Brian Randell’s article “From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush” (IEEE Annals of the History of Computing, October-December 1982).

Although I read Randell’s article after I had drafted my own, I began my research on the chess-playing automaton with the Museo Torres Quevedo’s excellent online exhibit. I then consulted contemporary accounts of the device, such as “Electric Automaton” (Scientific American, 16 May 1914) and “Torres and His Remarkable Automatic Devices” (Scientific American Supplement No. 2079, 6 November 1915).

My reading comprehension of Spanish is not what it should be for true academic scholarship in the field, but I tracked down several of Torres Quevedo’s original books and articles and muddled through translating specific passages to confirm claims by other secondary sources. There is clearly an opportunity for someone with better language skills than mine to do justice to this pioneer in computer history.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the July 2023 print issue as “Computer Chess, Circa 1920.”
