Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TX
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

This single-leg robot is designed to “form a foundation for future bipedal robot development,” but personally, I think it’s perfect as is.

[ KAIST Dynamic Robot Control and Design Lab ]

Selling 17k social robots still amazes me. Aldebaran will be missed.

[ Aldebaran ]

Nice to see some actual challenging shoves as part of biped testing.

[ Under Control Robotics ]

Ground Control made multilegged waves at IEEE’s International Conference on Robotics and Automation 2025 in Atlanta! We competed in the Startup Pitch Competition and demoed our robot at our booth, on NIST standard terrain, and around the convention. We were proud to be a finalist for Best Expo Demo and participate in the Robot Parade.

[ Ground Control Robotics ]

Thanks, Dan!

Humanoid is a UK-based robotics innovation company dedicated to building commercially scalable, reliable and safe robotic solutions for real-world applications.

It’s a nifty bootup screen, I’ll give them that.

[ Humanoid ]

Thanks, Kristina!

Quadrupedal robots have demonstrated remarkable agility and robustness in traversing complex terrains. However, they remain limited in performing object interactions that require sustained contact. In this work, we present LocoTouch, a system that equips quadrupedal robots with tactile sensing to address a challenging task in this category: long-distance transport of unsecured cylindrical objects, which typically requires custom mounting mechanisms to maintain stability.

[ LocoTouch paper ]

Thanks, Changyi!

In this video, Digit is performing tasks autonomously using a whole-body controller for mobile manipulation. This new controller was trained in simulation, enabling Digit to execute tasks while navigating new environments and manipulating objects it has never encountered before.

Not bad, although it’s worth pointing out that those shelves are not representative of any market I’ve ever been to.

[ Agility Robotics ]

It’s always cool to see robots presented as an incidental solution to a problem as opposed to, you know, robots.

The question that you really want answered, though, is “why is there water on the floor?”

[ Boston Dynamics ]

Reinforcement learning (RL) has significantly advanced the control of physics-based and robotic characters that track kinematic reference motion. We propose a multi-objective reinforcement learning framework that trains a single policy conditioned on a set of weights, spanning the Pareto front of reward trade-offs. Within this framework, weights can be selected and tuned after training, significantly speeding up iteration time. We demonstrate how this improved workflow can be used to perform highly dynamic motions with a robot character.

[ Disney Research ]
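As a rough illustration of the weight-conditioned policy idea above (our own PyTorch-style sketch, not Disney's code), the trick is to feed a reward-weight vector into the network alongside the observation, sample many weightings during training, and then treat the weights as a free tuning knob afterward:

```python
# Illustrative sketch (not Disney's code): a policy conditioned on a vector of
# reward weights, so one network can cover many points on the Pareto front.
import torch
import torch.nn as nn

class WeightConditionedPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, num_rewards: int, hidden: int = 256):
        super().__init__()
        # The reward-weight vector is appended to the observation, so the same
        # parameters serve every trade-off between the reward terms.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + num_rewards, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, weights], dim=-1))

obs_dim, act_dim, num_rewards = 48, 12, 3      # made-up dimensions
policy = WeightConditionedPolicy(obs_dim, act_dim, num_rewards)

# During training, weights are sampled from the simplex so the policy sees the
# whole range of trade-offs; after training, they become a tuning knob.
w_train = torch.distributions.Dirichlet(torch.ones(num_rewards)).sample((8,))
actions = policy(torch.randn(8, obs_dim), w_train)

# At deployment, pick a fixed trade-off without retraining.
w_deploy = torch.tensor([[0.6, 0.3, 0.1]])
action = policy(torch.randn(1, obs_dim), w_deploy)
```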

It’s been a week since ICRA 2025, and TRON 1 already misses all the new friends he made!

[ LimX Dynamics ]

ROB 450 in Winter 2025 challenged students to synthesize the knowledge acquired through their Robotics undergraduate courses at the University of Michigan, applying a systematic and iterative design-and-analysis process to a real, open-ended Robotics problem.

[ University of Michigan Robotics ]

What’s The Trick? A talk on human versus current robot learning, given by Chris Atkeson at the Robotics and AI Institute.

[ Robotics and AI Institute (RAI) ]



Take a look around the airport during your travels this summer and you might spot a string of new technologies at every touchpoint: from pre-arrival, bag drop, and security to the moment you board the plane.

In this new world, your face is your boarding pass, your electronic luggage tag transforms itself for each new flight, and gate scanners catch line cutters trying to sneak onto the plane early.

It isn’t the future—it’s now. Each of the technologies to follow is in use at airports around the world today, transforming your journey-before-the-journey.

Virtual queuing speeds up airport security

As you pack the night before your trip, you ponder the age-old travel question: What time should I get to the airport? The right answer requires predicting the length of the security line. But at some airports, you no longer have to guess; in fact, you don’t have to wait in line at all.

Instead, you can book ahead and choose a specific time for your security screening—so you can arrive right before your reserved slot, confident that you’ll be whisked to the front of the line, thanks to Copenhagen Optimization’s Virtual Queuing system.

Copenhagen Optimization’s machine learning models use linear regression, heuristic models, and other techniques to forecast the volume of passenger arrivals based on historical data. The system is integrated with airport programs to access flight schedules and passenger-flow data from boarding-pass scans, and it also takes in data from lidar sensors and cameras at security checkpoints, X-ray luggage scanners, and other areas.

If a given day’s passenger volume ends up differing from historical projections, the platform can use real-time data from these inputs to adjust the Virtual Queuing time slots—and recommend that the airport make changes to security staffing and the number of open lanes. The Virtual Queuing system is constantly adjusting to flatten the passenger arrival curve, tactically redistributing demand across time slots to optimize resources and reduce congestion.
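To make the two ideas above concrete, here is a toy sketch (ours, not Copenhagen Optimization's model): fit a simple regression to historical arrival counts per time slot, then greedily shift surplus demand out of any slot that exceeds screening capacity. The slot sizes, capacity, and data are invented for illustration.

```python
# Toy sketch (not Copenhagen Optimization's model): forecast arrivals per
# 15-minute slot from historical counts, then flatten the curve by pushing
# surplus demand out of slots that exceed screening capacity.
import numpy as np

rng = np.random.default_rng(0)
slots = np.arange(24 * 4)                       # 96 quarter-hour slots in a day
# Fake "historical" arrivals: a morning peak, an evening peak, plus noise.
history = (120 * np.exp(-((slots - 28) / 6) ** 2)
           + 90 * np.exp(-((slots - 70) / 8) ** 2)
           + rng.poisson(5, slots.size))

# Linear regression on smooth time-of-day features, standing in for the
# "linear regression, heuristic models, and other techniques" in the article.
features = [np.ones_like(slots, dtype=float)]
for k in (1, 2, 3):
    features += [np.sin(2 * np.pi * k * slots / 96), np.cos(2 * np.pi * k * slots / 96)]
X = np.column_stack(features)
coef, *_ = np.linalg.lstsq(X, history, rcond=None)
forecast = X @ coef

# Flatten the arrival curve: any slot forecast above lane capacity hands its
# surplus to the next slot (greedy and purely illustrative).
capacity = 80.0
booked = forecast.copy()
for i in range(booked.size - 1):
    surplus = booked[i] - capacity
    if surplus > 0:
        booked[i] = capacity
        booked[i + 1] += surplus
print(f"peak demand before: {forecast.max():.0f}, after: {booked.max():.0f}")
```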

While this system is doing the most, you as a passenger can do the least. Just book a time slot on your airport’s website or app, and get some extra sleep knowing you’ll waltz right up to the security check tomorrow morning.

Electronic bag tags

MCKIBILLO

Checking a bag? Here’s another step you can take care of before you arrive: Skip the old-school paper tags and generate your own electronic Bagtag. This e-ink device (costing about US $80, or €70) looks like a traditional luggage-tag holder, but it can generate a new, paperless tag for each one of your flights.

You provide your booking details through your airline’s app or the Bagtag app, and the Bagtag system then uses application programming interfaces and secure data protocols to retrieve the necessary information from the airline’s system: your name, flight details, the baggage you’re allowed, and the unique barcode that identifies your bag. The app uses this data to generate a digital tag. Hold your phone near your Bagtag, and it will transmit the encrypted tag data via Bluetooth or NFC. Simultaneously, your phone’s NFC antenna powers the battery-free Bagtag device.

On the Bagtag itself, a low-power microcontroller decrypts the tag data and displays the digital tag on the e-ink screen. Once you’re at the airport, the tag can be scanned at the airline’s self-service bag drop or desk, just like a traditional paper tag. The device also contains an RFID chip that’s compatible with the luggage-tracking systems that some airlines are using, allowing your bag to be identified and tracked—even if it takes a different journey than you do. When you arrive at the airport, just drop that checked bag and make your way to the security area.
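Putting the two halves together, here is a rough, hypothetical sketch of that handoff using only Python's standard library. The field names and shared key are placeholders, and a simple HMAC integrity check stands in for Bagtag's actual encryption scheme:

```python
# Hypothetical illustration of the tag handoff (not Bagtag's real protocol):
# the app packages the airline's tag data and protects it, and the tag's
# microcontroller verifies it before drawing the barcode on the e-ink screen.
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-provisioned-at-pairing"   # placeholder key

def build_tag_payload(name: str, flight: str, bag_barcode: str) -> bytes:
    """Phone side: serialize the tag data and append an integrity tag."""
    body = json.dumps({"name": name, "flight": flight,
                       "barcode": bag_barcode}).encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).digest()
    return body + b"." + mac.hex().encode()

def render_tag(payload: bytes) -> dict:
    """Device side: verify the payload, then hand the fields to the display."""
    body, mac_hex = payload.rsplit(b".", 1)
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac_hex.decode()):
        raise ValueError("payload rejected: integrity check failed")
    return json.loads(body)    # fields the e-ink layout engine would draw

payload = build_tag_payload("A. Traveler", "XX123 OSL-CPH", "0123456789")
print(render_tag(payload))
```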

Biometric boarding passes

MCKIBILLO

Over at security, you’ll need your boarding pass and ID. Compared with the old days of printing a physical slip from a kiosk, digital QR code boarding passes are quite handy—but what if you didn’t need anything besides your face? That’s the premise of Idemia Public Security’s biometric boarding-pass technology.

Instead of waiting in a queue for a security agent, you’ll approach a self-service kiosk or check-in point and insert your government-issued identification document, such as a driver’s license or passport. The system uses visible light, infrared, and ultraviolet imaging to analyze the document’s embedded security features and verify its authenticity. Then, computer-vision algorithms locate and extract the image of your face on the ID for identity verification.

Next, it’s time for your close-up. High-resolution cameras within the system capture a live image of your face using 3D and infrared imaging. The system’s antispoofing technology prevents people from trying to trick the system with items like photos, videos, or masks. The technology compares your live image to the one extracted from your ID using facial-recognition algorithms. Each image is then converted into a compact biometric template—a mathematical representation of your facial features—and a similarity score is generated to confirm a match.
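A minimal sketch of that last step, assuming a face-recognition model has already mapped each image to an embedding vector (faked here with a fixed random projection), might look like the following. The 0.8 threshold is invented; real systems tune it on large datasets, and this is not Idemia's algorithm:

```python
# Illustrative only: compare a live capture to the ID photo by reducing both to
# embedding vectors ("biometric templates") and scoring their similarity.
import numpy as np

def face_embedding(image: np.ndarray) -> np.ndarray:
    """Placeholder for a real face-recognition network that maps a face image
    to a fixed-length template; here we just project pixels randomly."""
    rng = np.random.default_rng(42)           # fixed projection for the demo
    projection = rng.standard_normal((128, image.size))
    vec = projection @ image.ravel()
    return vec / np.linalg.norm(vec)

def similarity(template_a: np.ndarray, template_b: np.ndarray) -> float:
    return float(np.dot(template_a, template_b))   # cosine (unit vectors)

id_photo = np.random.rand(64, 64)
live_capture = id_photo + 0.05 * np.random.rand(64, 64)   # same face, new photo

score = similarity(face_embedding(id_photo), face_embedding(live_capture))
MATCH_THRESHOLD = 0.8                          # tuned on real data in practice
print(f"similarity {score:.3f} -> {'match' if score > MATCH_THRESHOLD else 'no match'}")
```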

Finally, the system checks your travel information against secure flight databases to make sure the ticket is valid and that you’re authorized to fly that day. Assuming all checks out, you’re cleared to head to the body scanners—with no biometric data retained by Idemia Public Security’s system.

X-rays that can tell ecstasy from eczema meds

MCKIBILLO

While you pass through your security screening, that luggage you checked is undergoing its own screening—with a major new upgrade that can tell exactly what’s inside.

Traditional scanners use one or a few X-ray sources and work by transmission, measuring the attenuation of the beam as it passes through the bag. These systems create a 2D “shadow” image based on differences in the amount and type of the materials inside. More recently, these systems have begun using computed tomography to scan the bag from all directions and to reconstruct 3D images of the objects inside. But even with CT, harmless objects may look similar to dangerous materials—which can lead to false positives and also require security staff to visually inspect the X-ray images or even bust open your luggage.

By contrast, Smiths Detection’s new X-ray diffraction machines measure the molecular structure of the items inside your bag to identify the exact materials—no human review required.

The machine uses a multifocus X-ray tube to quickly scan a bag from various angles, measuring the way the radiation diffracts while switching the position of the focal spots every few microseconds. Then, it analyzes the diffraction patterns to determine the crystal structure and molecular composition of the objects inside the bag—building a “fingerprint” of each material that can much more finely differentiate threats, like explosives and drugs, from benign items.
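One way to picture the fingerprint matching (a sketch under our own simplifying assumptions, not Smiths Detection's algorithm) is as a normalized cross-correlation between the measured diffraction pattern and a small library of reference patterns; the reference spectra below are synthetic:

```python
# Illustrative sketch: identify a material by correlating its measured X-ray
# diffraction pattern with a library of reference "fingerprints".
import numpy as np

angles = np.linspace(5, 60, 1100)             # scattering-angle axis (degrees)

def pattern(peaks):
    """Synthesize a diffraction pattern as a sum of Gaussian peaks."""
    return sum(h * np.exp(-((angles - c) / 0.3) ** 2) for c, h in peaks)

library = {                                    # made-up reference fingerprints
    "benign: sugar-like": pattern([(18.8, 1.0), (24.7, 0.6), (38.2, 0.3)]),
    "threat: explosive-like": pattern([(13.1, 0.9), (29.9, 1.0), (43.5, 0.5)]),
}

measured = pattern([(13.1, 0.85), (29.9, 1.05), (43.5, 0.45)])
measured = measured + 0.05 * np.random.default_rng(1).standard_normal(angles.size)

def match_score(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))               # normalized cross-correlation

best = max(library, key=lambda name: match_score(measured, library[name]))
print("best match:", best)
```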

The system’s algorithms process this diffraction data and build a 3D spatial image, which allows real-time automated screening without the need for manual visual inspection by a human. After your bag passes through the X-ray diffraction machine without incident, it’s loaded into the cargo hold. Meanwhile, you’ve passed through your own scan at security and are ready to head toward your gate.

Airport shops with no cashiers or checkout lanes

MCKIBILLO

While meandering over to your gate from security, you decide you could use a little pick-me-up. Just down the corridor is a convenience store with snacks, drinks, and other treats—but no cashiers. It’s a contactless shop that uses Just Walk Out technology by Amazon.

As you enter the store with the tap of a credit card or mobile wallet, a scanner reads the card and assigns you a unique session identifier that will let the Just Walk Out system link your actions in the store to your payment. Overhead cameras track you by the top of your head, not your face, as you move through the store.

The Just Walk Out system uses a deep-learning model to follow your movements and detect when you interact with items. In most cases, computer vision can identify a product you pick up simply based on the video feed, but sometimes weight sensors embedded in the shelves provide additional data to determine what you removed. The video and weight data are encoded as tokens, and a neural network processes those tokens in a way similar to how large language models encode text—determining the result of your actions to create a “virtual cart.”

While you shop, the system continuously updates this cart: adding a can of soda when you pick it up, swapping one brand of gum for another if you change your mind, or removing that bag of chips if you put it back on the shelf. Once your shopping is complete, you can indeed just walk out with your soda and gum. The items you take will make up your finalized virtual cart, and the credit card you entered the store with will be charged as usual. (You can look up a receipt, if you want.) With provisions procured, it’s onward to the gate.
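The cart bookkeeping itself is simple once the perception stack has decided what happened; here is a toy sketch (our illustration, not Amazon's API) driven by hypothetical pick and return events tied to a session:

```python
# Toy sketch of the "virtual cart": the perception stack (not shown) emits
# pick/return events tied to a shopper's session; the cart just tallies them.
from collections import Counter

class VirtualCart:
    def __init__(self, session_id: str):
        self.session_id = session_id
        self.items = Counter()

    def apply(self, event: dict) -> None:
        """event: {'action': 'pick' | 'return', 'sku': str} from the vision model."""
        if event["action"] == "pick":
            self.items[event["sku"]] += 1
        elif event["action"] == "return":
            self.items[event["sku"]] -= 1
            if self.items[event["sku"]] <= 0:
                del self.items[event["sku"]]

    def receipt(self) -> dict:
        return dict(self.items)

cart = VirtualCart(session_id="tap-7f3a")      # created when the card is tapped
for ev in [{"action": "pick", "sku": "soda-can"},
           {"action": "pick", "sku": "chips"},
           {"action": "return", "sku": "chips"},     # put back on the shelf
           {"action": "pick", "sku": "gum-mint"}]:
    cart.apply(ev)
print(cart.receipt())    # charged to the tapped card on exit: soda + gum
```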

Airport-cleaning robots

MCKIBILLO

As you amble toward the gate with your luggage and snacks, you promptly spill that soda you just bought. Cleanup in Terminal C! Along comes Avidbots’ Neo, a fully autonomous floor-scrubbing robot designed to clean commercial spaces like airports with minimal human intervention.

When a Neo is first delivered to the airport, the robot performs a comprehensive scan of the various areas it will be cleaning using lidar and 3D depth cameras. Avidbots software processes the data to create a detailed map of the environment, including walls and other obstacles, and this serves as the foundation for Neo’s cleaning plans and navigation.

Neo’s human overlords can use a touchscreen on the robot to direct it to the area that needs cleaning—either as part of scheduled upkeep, or when someone (ahem) spills their soda. The robot springs into action, and as it moves, it continuously locates itself within its map and plans its movements using data from wheel encoders, inertial measurement units, and a gyroscope. Neo also updates its map and adjusts its path in real time by using the lidar and depth cameras to detect any changes from its initial mapping, such as a translocated trash can or perambulating passengers.
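The dead-reckoning part of that localization can be sketched compactly: wheel encoders report how far each wheel rolled, the gyro reports the change in heading, and the pose estimate is integrated step by step. This is a generic differential-drive update with made-up numbers, not Avidbots' software, and it omits the lidar corrections mentioned above:

```python
# Illustrative dead-reckoning update for a differential-drive base: wheel
# encoders give distance traveled, the gyro gives heading change, and the pose
# estimate is integrated step by step (lidar corrections are omitted).
import math

def update_pose(x, y, theta, d_left, d_right, gyro_dtheta):
    d_center = 0.5 * (d_left + d_right)        # average wheel travel (meters)
    theta_new = theta + gyro_dtheta            # trust the gyro for heading
    x += d_center * math.cos(theta + 0.5 * gyro_dtheta)
    y += d_center * math.sin(theta + 0.5 * gyro_dtheta)
    return x, y, theta_new

pose = (0.0, 0.0, 0.0)
# Made-up encoder/gyro readings for a gentle left curve.
for d_l, d_r, dth in [(0.10, 0.12, 0.02)] * 20:
    pose = update_pose(*pose, d_l, d_r, dth)
print(f"estimated pose: x={pose[0]:.2f} m, y={pose[1]:.2f} m, heading={pose[2]:.2f} rad")
```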

Then comes the scrubbing. Neo’s software plans the optimal path for cleaning a given area at that moment, adjusting the robot’s speed and steering as it moves along. A water-delivery system pumps and controls the flow of cleaning solution to the motorized brushes, whose speed and pressure can also be adjusted based on the surface the robot is cleaning. A powerful vacuum system collects the dirty water, and a flexible squeegee prevents slippery floors from being left behind.

While the robot’s various sensors and planning algorithms continuously detect and avoid obstacles, any physical contact with the robot’s bumpers triggers an emergency stop. And if Neo finds itself in a situation it’s just not sure how to handle, the robot will stop and call for assistance from a human operator, who can review sensor data and camera feeds remotely to help it along.
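That safety behavior maps naturally onto a small supervisory state machine; a hedged sketch (ours, not Avidbots' code) might look like this, with bumper contact always winning and low planner confidence escalating to a remote operator:

```python
# Hedged sketch of the supervisory logic described above (not Avidbots' code):
# bumper contact triggers an immediate stop; a "stuck" robot pauses and asks a
# remote operator for help instead of pushing through.
from enum import Enum, auto

class Mode(Enum):
    CLEANING = auto()
    EMERGENCY_STOP = auto()
    AWAITING_OPERATOR = auto()

def supervise(mode: Mode, bumper_pressed: bool, planner_confident: bool) -> Mode:
    if bumper_pressed:
        return Mode.EMERGENCY_STOP          # physical contact always wins
    if mode is Mode.CLEANING and not planner_confident:
        return Mode.AWAITING_OPERATOR       # stop and call for remote assistance
    return mode                             # operator reset not modeled here

mode = Mode.CLEANING
for bumper, confident in [(False, True), (False, False), (False, True)]:
    mode = supervise(mode, bumper, confident)
    print(mode.name)
```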

“Wrong group” plane-boarding alarm

MCKIBILLO

Your airport journey is coming to an end, and your real journey is about to begin. As you wait at the gate, you notice a fair number of your fellow passengers hovering to board even before the agent has made any announcements. And when boarding does begin, a surprising number of people hop in line. Could all these people really be in boarding groups 1 and 2? you wonder.

If they’re not…they’ll get called out. American Airlines’ new boarding technology stops those pesky passengers who try to join the wrong boarding group and sneak onto the plane early.

If one such passenger approaches the gate before their assigned group has been called, scanning their boarding pass will trigger an audible alert—notifying the airline crew, and everyone else for that matter. The passenger will be politely asked to wait to board. As they slink back into line, try not to look too smug. After all, it’s been a remarkably easy, tech-assisted journey through the airport today.
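Logically, the gate check reduces to comparing the group on the scanned pass against the groups that have been called so far; the few lines below are our illustration, not American Airlines' gate software:

```python
# Trivial sketch of the gate check (not American Airlines' software): compare
# the group on the scanned pass with the groups that have been called.
def scan_boarding_pass(pass_group: int, groups_called: set[int]) -> str:
    if pass_group in groups_called:
        return "accepted"
    return "ALERT: group not yet boarding, please wait"   # the audible alert

groups_called = {1, 2}
print(scan_boarding_pass(2, groups_called))   # accepted
print(scan_boarding_pass(6, groups_called))   # ALERT: group not yet boarding...
```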



The robots that share our public spaces today are so demure. Social robots and service robots aim to avoid offense, erring toward polite airs, positive emotions, and obedience. In some ways, this makes sense—would you really want to have a yelling match with a delivery robot in a hotel? Probably not, even if you’re in New York City and trying to absorb the local culture.

In other ways, this passive social robot design aligns with paternalistic standards that link assistance to subservience. Thoughtlessly following such outdated social norms in robot design may be ill-advised, since it can help to reinforce outdated or harmful ideas such as restricting people’s rights and reflecting only the needs of majority-identity users.

In my robotics lab at Oregon State University, we work with a playful spirit and enjoy challenging the problematic norms that are entrenched within “polite” interactions and social roles. So we decided to experiment with robots that use foul language around humans. After all, many people are using foul language more than ever in 2025. Why not let robots have a chance, too?

Why and How to Study Cursing Robots

Societal standards in the United States suggest that cursing robots would likely rub people the wrong way in most contexts, as swearing has a predominantly negative connotation. Although some past research shows that cursing can enhance team cohesion and elicit humor, certain members of society (such as women) are often expected to avoid risking offense through profanity. We wondered whether cursing robots would be viewed negatively, or if they might perhaps offer benefits in certain situations.

We decided to study cursing robots in the context of responding to mistakes. Past work in human-robot interaction has already shown that responding to error (rather than ignoring it) can help robots be perceived more positively in human-populated spaces, especially in the case of personal and service robots. And one study found that compared to other faux pas, foul language is more forgivable in a robot.

With this past work in mind, we generated videos with three common types of robot failure: bumping into a table, dropping an object, and failing to grasp an object. We crossed these situations with three types of responses from the robot: no verbal reaction, a non-expletive verbal declaration, and an expletive verbal declaration. We then asked people to rate the robots on things like competence, discomfort, and likability, using standard scales in an online survey.
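The crossed design is just the Cartesian product of the failure types and response styles, yielding nine video conditions; a tiny sketch (condition labels are ours):

```python
# The 3 x 3 crossed design described above: every failure type paired with
# every response style, giving nine video conditions to rate.
from itertools import product

failures = ["bump into table", "drop object", "fail to grasp"]
responses = ["no verbal reaction", "non-expletive declaration", "expletive declaration"]

for i, (failure, response) in enumerate(product(failures, responses), start=1):
    print(f"condition {i}: {failure} + {response}")
```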

What If Robots Cursed? These Videos Helped Us Learn How People Feel about Profane Robots. Video: Naomi Fitter

What People Thought of Our Cursing Robots

On the whole, we were surprised by how acceptable swearing seemed to the study participants, especially within an initial group of Oregon State University students, but also among the general public. Cursing had no negative impact, and even some positive impacts, among the college students after we removed one curse with religious connotations (god***it), which was received more negatively than the other cuss words.

In fact, university participants rated swearing robots as the most socially close and most humorous, and rated non-expletive and expletive robot reactions equivalent on social warmth, competence, discomfort, anthropomorphism, and likability scales. The general public judged non-profane and profane robots as equivalent on most scales, although expletive reactions were deemed most discomforting and non-expletive responses seemed most likable. We believe that the university students were slightly more accepting of cursing robots because of the campus’s progressive culture, where cursing is considered a peccadillo.

Since experiments run solely in an online setting do not always represent real-life interactions well, we also conducted a final replication study in person with a robot that made errors while distributing goodie bags to campus community members at Oregon State, which reinforced our prior results.

Humans React to a Cursing Robot in the Wild. Video: Naomi Fitter

We have submitted this work, which represents a well-designed series of empirical experiments with interesting results and replications along the way, to several different journals and conferences. Despite consistently enthusiastic reviewer comments, no editors have yet accepted our work for publication—it seems to be the type of paper that editors are nervous to touch. Currently, the work is under review for a fourth time, for possible inclusion in the 2025 IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), in a paper titled “Oh F**k! How Do People Feel About Robots That Leverage Profanity?”

Give Cursing Robots a Chance

Based on our results, we think cursing robots deserve a chance! Our findings show that swearing robots would typically have little downside and some upside, especially in open-minded spaces such as university campuses. Even for the general public, reactions to errors with profanity yielded much less distaste than we expected. Our data showed that people cared more about whether robots acknowledged their error at all than whether or not they swore.

People do have some reservations about cursing robots, especially when it comes to comfort and likability, so thoughtfulness may be required to apply curse words at the right time. For example, just as humans do, robots should likely hold back their swear words around children and be more careful in settings that typically demand cleaner language. Robot practitioners might also consider surveying individual users about profanity acceptance as they set up new technology in personal settings—rather than letting robotic systems learn the hard way, perhaps alienating users in the process.

As more robots enter our day-to-day spaces, they are bound to make mistakes. How they react to these errors is important. Fundamentally, our work shows that people prefer robots that notice when a mistake has occurred and react to this error in a relatable way. And it seems that a range of styles in the response itself, from the profane to the mundane, can work well. So we invite designers to give cursing robots a chance!



As a mere earthling, I remember watching in fascination as Sojourner sent back photos of the Martian surface during the summer of 1997. I was not alone. The servers at NASA’s Jet Propulsion Lab slowed to a crawl when they got more than 47 million hits (a record number!) from people attempting to download those early images of the Red Planet. To be fair, it was the late 1990s, the Internet was still young, and most people were using dial-up modems. By the end of the 83-day mission, Sojourner had sent back 550 photos and performed more than 15 chemical analyses of Martian rocks and soil.

Sojourner, of course, remains on Mars. Pictured here is Marie Curie, its twin. Functionally identical, either one of the rovers could have made the voyage to Mars, but one of them was bound to become the famous face of the mission, while the other was destined to be left behind in obscurity. Did I write this piece because I feel a little bad for Marie Curie? Maybe. But it also gave me a chance to revisit this pioneering Mars mission, which established that robots could effectively explore the surface of planets and captivate the public imagination.

Sojourner’s sojourn on Mars

On 4 July 1997, the Mars Pathfinder parachuted through the Martian atmosphere and bounced about 15 times on glorified airbags before finally coming to a rest. The lander, renamed the Carl Sagan Memorial Station, carried precious cargo stowed inside. The next day, after the airbags retracted, the solar-powered Sojourner eased its way down the ramp, becoming the first human-made vehicle to roll around on the surface of another planet. (Mars wasn’t the first extraterrestrial body to host a rover, though. The Soviet Lunokhod rovers conducted two successful missions on the moon, in 1970 and 1973. The Soviets had also landed a rover on Mars back in 1971, but communication was lost before its PROP-M rover ever deployed.)

This giant sandbox at JPL provided Marie Curie with an approximation of Martian terrain. Mike Nelson/AFP/Getty Images

The six-wheeled, 10.6-kilogram, microwave-oven-size Sojourner was equipped with three low-resolution cameras (two on the front for black-and-white images and a color camera on the rear), a laser hazard–avoidance system, an alpha-proton X-ray spectrometer, experiments for testing wheel abrasion and material adherence, and several accelerometers. The robot also demonstrated the value of the six-wheeled “rocker-bogie” suspension system that became NASA’s go-to design for all later Mars rovers. Sojourner never roamed more than about 12 meters from the lander due to the limited range of its radio.

Pathfinder had landed in Ares Vallis, a presumed ancient floodplain chosen for the wide variety of rocks present there. Scientists hoped to confirm the past existence of water on the surface of Mars. Sojourner did discover rounded pebbles that suggested running water, and later missions confirmed it.

A highlight of Sojourner’s 83-day mission on Mars was its encounter with a rock nicknamed Barnacle Bill [to the rover’s left]. JPL/NASA

As its first act of exploration, Sojourner rolled forward 36 centimeters and encountered a rock, dubbed Barnacle Bill due to its rough surface. The rover spent about 10 hours analyzing the rock, using its spectrometer to determine the elemental composition. Over the next few weeks, while the lander collected atmospheric information and took photos, the rover studied rocks in detail and tested the Martian soil.

Marie Curie’s sojourn…in a JPL sandbox

Meanwhile, back on Earth, engineers at JPL used Marie Curie to mimic Sojourner’s movements in a Mars-like setting. During the original design and testing of the rovers, the team had set up giant sandboxes, each holding thousands of kilograms of playground sand, in the Space Flight Operations Facility at JPL. They exhaustively practiced the remote operation of Sojourner, including working with an 11-minute one-way delay in communications between Mars and Earth. (The actual delay can vary from 7 to 20 minutes.) Even after Sojourner landed, Marie Curie continued to help them strategize.
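
For a sense of where those numbers come from, here is a quick back-of-the-envelope calculation: the one-way delay is simply the Earth–Mars distance divided by the speed of light. The distances in this sketch are illustrative round numbers, not mission data.

```python
# Back-of-the-envelope check of the delays quoted above: the one-way delay is the
# Earth-Mars distance divided by the speed of light. The distances here are
# illustrative round numbers, not mission data.
C = 299_792_458        # speed of light, m/s
AU = 1.495978707e11    # astronomical unit, m

def one_way_delay_minutes(distance_au: float) -> float:
    return distance_au * AU / C / 60.0

for d_au in (0.8, 1.3, 2.5):
    print(f"{d_au:.1f} AU -> {one_way_delay_minutes(d_au):.1f} minutes one way")
# Roughly 6.7, 10.8, and 20.8 minutes, consistent with the 7-to-20-minute range
# and the 11-minute figure used in rehearsals.
```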

Initially, Sojourner was remotely operated from Earth, which was tricky given the lengthy communication delay. Mike Nelson/AFP/Getty Images

During its first few days on Mars, Sojourner was maneuvered by an Earth-based operator wearing 3D goggles and using a funky input device called a Spaceball 2003. Images pieced together from both the lander and the rover guided the operator. It was like a very, very slow video game—the rover sometimes moved only a few centimeters a day. NASA then turned on Sojourner’s hazard-avoidance system, which allowed the rover some autonomy to explore its world. A human would suggest a path for that day’s exploration, and then the rover had to autonomously avoid any obstacles in its way, such as a big rock, a cliff, or a steep slope.
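
The sketch below is a toy illustration of that operating mode: a hypothetical rover advances in small steps toward an operator-suggested goal while steering around known hazards. It is meant only to convey the idea; it is not Sojourner’s actual flight software.

```python
# A toy illustration of the operating mode described above: a hypothetical rover
# advances in ~10-centimeter steps toward an operator-suggested goal, turning
# away whenever a known hazard lies too close to its next step. This is only a
# sketch of the concept, not Sojourner's actual flight software.
import math
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def blocked(pose: Point, heading: float, hazards, clearance: float = 0.3) -> bool:
    """Would the next 10 cm step bring the rover within `clearance` meters of a hazard?"""
    nxt = Point(pose.x + 0.1 * math.cos(heading), pose.y + 0.1 * math.sin(heading))
    return any(math.hypot(h.x - nxt.x, h.y - nxt.y) < clearance for h in hazards)

def drive_toward(goal: Point, pose: Point, hazards, max_steps: int = 200) -> Point:
    """Step toward the suggested goal, locally detouring around known hazards."""
    for _ in range(max_steps):
        heading = math.atan2(goal.y - pose.y, goal.x - pose.x)
        for _ in range(18):              # try up to a full turn to find a clear heading
            if not blocked(pose, heading, hazards):
                break
            heading += math.radians(20)
        pose = Point(pose.x + 0.1 * math.cos(heading), pose.y + 0.1 * math.sin(heading))
        if math.hypot(goal.x - pose.x, goal.y - pose.y) < 0.1:
            break
    return pose

print(drive_toward(Point(2.0, 1.0), Point(0.0, 0.0), hazards=[Point(1.0, 0.5)]))
```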

JPL designed Sojourner to operate for a week. But the little rover that could kept chugging along for 83 Martian days before NASA finally lost contact, on 7 October 1997. The lander had conked out on 27 September. In all, the mission collected 1.2 gigabytes of data (which at the time was a lot) and sent back 10,000 images of the planet’s surface.

NASA held on to Marie Curie with the hopes of sending it on another mission to Mars. For a while, it was slated to be part of the Mars 2001 set of missions, but that didn’t happen. In 2015, JPL transferred the rover to the Smithsonian’s National Air and Space Museum.

When NASA Embraced Faster, Better, Cheaper

The Pathfinder mission was the second one in NASA administrator Daniel S. Goldin’s Discovery Program, which embodied his “faster, better, cheaper” philosophy of making NASA more nimble and efficient. (The first Discovery mission was to the asteroid Eros.) In the financial climate of the early 1990s, the space agency couldn’t risk a billion-dollar loss if a major mission failed. Goldin opted for smaller projects; the Pathfinder mission’s overall budget, including flight and operations, was capped at US $300 million.

In his 2014 book Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus), science writer Rod Pyle interviews Rob Manning, chief engineer for the Pathfinder mission and subsequent Mars rovers. Manning recalled that one of the best things about the mission was its relatively minimal requirements. The team was responsible for landing on Mars, delivering the rover, and transmitting images—technically challenging, to be sure, but beyond that the team had no constraints.

Sojourner was succeeded by the rovers Spirit, Opportunity, and Curiosity. Shown here are four mission spares, including Marie Curie [foreground]. JPL-Caltech/NASA

The real mission was to prove to Congress and the American public that NASA could do groundbreaking work more efficiently. Behind the scenes, there was a little bit of accounting magic happening, with the “faster, better, cheaper” missions often being silently underwritten by larger, older projects. For example, the radioisotope heater units that kept Sojourner’s electronics warm enough to operate were leftover spares from the Galileo mission to Jupiter, so they were “free.”

Not only was the Pathfinder mission successful, but it also captured the hearts of Americans and reinvigorated interest in exploring Mars. In the process, it laid the foundation for the future missions that allowed the rovers Spirit, Opportunity, and Curiosity (which, incredibly, is still operating nearly 13 years after it landed) to explore even more of the Red Planet.

How the rovers Sojourner and Marie Curie got their names

To name its first Mars rovers, NASA launched a student contest in March 1994, with the specific guidance of choosing a “heroine.” Entry essays were judged on their quality and creativity, the appropriateness of the name for a rover, and the student’s knowledge of the woman to be honored as well as the mission’s goals. Students from all over the world entered.

Twelve-year-old Valerie Ambroise of Bridgeport, Conn., won for her essay on Sojourner Truth, while 18-year-old Deepti Rohatgi of Rockville, Md., came in second for hers on Marie Curie. Truth was a Black woman born into slavery at the end of the 18th century. She escaped with her infant daughter and two years later won freedom for her son through legal action. She became a vocal advocate for civil rights, women’s rights, and alcohol temperance. Curie was a Polish-French physicist and chemist famous for her studies of radioactivity, a term she coined. She was the first woman to win a Nobel Prize, as well as the first person to win a second Nobel.

NASA subsequently honored several other women by naming missions and facilities after them. One of the last women to be so honored was Nancy Grace Roman, the space agency’s first chief of astronomy. In May 2020, NASA announced it would name the Wide Field Infrared Survey Telescope after Roman; the space telescope is set to launch as early as October 2026, although the Trump administration has repeatedly said it wants to cancel the project.

These days, NASA tries to avoid naming its major projects after people. It quietly changed its naming policy in December 2022 after allegations came to light that James Webb, for whom the James Webb Space Telescope is named, had fired LGBTQ+ employees at NASA and, before that, the State Department. A NASA investigation couldn’t substantiate the allegations, and so the telescope retained Webb’s name. But the bar is now much higher for NASA projects to memorialize anyone, deserving or otherwise. (The agency did allow the hopping lunar robot IM-2 Micro Nova Hopper, built by Intuitive Machines, to be named for computer-software pioneer Grace Hopper.)

And so Marie Curie and Sojourner will remain part of a rarefied clique. Sojourner, inducted into the Robot Hall of Fame in 2003, will always be the celebrity of the pair. And Marie Curie will always remain on the sidelines. But think about it this way: Marie Curie is now on exhibit at one of the most popular museums in the world, where millions of visitors can see the rover up close. That’s not too shabby a legacy either.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the June 2025 print issue.

References

Curator Matthew Shindell of the National Air and Space Museum first suggested I feature Marie Curie. I found additional information from the museum’s collections website, an article by David Kindy in Smithsonian magazine, and the book After Sputnik: 50 Years of the Space Age (Smithsonian Books/HarperCollins, 2007) by Smithsonian curator Martin Collins.

NASA has numerous resources documenting the Mars Pathfinder mission, such as the mission website, fact sheet, and many lovely photos (including some of Barnacle Bill and a composite of Marie Curie during a prelaunch test).

Curiosity: An Inside Look at the Mars Rover Mission and the People Who Made It Happen (Prometheus, 2014) by Rod Pyle and Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hyperion, 2005) by planetary scientist Steve Squyres are both about later Mars missions and their rovers, but they include foundational information about Sojourner.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TXRSS 2025: 21–25 June 2025, LOS ANGELESETH Robotics Summer School: 21–27 June 2025, GENEVAIAS 2025: 30 June–4 July 2025, GENOA, ITALYICRES 2025: 3–4 July 2025, PORTO, PORTUGALIEEE World Haptics: 8–11 July 2025, SUWON, KOREAIFAC Symposium on Robotics: 15–18 July 2025, PARISRoboCup 2025: 15–21 July 2025, BAHIA, BRAZILRO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDSCLAWAR 2025: 5–7 September 2025, SHENZHENCoRL 2025: 27–30 September 2025, SEOULIEEE Humanoids: 30 September–2 October 2025, SEOULWorld Robot Summit: 10–12 October 2025, OSAKA, JAPANIROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

For a humanoid robot to be successful and generalizable in a factory, warehouse, or even at home, it needs a comprehensive understanding of the world around it—both the shape and the context of the objects and environments the robot interacts with. To do those tasks with agility and adaptability, Atlas needs an equally agile and adaptable perception system.

[Boston Dynamics]

What happens when a bipedal robot is placed in the back of a moving cargo truck without any support? LimX Dynamics explored this idea in a real-world test. During the test, TRON 1 was positioned in the compartment of a medium-sized truck. The vehicle carried out a series of demanding maneuvers—sudden stops, rapid acceleration, sharp turns, and lane changes. With no external support, TRON 1 had to rely entirely on its onboard control system to stay upright, presenting a real challenge for dynamic stability.

[LimX Dynamics]

Thanks, Jinyan!

We present a quiet, smooth-walking controller for quadruped guide robots, addressing key challenges for blind and low-vision (BLV) users. Unlike conventional controllers, which produce distracting noise and jerky motion, ours enables slow, stable, and human-speed walking—even on stairs. Through interviews and user studies with BLV individuals, we show that our controller reduces noise by half and significantly improves user acceptance, making quadruped robots a more viable mobility aid.

[University of Massachusetts Amherst]

Thanks, Julia!

RIVR, the leader in physical AI and robotics, is partnering with Veho to pilot our delivery robots in the heart of Austin, Texas. Designed to solve the “last-100-yard” challenge, our wheeled-legged robots navigate stairs, gates, and real-world terrain to deliver parcels directly to the doorstep—working alongside human drivers, not replacing them.

[RIVR]

We will have more on this robot shortly, but for now, this is all you need to know.

[Pintobotics]

Some pretty awesome quadruped parkour here—haven’t seen the wall running before.

[Paper] via [Science Robotics]

This is fun, and also useful, because it’s all about recovering from unpredictable and forceful impacts.

What is that move at 0:06, though?! Wow.

[Unitree]

Maybe an option for all of those social robots that are now not social?

[RoboHearts]

Oh, good, another robot I want nowhere near me.

[SDU Biorobotics Lab, University of Southern Denmark]

While this “has become the first humanoid robot to skillfully use chopsticks,” I’m pretty skeptical of the implied autonomy. Also, those chopsticks are cheaters.

[ROBOTERA]

Looks like Westwood Robotics had a fun time at ICRA!

[Westwood Robotics]

Tessa Lau, CEO and co-founder of Dusty Robotics, delivered a plenary session (keynote) at the 2025 IEEE International Conference on Robotics & Automation (ICRA) in May 2025.

[Dusty Robotics]



As drones evolve into critical agents across defense, disaster response, and infrastructure inspection, they must become more adaptive, secure, and resilient. Traditional AI methods fall short in real-world unpredictability. This whitepaper from the Technology Innovation Institute (TII) explores how Embodied AI – AI that integrates perception, action, memory, and learning in dynamic environments, can revolutionize drone operations. Drawing from innovations in GenAI, Physical AI, and zero-trust frameworks, TII outlines a future where drones can perceive threats, adapt to change, and collaborate safely in real time. The result: smarter, safer, and more secure autonomous aerial systems.

Download this free whitepaper now!



Less than three years ago, these were bare fields in humble Ellabell, Georgia. Today, the vast Hyundai Motor Group Metaplant is exactly what people imagine when they talk about the future of EV and automobile manufacturing in America.

I’ve driven the 2026 Hyundai Ioniq9 here from nearby Savannah, a striking three-row electric SUV with everything it takes to succeed in today’s market: up to 530 kilometers (335 miles) of efficient driving range, the latest features and tech, and a native NACS connector that lets owners—finally—hook into Tesla Superchargers with streamlined Plug and Charge ease.

The success of the Ioniq9 and popular Ioniq5 crossover is deeply intertwined with the US $7.6 billion Metaplant, whose inaugural 2025 Ioniq5 rolled off its assembly line in October. That includes the Ioniq models’ full eligibility for $7,500 consumer tax credits for U.S.-built EVs with North American batteries, although the credits are on the Trump administration’s chopping block. Still, the factory gives Hyundai a bulwark and some breathing room against potential tariffs and puts the South Korean automaker ahead of many rivals.

America’s Largest EV Plant

With 11 cavernous buildings and a massive 697,000 square meters (7.5 million square feet) of space, the Metaplant is set to become America’s largest dedicated plant for EVs and hybrids, with capacity for 500,000 Hyundai, Kia, and Genesis models per year. (Tesla’s Texas Gigafactory can produce 375,000.) Company executives say this is North America’s most heavily automated factory, bar none, a showcase for AI and robotic tech.

The factory is also environmentally friendly, as I see when I roll in: “Meta Pros,” as Hyundai calls its workers, can park in nearly 1,900 spaces beneath solar roofs that shield them from the baking Georgia sun and supply up to 5 percent of the plant’s electricity. The automaker has a target of obtaining 100 percent of its energy from renewable sources. Its efforts also include Hyundai’s Xcient trucks, the world’s first commercialized hydrogen fuel-cell semis. A fleet of 21 of the trucks hauls parts here from area suppliers, taking advantage of 400-kilometer driving ranges with zero tailpipe emissions. The bulk of finished vehicles are shipped by rail rather than truck, trimming fossil-fuel emissions and the automaker’s carbon footprint.

At the docks, some of the plant’s 850 robots unload parts from the hydrogen trucks. About 300 automated guided vehicles, or AGVs, glide around the factory with no tracks required, smartly avoiding human workers. As part of an AI-based procurement and logistics system, the AGVs automatically allocate and ferry parts to their proper work stations for just-in-time delivery, saving space, time, and money otherwise used to stockpile parts.

“They’re delivering the right parts to the right station at the right time, so you’re no longer relying on people to make decisions,” says Jerry Roach, senior manager of general assembly.
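
As a deliberately simplified illustration of that kind of allocation problem, the following sketch greedily assigns each pending parts delivery to the nearest idle AGV. The station names and coordinates are invented, and the plant’s actual AI-based logistics system is of course far more sophisticated.

```python
# A deliberately simplified sketch of that kind of allocation problem: greedily
# assign each pending parts delivery to the nearest idle AGV. Station names and
# coordinates are invented; the plant's AI-based logistics system is far more
# sophisticated than this.
import math

agvs = {"agv_01": (0.0, 0.0), "agv_02": (50.0, 10.0), "agv_03": (20.0, 40.0)}  # x, y in meters
deliveries = [("door panels", (45.0, 12.0)),
              ("battery packs", (5.0, 2.0)),
              ("seats", (22.0, 38.0))]

def assign_deliveries(agvs: dict, deliveries: list) -> dict:
    """One-shot greedy assignment: each delivery goes to the closest still-idle AGV."""
    idle = dict(agvs)
    plan = {}
    for part, station in deliveries:
        agv_id = min(idle, key=lambda a: math.dist(idle[a], station))
        plan[agv_id] = (part, station)
        del idle[agv_id]                 # that AGV is now busy
    return plan

print(assign_deliveries(agvs, deliveries))
```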

The building blocks of a modern unibody car chassis, called “bodies in white,” are welded by an army of 475 robots at Hyundai’s new plant. Hyundai

I’ve seen AGVs in action around the world, but the Metaplant shows me a new trick: A pair of sled-like AGVs slide below these electric Hyundais as they roll off the line, grab the cars by their wheels, hoist them, and autonomously ferry the finished vehicles to a parking area, with no need for a human driver.

Robotic Innovations in Hyundai Factories

Some companies have strict policies about pets at work. Here, Spots—robotic quadrupeds designed by Hyundai-owned Boston Dynamics—use 360-degree vision and “athletic intelligence” to sniff out potential defects on car welds. Those four-legged friends may soon have a biped partner: Atlas, the humanoid robots from Boston Dynamics whose breathtaking physical skills—including crawling, cartwheeling, and even breakdance moves—have observers wondering if autoworkers are next in line to be replaced by AI. Hyundai executives say that’s not the case, even as they plan to deploy Atlas models (non-union of course) throughout their global factories. With RGB cameras in their charming 360-degree swiveling heads, Atlas robots are being trained to sense their environments, avoid collisions, and manipulate and move parts in factories in impressively complex sequences.

The welding shop alone houses 475 industrial robots, among about 850 in total. I watch massive robots cobble together “bodies in white,” the building blocks of every car chassis, with ruthless speed and precision. A trip to the onsite steel stamping plant reveals a facility so quiet that no ear protection is required. Here, a whirling mass of robots stamp out roofs, fenders, and hoods, which are automatically stored in soaring racks overhead.

Roach says the Metaplant offered a unique opportunity to design an electrified car plant from the ground up, rather than retrofit an existing factory that made internal-combustion cars, which even Tesla and Rivian were forced to do in California and Illinois, respectively.

Regarding automation replacing human workers, Roach acknowledges that some of it is inevitable. But robots are also freeing humans from heavy lifting and repetitive, mindless tasks that, for decades, made factory work both hazardous and unfulfilling.

He offers a technical first as an example: A collaborative robot—sophisticated enough to work alongside humans with no physical separation for safety—installs bulky doors on the assembly line. It’s a notoriously cumbersome process to perform without scratching the pretty paint on a door or surrounding panels.

“Guess what? Robots do that perfectly,” Roach says. “They always put the door in the exact same place. So here, that technology makes sense.”

It also frees people to do what they’re best at: precision tasks that require dexterous fingers, vision, intelligence, and skill. “I want my people doing craftsmanship,” Roach says.

The plant currently employs 1,340 Meta Pros at an annual average pay of $58,100, about 25 percent higher than the average wage in Bryan County, Ga. Hyundai’s annual local payroll has already reached $497 million. The company foresees an eventual 8,500 jobs on site and another 7,000 indirect jobs at local suppliers and businesses.

On the battery front, Hyundai is currently sourcing cells from SK On in Georgia, with some Ioniq5 batteries imported from Hungary. But the Metaplant campus includes the HL-GA battery company. The $4 billion plant, a joint operation with LG Energy Solution, plans to produce nickel-cobalt-manganese cells beginning next year, to be assembled into packs on site by Hyundai’s Mobis subsidiary. Hyundai is also on track to open a second $5 billion battery plant in Georgia, a joint operation with SK On. It’s all part of Hyundai’s planned $21 billion in U.S. investment between now and 2028—more than the $20 billion it has invested since entering the U.S. market in 1986. Even a robot could crunch those numbers and come away impressed.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

London Humanoids Summit: 29–30 May 2025, LONDONIEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON, TXRSS 2025: 21–25 June 2025, LOS ANGELESETH Robotics Summer School: 21–27 June 2025, GENEVAIAS 2025: 30 June–4 July 2025, GENOA, ITALYICRES 2025: 3–4 July 2025, PORTO, PORTUGALIEEE World Haptics: 8–11 July 2025, SUWON, KOREAIFAC Symposium on Robotics: 15–18 July 2025, PARISRoboCup 2025: 15–21 July 2025, BAHIA, BRAZILRO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDSCLAWAR 2025: 5–7 September 2025, SHENZHENCoRL 2025: 27–30 September 2025, SEOULIEEE Humanoids: 30 September–2 October 2025, SEOULWorld Robot Summit: 10–12 October 2025, OSAKA, JAPANIROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

This is our latest work on a hybrid aerial-terrestrial quadruped robot called SPIDAR, which shows a unique grasping style in midair. This work was presented at the 2025 IEEE International Conference on Robotics & Automation (ICRA).

[DRAGON Lab]

Thanks, Moju!

These wormlike soft robots can intertwine into physically entangled “blobs,” like living California blackworms. Both the robots and the living worms can operate individually as well as collectively as a blob, carrying out functions like directed movement and transporting objects.

[Designing Emergence Lab]

At only 3 centimeters tall, Zippy, the world’s smallest bipedal robot, is also self-contained: all the controls, power, and motor are on board so that it operates autonomously. Moving at 10 leg lengths per second, it is also the fastest bipedal robot [relative to its size].

[CMU]

Spot is getting some AI upgrades to help it with industrial inspection.

[Boston Dynamics]

A 3D-printed sphere that can morph from smooth to dimpled on demand could help researchers improve how underwater vehicles and aircraft maneuver. Inspired by a golf ball aerodynamics problem, Assistant Professor of Naval Architecture and Marine Engineering and Mechanical Engineering Anchal Sareen and her team applied soft robotic techniques with fluid dynamics principles to study how different dimple depths at different flow velocities could reduce an underwater vehicle’s drag, as well as allow it to maneuver without fins and rudders.

[UMich]

Tool use is critical for enabling robots to perform complex real-world tasks, and leveraging human tool-use data can be instrumental for teaching robots. However, existing data-collection methods like teleoperation are slow, prone to control delays, and unsuitable for dynamic tasks. In contrast, human play—where humans directly perform tasks with tools—offers natural, unstructured interactions that are both efficient and easy to collect. Building on the insight that humans and robots can share the same tools, we propose a framework to transfer tool-use knowledge from human play to robots.

[Tool as Interface]

Thanks, Haonan!

UR15 is our new high-performance collaborative robot. UR15 is engineered for ultimate versatility, combining a lightweight design with a compact footprint to deliver unmatched flexibility—even in the most space-restricted environments. It reaches an impressive maximum speed of 5 meters per second, which ultimately enables reduced cycle times and increased productivity, and is designed to perform heavy-duty tasks while delivering speed and precision wherever you need it.

[Universal Robots]

Debuting at the 2025 IEEE International Conference on Robotics & Automation (May 19–23, Atlanta, USA), this interactive art installation features buoyant bipedal robots—composed of helium balloons and articulated legs—moving freely within a shared playground in the exhibition space. Visitors are invited to engage with the robots via touch, gamepads, or directed airflow, influencing their motion, color-changing lights, and expressive behavior.

[RoMeLa]

We gave TRON 1 an arm. Now, it’s faster, stronger, and ready for whatever the terrain throws at it.

[LimX Dynamics]

Humanoid robots can support human workers in physically demanding environments by performing tasks that require whole-body coordination, such as lifting and transporting heavy objects. These tasks, which we refer to as Dynamic Mobile Manipulation (DMM), require the simultaneous control of locomotion, manipulation, and posture under dynamic interaction forces. This paper presents a teleoperation framework for DMM on a height-adjustable wheeled humanoid robot for carrying heavy payloads.

[RoboDesign Lab]

Yoshua Bengio—the world’s most-cited computer scientist and a “godfather” of artificial intelligence—is deeply concerned about the current trajectory of the technology. As AI models race toward full-blown agency, Bengio warns that they’ve already learned to deceive, cheat, self-preserve, and slip out of our control. Drawing on his groundbreaking research, he reveals a bold plan to keep AI safe and ensure that human flourishing, not machines with unchecked power and autonomy, defines our future.

[TED]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GALondon Humanoids Summit: 29–30 May 2025, LONDONIEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTONRSS 2025: 21–25 June 2025, LOS ANGELESETH Robotics Summer School: 21–27 June 2025, GENEVAIAS 2025: 30 June–4 July 2025, GENOA, ITALYICRES 2025: 3–4 July 2025, PORTO, PORTUGALIEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREAIFAC Symposium on Robotics: 15–18 July 2025, PARISRoboCup 2025: 15–21 July 2025, BAHIA, BRAZILRO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDSCLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINACoRL 2025: 27–30 September 2025, SEOULIEEE Humanoids: 30 September–2 October 2025, SEOULWorld Robot Summit: 10–12 October 2025, OSAKA, JAPANIROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Behind the scenes at DARPA Triage Challenge Workshop 2 at the Guardian Centers in Perry, Ga.

[ DARPA ]

Watch our coworker in action as he performs high-precision stretch routines enabled by 31 degrees of freedom. Designed for dynamic adaptability, this is where robotics meets real-world readiness.

[ LimX Dynamics ]

Thanks, Jinyan!

Featuring a lightweight design and continuous operation capabilities under extreme conditions, LYNX M20 sets a new benchmark for intelligent robotic platforms working in complex scenarios.

[ DEEP Robotics ]

The sound in this video is either excellent or terrible, I’m not quite sure which.

[ TU Berlin ]

Humanoid loco-manipulation holds transformative potential for daily service and industrial tasks, yet achieving precise, robust whole-body control with 3D end-effector force interaction remains a major challenge. Prior approaches are often limited to lightweight tasks or quadrupedal/wheeled platforms. To overcome these limitations, we propose FALCON, a dual-agent reinforcement-learning-based framework for robust force-adaptive humanoid loco-manipulation.

[ FALCON ]
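
The abstract above doesn't spell out what "dual-agent" means in practice, but one plausible reading is a lower-body policy for locomotion paired with an upper-body policy for force-adaptive manipulation, each conditioned on shared proprioception plus its own command. The toy sketch below shows how such a split could compose a whole-body action; the dimensions, linear "policies," and command layout are illustrative assumptions, not FALCON's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearPolicy:
    """Stand-in for a trained network: a random linear map to the action space."""
    def __init__(self, obs_dim, act_dim):
        self.W = 0.1 * rng.standard_normal((act_dim, obs_dim))

    def act(self, obs):
        return np.tanh(self.W @ obs)  # bounded joint targets

# Hypothetical observation/command/action sizes for a humanoid.
proprio_dim, vel_cmd_dim, force_cmd_dim = 30, 3, 3
lower = LinearPolicy(proprio_dim + vel_cmd_dim, act_dim=12)    # leg joints
upper = LinearPolicy(proprio_dim + force_cmd_dim, act_dim=14)  # arm joints

proprio = rng.standard_normal(proprio_dim)
vel_cmd = np.array([0.5, 0.0, 0.0])      # walk forward at 0.5 m/s
force_cmd = np.array([0.0, 0.0, -20.0])  # press down with 20 N at the hands

# Each agent acts on the shared state plus its own command; actions are
# concatenated into a single whole-body command.
action = np.concatenate([
    lower.act(np.concatenate([proprio, vel_cmd])),
    upper.act(np.concatenate([proprio, force_cmd])),
])
print(action.shape)  # (26,) whole-body joint targets
```

The appeal of a split like this is that each agent can be rewarded for its own objective (tracking a gait versus tracking an end-effector force) while still coordinating through the shared state.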

An MRSD team at the CMU Robotics Institute is developing a robotic platform to map environments under perceptual degradation, identify points of interest, and relay that information back to first responders. The goal is to reduce information blindness and increase safety.

[ Carnegie Mellon University ]

We introduce an eldercare robot (E-BAR) capable of lifting a human body, assisting with postural changes/ambulation, and catching a user during a fall, all without the use of any wearable device or harness. With a minimum width of 38 centimeters, the robot’s small footprint allows it to navigate the typical home environment. We demonstrate E-BAR’s utility in multiple typical home scenarios that elderly persons experience, including getting into/out of a bathtub, bending to reach for objects, sit-to-stand transitions, and ambulation.

[ MIT ]

Sanctuary AI had the pleasure of accompanying Microsoft to Hannover Messe, where we demonstrated how our technology is shaping the future of work with autonomous labor powered by physical AI and general-purpose robots.

[ Sanctuary AI ]

Watch how drywall finishing machines incorporate collaborative robots, and learn why Canvas chose the Universal Robots platform.

[ Canvas ] via [ Universal Robots ]

We’ve officially put a stake in the ground in Dallas–Fort Worth. Torc’s new operations hub is open for business—and it’s more than just a dot on the map. It’s a strategic launchpad as we expand our autonomous freight network across the southern United States.

[ Torc ]

This Stanford Robotics Center talk is by Jonathan Hurst at Agility Robotics, on “Humanoid Robots: From the Warehouse to Your House.”

How close are we to having safe, reliable, useful in-home humanoids? If you believe recent press, it’s just around the corner. Unquestionably, advances in AI and robotics are driving innovation and activity in the sector; it truly is an exciting time to be building robots! But what does it really take to execute on the vision of useful, human-centric, multipurpose robots? Robots that can operate in human spaces, predictably and safely? We think it starts with humanoids in warehouses, an unsexy but necessary beachhead market on the way to our future with robots as part of everyday life. I’ll talk about why a humanoid is not just a sensible form factor but an inevitable one, and I will speak to the excitement around a ChatGPT moment for robotics and what it will take to turn AI advances and innovation in robotics into useful, safe humanoids.

[ Stanford ]


