Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Behind the scenes at DARPA Triage Challenge Workshop 2 at the Guardian Centers in Perry, Ga.

[ DARPA ]

Watch our coworker in action as he performs high-precision stretch routines enabled by 31 degrees of freedom. Designed for dynamic adaptability, this is where robotics meets real-world readiness.

[ LimX Dynamics ]

Thanks, Jinyan!

Featuring a lightweight design and continuous operation capabilities under extreme conditions, LYNX M20 sets a new benchmark for intelligent robotic platforms working in complex scenarios.

[ DEEP Robotics ]

The sound in this video is either excellent or terrible, I’m not quite sure which.

[ TU Berlin ]

Humanoid loco-manipulation holds transformative potential for daily service and industrial tasks, yet achieving precise, robust whole-body control with 3D end-effector force interaction remains a major challenge. Prior approaches are often limited to lightweight tasks or quadrupedal/wheeled platforms. To overcome these limitations, we propose FALCON, a dual-agent reinforcement-learning-based framework for robust force-adaptive humanoid loco-manipulation.

[ FALCON ]

An MRSD team at the CMU Robotics Institute is developing a robotic platform to map environments under perceptual degradation, identify points of interest, and relay that information back to first responders. The goal is to reduce information blindness and increase safety.

[ Carnegie Mellon University ]

We introduce an eldercare robot (E-BAR) capable of lifting a human body, assisting with postural changes/ambulation, and catching a user during a fall, all without the use of any wearable device or harness. With a minimum width of 38 centimeters, the robot’s small footprint allows it to navigate the typical home environment. We demonstrate E-BAR’s utility in multiple typical home scenarios that elderly persons experience, including getting into/out of a bathtub, bending to reach for objects, sit-to-stand transitions, and ambulation.

[ MIT ]

Sanctuary AI had the pleasure of accompanying Microsoft to Hannover Messe, where we demonstrated how our technology is shaping the future of work with autonomous labor powered by physical AI and general-purpose robots.

[ Sanctuary AI ]

Watch how drywall finishing machines incorporate collaborative robots, and learn why Canvas chose the Universal Robots platform.

[ Canvas ] via [ Universal Robots ]

We’ve officially put a stake in the ground in Dallas–Fort Worth. Torc’s new operations hub is open for business—and it’s more than just a dot on the map. It’s a strategic launchpad as we expand our autonomous freight network across the southern United States.

[ Torc ]

This Stanford Robotics Center talk is by Jonathan Hurst at Agility Robotics, on “Humanoid Robots: From the Warehouse to Your House.”

How close are we to having safe, reliable, useful in-home humanoids? If you believe recent press, it’s just around the corner. Unquestionably, advances in AI and robotics are driving innovation and activity in the sector; it truly is an exciting time to be building robots! But what does it really take to execute on the vision of useful, human-centric, multipurpose robots? Robots that can operate in human spaces, predictably and safely? We think it starts with humanoids in warehouses, an unsexy but necessary beachhead market to our future with robots as part of everyday life. I’ll talk about why a humanoid is more than a sensible form factor; it’s inevitable. And I will speak to the excitement around a ChatGPT moment for robotics, and what it will take to leverage AI advances and innovation in robotics into useful, safe humanoids.

[ Stanford ]



Being long and skinny and wiggly is a strategy that’s been wildly successful for animals, ever since there have been animals, more or less. Roboticists, eternally jealous of biology, have taken notice of this, and have spent decades trying to build robotic versions of snakes, salamanders, worms, and more. There’s been some success, of a sort, although most of the robotic snakes and whatnot that we’ve seen have been for things like disaster relief, which is kind of just what you do when you have a robot with a novel movement strategy but without any other obvious practical application.

Dan Goldman at Georgia Tech has been working on bioinspired robotic locomotion for as long as anyone, and as it turns out, that’s exactly the amount of time that it takes to develop a long and skinny and wiggly robot with a viable commercial use case. Goldman has a new Atlanta-based startup called Ground Control Robotics (GCR) that’s bringing what are essentially giant robotic arthropods to agricultural crop management.


I’m not entirely sure what you’d call this—a robotic giant centipede might be the easiest description to agree on, I guess? But Goldman tells us that he doesn’t consider his robots to be bioinspired as much as they’re “robophysical” models of living systems. “I like the idea of carefully studying the animals,” Goldman says. “We use the models to test biological principles, discover new phenomena with them, and then bring those insights into hardened robots which can go outside of the lab.”

Centipede Robots for Crop Management

The robot itself is not that complicated, at least on the scale of how complicated robots usually are. It’s made up of a head with some sensors in it plus a handful of identical cable-connected segments, each with a couple of motors for leg actuation. On paper, this works out to be a lot of degrees of freedom, but you can get surprisingly good performance using relatively simple control techniques.

“Centipede robots, like snake robots, are basically swimmers,” Goldman says. The key difference is that adding legs expands the different kinds of environments through which swimming robots can move. The right pattern of lifting and lowering the legs generates a fluidlike thrust force that lets the robot push off of more of its surroundings as it moves, making its motion more consistent and reliable. “We created a new kind of mechanism to take actuation away from the centerline of the robot to the sides, using cables back and forth,” says Goldman. “When you tune things properly, the robot goes from being stiff to unidirectionally compliant. And if you do that, what you find is almost like magic—this thing swims through arbitrarily complex environments with no brain power.”
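
To make that concrete, here is a minimal sketch of an open-loop traveling-wave gait of the kind a segmented, many-legged robot can run. This is not GCR’s controller; the segment count, wave shape, and frequency are invented purely for illustration. The point is that phase-offset leg commands produce a wave along the body with no sensing in the loop.

```python
# Minimal sketch (not GCR's actual controller): an open-loop traveling-wave
# gait for a segmented, many-legged robot. Each segment's legs follow a
# sinusoid with a fixed phase offset along the body, so lifting/lowering and
# fore-aft sweeping propagate as a wave with no feedback involved.
import math

NUM_SEGMENTS = 8           # hypothetical segment count
WAVELENGTHS_ON_BODY = 1.5  # how many waves fit along the body at once
STEP_HZ = 1.0              # leg cycle frequency in hertz

def leg_commands(t):
    """Return (lift, sweep) commands in [-1, 1] for each segment at time t."""
    commands = []
    for i in range(NUM_SEGMENTS):
        phase = 2 * math.pi * (STEP_HZ * t - WAVELENGTHS_ON_BODY * i / NUM_SEGMENTS)
        lift = math.sin(phase)   # > 0: leg raised, < 0: leg pressing on the ground
        sweep = math.cos(phase)  # fore-aft position of the leg within its stroke
        commands.append((lift, sweep))
    return commands

# Example: sample the wave a quarter second into the gait.
for segment, (lift, sweep) in enumerate(leg_commands(0.25)):
    print(f"segment {segment}: lift={lift:+.2f} sweep={sweep:+.2f}")
```

Each segment needs only its own index and the current time, which is what lets the whole body “swim” without any central brain power.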

The complex environments that the robot is designed for are agricultural. Think sensing and weed control in fields, but don’t think about gentle rolling hills lined with neat rows of crops. That kind of farming is very amenable to automation at scale, and there are plenty of robotics companies in that space already. Not all plants grow in well-kept rows on mostly flat ground, however: Perennial crops, where the plant itself sticks around and you harvest stuff off of it every year, can be much more complicated to manage. This is especially true for crops like wine grapes, which can grow on very steep and often rocky slopes. Those kinds of environments are an opportunity for GCR’s robots, offering an initial use case that brings the robot from academic curiosity to something with unique commercial potential.

Wiggly antennae-like structures help the robot to climb over obstacles taller than itself. [Ground Control Robotics]

“Robotics researchers tend to treat robots as one-off demonstrations of a theory or principle,” Goldman says. “You get the darn thing to work, you submit it to [the International Conference on Robotics and Automation], and then you go onto the next thing. But we’ve had to build in robustness from the get-go, because our robots are experimental physics tools.” Much of the research that Goldman does in his lab is on using these robo-physical models to try to systematically test and (hopefully) understand how animals move the way that they do. “And that’s where we started to see that we could have these robots not just be laboratory toys,” says Goldman, “but that they could become a minimum viable product.”

Automated Weed-Control Solutions

According to GCR, there is currently no automated solution for weed control around scraggly bushy or vinelike plants (like blueberries or strawberries or grapes), and farmers can spend an enormous amount of money having humans crawl around under the plants to check health and pull weeds. GCR estimates that weed control for blueberries in California can run US $300 per acre or more, and strawberries are even worse, sometimes more than $1,000 per acre. It’s not a fun job, and it’s getting increasingly difficult to find humans willing to do it. For farmers who don’t want to spray pesticides, there aren’t a lot of good options, and GCR thinks that its robotic centipedes could fill that niche.

An obvious question with any novel robotic mobility system is whether you could accomplish basically the same thing with a system that’s much less novel. Like, quadrupeds are getting pretty good these days, why not just use one of them? Or a wheeled robot, for that matter? “We want to send the robot as close to the crops as possible,” says Goldman. “And we don’t want a bigger, clunkier machine to destroy those fields.” This gets back to the clutter problem: A robot large enough to ignore clutter could cause damage, and most robots small enough not to damage clutter become a nightmare of a control problem.

When most of the obstacles that robots encounter are at a comparable scale to themselves, control becomes very difficult. “The terrain reaction forces are almost impossible to predict,” explains Goldman, which means that the robot’s mobility regime gets dominated by environmental noise. One approach would be to try to model all of this noise and the resulting dynamics and implement some kind of control policy, but it turns out that there’s a much simpler strategy: more legs. “It’s possible to generate reliable motion without any sensing at all,” says Goldman, “if we have a lot of legs.”
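
The “more legs” argument can be illustrated with a toy calculation: if each leg’s push against the terrain is noisy, the net thrust per cycle gets steadier as legs are added. The numbers below are invented, and this is not a model of GCR’s hardware, just the statistical intuition.

```python
# Toy illustration of the "more legs" argument (not a model of GCR's robots):
# if each leg's push against the terrain is noisy, the average thrust across
# all legs becomes more predictable as the leg count grows, with no sensing.
import random
import statistics

def thrust_variability(num_legs, trials=2000):
    """Standard deviation of the per-cycle average thrust, with noisy legs."""
    samples = []
    for _ in range(trials):
        pushes = [1.0 + random.gauss(0, 0.5) for _ in range(num_legs)]  # noisy per-leg push
        samples.append(sum(pushes) / num_legs)
    return statistics.stdev(samples)

for legs in (4, 8, 16, 32):
    print(f"{legs:>2} legs: thrust variability ~ {thrust_variability(legs):.3f}")
# Variability falls roughly as 1/sqrt(num_legs): legs stand in for sensors.
```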

For this design of robot, adding more legs is easy, which is another advantage of this type of mobility over something like a quadruped. Each of GCR’s robots will cost a lot less than you probably think—likely in the thousand-dollar range, because the leg modules themselves are relatively cheap, and most of the intelligence is mechanical rather than sense-based or compute-based. The concept is that a decentralized swarm of these robots would operate in fields 24/7—just scouting for now, where there’s still a substantial amount of value, and then eventually physically ripping out weeds with some big robotic centipede jaws (or maybe even lasers!) for a lower cost than any other option.

Eventually, these robots will operate autonomously in swarms, and could also be useful for applications like disaster response. [Ground Control Robotics]

Ground Control Robotics is currently working with a blueberry farmer and a vineyard owner in Georgia on pilot projects to refine the mobility and sensing capabilities of the robots within the next few months. Obviously, there are options to expand into disaster relief (for real) and perhaps even military applications, although Goldman tells us that different environments might require different limb configurations or the ability to tuck the limbs away entirely. I do appreciate that GCR is starting with an application that will likely take a lot more work but also offer a lot more potential. It’s not often that we get to see such a direct transition between novel robotics research and a commercial product, and while it’s certainly going to be a challenge, I’ve already put my backyard garden on the waiting list.



The main assumption about humanoid robotics that the industry is making right now is that the most realistic near-term pathway to actually making money is in either warehouses or factories. It’s easy to see where this assumption comes from: Repetitive tasks requiring strength or flexibility in well-structured environments is one place where it really seems like robots could thrive, and if you need to make billions of dollars (because somehow that’s how much your company is valued at), it doesn’t appear as though there are a lot of other good options.

Cartwheel Robotics is trying to do something different with humanoids. Cartwheel is more interested in building robots that people can connect with, with the eventual goal of general-purpose home companionship. Founder Scott LaValley describes Cartwheel’s robot as “a small, friendly humanoid robot designed to bring joy, warmth, and a bit of everyday magic into the spaces we live in. It’s expressive, emotionally intelligent, and full of personality—not just a piece of technology but a presence you can feel.”

This rendering shows the design and scale of Cartwheel’s humanoid prototype. [Cartwheel]

Historically, making a commercially viable social robot is a huge challenge. A little less than a decade ago, a series of social home robots (backed by a substantial amount of investment) tried very, very hard to justify themselves to consumers and did not succeed. Whether the fundamental problems with the concept of social home robots (namely, cost and interactive novelty) have been solved at this point isn’t totally clear, but Cartwheel is making things even more difficult for themselves by going the humanoid route, legs and all. That means dealing with all kinds of problems from motion planning to balancing to safety, all in a way that’s reliable enough for the robot to operate around children.

LaValley is arguably one of the few people who could plausibly make a commercial social humanoid actually happen. His extensive background in humanoid robotics includes nearly a decade at Boston Dynamics working on the Atlas robots, followed by five years at Disney, where he led the team that developed Disney’s Baby Groot robot.

Building Robots to Be People’s Friends

In humanoid robot terms, there’s quite a contrast between the versions of Atlas that LaValley worked on (DRC Atlas in particular) and Baby Groot. They’re obviously designed and built to do very different things, but LaValley says that what really struck him was how his kids reacted when he introduced them to the robots he was working on. “At Boston Dynamics, we were known for terrifying robots,” LaValley remembers. “I was excited to work on the Atlas robots because they were cool technology, but my kids would look at them and go, ‘That’s scary.’ At Disney, I brought my kids in and they would light up with a big smile on their face and ask, ‘Is that really Baby Groot? Can I give it a hug?’ And I thought, this is the type of experience I want to see robots delivering.” While Baby Groot was never a commercial project, for LaValley it marked a pivotal milestone in emotional robotics that shaped his vision for Cartwheel: “Seeing how my kids connected with Baby Groot reframed what robots could and should evoke.”

The current generation of commercial humanoids is pretty much the opposite of what LaValley is looking for. You could argue that this is because they’re designed to do work, rather than be anyone’s friend, but many of the design choices seem to be based on the sort of thing that would be the most eye-catching to the public (and investors) in a rather boringly “futuristic” way. And look, there are plenty of good reasons why you might want to very deliberately design a humanoid with commercial (or at least industrial) aspirations to look or not look a certain way, but for better or worse, nobody is going to like those robots. Respect them? Sure. Think they’re cool? Probably. Want to be friends with them? Not likely. And for Cartwheel, this is the opportunity, LaValley says. “These humanoid robots are built to be tools. They lack personality. They’re soulless. But we’re designing a robot to be a humanoid that humans will want in their day-to-day lives.”

Eventually, Cartwheel’s robots will likely need to be practical (as this rendering suggests) in order to find a place in people’s homes. [Cartwheel]

Yogi is one of Cartwheel’s prototypes, which LaValley describes as having “toddler proportions,” which are the key to making it appear friendly and approachable. “It has rounded lines, with a big head, and it’s even a little chubby. I don’t see a robot when I see Yogi; I see a character.” A second prototype, called Speedy, is a bit less complicated and is intended to be more of a near-term customizable commercial platform. Think something like Baby Groot, except available as any character you like, and to companies who aren’t Disney. LaValley tells us that a version of Speedy with a special torso designed for a “particular costume” is headed to a customer in the near future.

As the previous generation of social robots learned the hard way, it takes a lot more than good looks for a robot to connect with humans over the long term. Somewhat inevitably, LaValley sees AI as one potential answer to this, since it might offer a way of preserving novelty by keeping interactions fresh. This extends beyond verbal interactions, too, and Cartwheel is experimenting with using AI for whole-body motion generation, where each robot behavior will be unique, even under the same conditions or when given the same inputs.

Cartwheel’s Home Robots Plan

While Cartwheel is starting with a commercial platform, the end goal is to put these small social humanoids into homes. This means considering safety and affordability in a way that doesn’t really apply to humanoids that are designed to work in warehouses or factories. The small size of Cartwheel’s robots will certainly help with both of those things, but we’re still talking about a robot that’s likely to cost a significant amount. Certainly more than a major appliance, although perhaps not as much as a new car, was as much as LaValley was willing to commit to at this point. With that kind of price comes high expectations, and for most people, the only way to justify buying a home humanoid will be if it can somehow be practical as well as lovable.

LaValley is candid about the challenge here: “I don’t have all the answers,” he says. “There’s a lot to figure out.” One approach that’s becoming increasingly common with robots is to go with a service model, where the robot is essentially being rented in the same way that you might pay for the services of a housekeeper or gardener. But again, for that to make sense, Cartwheel’s robots will have to justify themselves financially. “This problem won’t be solved in the next year, or maybe not even in the next five years,” LaValley says. “There are a lot of things we don’t understand—this is going to take a while. We have to work our way to understanding and then addressing the problem set, and our approach is to find development partners and get our robots out into the real world.”

[ Cartwheel ]

Cartwheel has been in business for three years now, and got off the ground by providing robotics engineering services to corporate customers. That, along with an initial funding round, allowed LaValley to bootstrap the development of Cartwheel’s own robots, and he expects to deliver a couple dozen variations on Speedy to places like museums and science centers over the next 12 months.

The dream, though, is small home robots that are both companionable and capable, and LaValley is even willing to throw around terms like “general purpose.” “Capability increases over time,” he says, “and maybe our robots will be able to do more than just play with your kids or pick up a few items around the house. I see all robots eventually moving towards general purpose. Our strategy is not to get to general purpose on day one, or even get into the home day one. But we’re working towards that goal. That’s our north star.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
ICRA 2025: 19–23 May 2025, ATLANTA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN, CHINA
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

Today I learned that “hippotherapy” is not quite what I wanted it to be.

The integration of KUKA robots into robotic physiotherapy equipment offers numerous advantages, such as precise motion planning and control of robot-assisted therapy, individualized training, reduced therapist workload and patient-progress monitoring. As a result, these robotic therapies can be superior to many conventional physical therapies in restabilizing patients’ limbs.

[ Kuka ]

MIT engineers are getting in on the robotic ping-pong game with a powerful, lightweight design that returns shots with high-speed precision. The new table-tennis bot comprises a multijointed robotic arm that is fixed to one end of a ping-pong table and wields a standard ping-pong paddle. Aided by several high-speed cameras and a high-bandwidth predictive control system, the robot quickly estimates the speed and trajectory of an incoming ball and executes one of several swing types—loop, drive, or chop—to precisely hit the ball to a desired location on the table with various types of spin.
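
As a rough illustration of what that prediction step involves (this is not MIT’s code, and the geometry and numbers here are invented), the controller has to turn camera estimates of the ball’s position and velocity into a predicted crossing point at the paddle plane:

```python
# Hypothetical sketch of ball-intercept prediction: given an estimated position
# and velocity from the cameras, predict where and when the ball crosses the
# paddle plane. Gravity only; a real system also models drag, spin, and bounce.
G = 9.81               # gravitational acceleration, m/s^2
PADDLE_PLANE_X = 1.4   # assumed distance from the ball to the paddle plane, meters

def intercept(pos, vel):
    """pos, vel: (x, y, z) in meters and m/s; returns (time, y, z) at the plane."""
    t = (PADDLE_PLANE_X - pos[0]) / vel[0]       # time until the ball reaches the plane
    y = pos[1] + vel[1] * t                      # lateral position at that time
    z = pos[2] + vel[2] * t - 0.5 * G * t * t    # height, including ballistic drop
    return t, y, z

t, y, z = intercept(pos=(0.0, 0.1, 0.3), vel=(6.0, -0.2, 1.0))
print(f"ball reaches the paddle plane in {t:.3f} s at y={y:+.2f} m, z={z:+.2f} m")
```

The swing type and paddle angle are then chosen from that predicted crossing point, which is where the “loop, drive, or chop” decision comes in.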

[ MIT News ]

Pan flipping involves dynamically flipping various objects, such as eggs, burger buns, and meat patties. This demonstrates precision, agility, and the ability to adapt to different challenges in motion control. Our framework enables robots to learn highly dynamic movements.

[ GitHub ] via [ Human Centered Autonomy Lab ]

Thanks, Haonan!

An edible robot made by EPFL scientists leverages a combination of biodegradable fuel and surface tension to zip around the water’s surface, creating a safe—and nutritious—alternative to environmental monitoring devices made from artificial polymers and electronics.

[ EPFL ]

Traditional quadcopters excel in flight agility and maneuverability but often face limitations in hovering efficiency and horizontal field of view. Nature-inspired rotary wings, while offering a broader perspective and enhanced hovering efficiency, are hampered by substantial angular momentum restrictions. In this study, we introduce QuadRotary, a novel vehicle that integrates the strengths of both flight characteristics through a reconfigurable design.

[ Paper ] via [ Singapore University of Technology and Design ]

I like the idea of a humanoid that uses jumping as a primary locomotion mode not because it has to, but because it’s fun.

[ PAL Robotics ]

I had not realized how much nuance there is to digging stuff up with a shovel.

[ Intelligent Motion Laboratory ]

A new 10,000-gallon [38,000-liter] water tank at the University of Michigan will help researchers design, build, and test a variety of autonomous underwater systems that could help robots map lakes and oceans and conduct inspections of ships and bridges. The tank, funded by the Office of Naval Research, allows roboticists to further test projects on robot control and behavior, marine sensing and perception, and multivehicle coordination.

“The lore is that this helps to jump-start research, as each testing tank is a living reservoir for all of the knowledge gained from within it,” said Jason Bundoff, lead engineer in research at U-M’s Friedman Marine Hydrodynamics Laboratory. “You mix the waters from other tanks to imbue the newly founded tank with all of that living knowledge from the other tanks, which helps to keep the knowledge from being lost.”

[ Michigan Robotics ]

If you have a humanoid robot and you’re wondering how it should communicate, here’s the answer.

[ Pollen ]

Whose side are you on, Dusty?

Even construction robots should be mindful about siding with the Empire, though there can be consequences!


[ Dusty Robotics ]

This Michigan Robotics Seminar is by Danfei Xu from Georgia Tech, on “Generative Task and Motion Planning.”

Long-horizon planning is fundamental to our ability to solve complex physical problems, from using tools to cooking dinners. Despite recent progress in commonsense-rich foundation models, the ability to do the same is still lacking in robots, particularly with learning-based approaches. In this talk, I will present a body of work that aims to transform Task and Motion Planning—one of the most powerful computational frameworks in robot planning—into a fully generative model framework, enabling compositional generalization in a largely data-driven approach.

[ Michigan Robotics ]



At an event in Dortmund, Germany today, Amazon announced a new robotic system called Vulcan, which the company is calling “its first robotic system with a genuine sense of touch—designed to transform how robots interact with the physical world.” In the short to medium term, the physical world that Amazon is most concerned with is its warehouses, and Vulcan is designed to assist (or take over, depending on your perspective) with stowing and picking items in its mobile robotic inventory system.

In two upcoming papers in IEEE Transactions on Robotics, Amazon researchers describe how both the stowing and picking side of the system operates. We covered stowing in detail a couple of years ago, when we spoke with Aaron Parness, the director of applied science at Amazon Robotics. Parness and his team have made a lot of progress on stowing since then, improving speed and reliability over more than 500,000 stows in operational warehouses to the point where the average stowing robot is now slightly faster than the average stowing human. We spoke with Parness to get an update on stowing, as well as an in-depth look at how Vulcan handles picking, which you can find in this separate article. It’s a much different problem, and well worth a read.

Optimizing Amazon’s Stowing Process

Stowing is the process by which Amazon brings products into its warehouses and adds them to its inventory so that you can order them. Not surprisingly, Amazon has gone to extreme lengths to optimize this process to maximize efficiency in both space and time. Human stowers are presented with a mobile robotic pod full of fabric cubbies (bins) with elastic bands across the front of them to keep stuff from falling out. The human’s job is to find a promising space in a bin, pull the elastic band aside, and stuff the thing into that space. The item’s new home is recorded in Amazon’s system, the pod then drives back into the warehouse, and the next pod comes along, ready for the next item.

Different manipulation tools are used to interact with human-optimized bins. [Amazon]

The new paper on stowing includes some interesting numbers about Amazon’s inventory-handling process that help put the scale of the problem in perspective. More than 14 billion items are stowed by hand every year at Amazon warehouses. Amazon is hoping that Vulcan robots will be able to stow 80 percent of these items at a rate of 300 items per hour, while operating 20 hours per day. It’s a very, very high bar.
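
Some back-of-the-envelope arithmetic, using only the figures quoted above, gives a sense of how high that bar is (this ignores downtime, demand peaks, and ramp-up):

```python
# Back-of-the-envelope only, from the numbers above: how many Vulcan stow
# stations would it take to cover 80 percent of 14 billion hand-stowed items
# per year at 300 items/hour, 20 hours/day?
items_per_year = 14e9 * 0.80            # target share of hand-stowed items
items_per_robot_year = 300 * 20 * 365   # about 2.19 million items per robot per year
robots_needed = items_per_year / items_per_robot_year
print(round(robots_needed))             # roughly 5,100 stow stations
```

That is on the order of five thousand stow stations running essentially year-round, which is why the per-station speed and reliability numbers matter so much.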

After a lot of practice, Amazon’s robots are now quite good at the stowing task. Parness tells us that the stow system is operating three times as fast as it was 18 months ago, meaning that it’s actually a little bit faster than an average human. This is exciting, but as Parness explains, expert humans still put the robots to shame. “The fastest humans at this task are like Olympic athletes. They’re far faster than the robots, and they’re able to store items in pods at much higher densities.” High density is important because it means that more stuff can fit into warehouses that are physically closer to more people, which is especially relevant in urban areas where space is at a premium. The best humans can get very creative when it comes to this physical three-dimensional “Tetris-ing,” which the robots are still working on.

Where robots do excel is planning ahead, and this is likely why the average robot stower is now able to outpace the average human stower—Tetris-ing is a mental process, too. In the same way that good Tetris players are thinking about where the next piece is going to go, not just the current piece, robots are able to leverage a lot more information than humans can to optimize what gets stowed where and when, says Parness. “When you’re a person doing this task, you’ve got a buffer of 20 or 30 items, and you’re looking for an opportunity to fit those items into different bins, and having to remember which item might go into which space. But the robot knows all of the properties of all of our items at once, and we can also look at all of the bins at the same time along with the bins in the next couple of pods that are coming up. So we can do this optimization over the whole set of information in 100 milliseconds.”
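
As an illustration of the kind of lookahead the quote describes, here is a minimal greedy sketch that assigns a buffer of items to candidate bins by least wasted space. It is not Amazon’s planner, which weighs far more than volume; every name and number here is invented:

```python
# Hypothetical sketch of buffer-wide stow planning: instead of placing items
# one at a time, consider every buffered item against every candidate bin
# (including bins in upcoming pods) and repeatedly take the tightest fit.
def plan_stows(items, bins):
    """items: {name: volume}; bins: {bin_id: free volume}. Returns [(item, bin_id)]."""
    items, bins = dict(items), dict(bins)
    assignments = []
    while items:
        # Every item against every bin it fits in, not just the next item in line.
        candidates = [(free - vol, name, bin_id)
                      for name, vol in items.items()
                      for bin_id, free in bins.items() if vol <= free]
        if not candidates:
            break                            # nothing left in the buffer fits anywhere
        waste, name, bin_id = min(candidates)
        assignments.append((name, bin_id))
        bins[bin_id] -= items.pop(name)      # shrink that bin's remaining free space
    return assignments

print(plan_stows({"mug": 2.0, "book": 1.5, "lamp": 4.0},
                 {"bin_A": 3.0, "bin_B": 5.0}))
```

The point is only that a planner that sees the whole buffer and all candidate bins at once can avoid boxing itself in the way a human working item by item sometimes does.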

Essentially, robots are far better at optimization within the planning side of Tetrising, while humans are (still) far better at the manipulation side, but that gap is closing as robots get more experienced at operating in clutter and contact. Amazon has had Vulcan stowing robots operating for over a year in live warehouses in Germany and Washington state to collect training data, and those robots have successfully stowed hundreds of thousands of items.

Stowing is of course only half of what Vulcan is designed to do. Picking offers all kinds of unique challenges too, and you can read our in-depth discussion with Parness on that topic right here.



As far as I can make out, Amazon’s warehouses are highly structured, extremely organized, very tidy, absolute raging messes. Everything in an Amazon warehouse is (usually) exactly where it’s supposed to be, which is typically jammed into some pseudorandom fabric bin the size of a shoebox along with a bunch of other pseudorandom crap. Somehow, this turns out to be the most space- and time-efficient way of doing things, because (as we’ve written about before) you have to consider the process of stowing items away in a warehouse as well as the process of picking them, and that involves some compromises in favor of space and speed.

For humans, this isn’t so much of a problem. When someone orders something on Amazon, a human can root around in those bins, shove some things out of the way, and then pull out the item that they’re looking for. This is exactly the sort of thing that robots tend to be terrible at, because not only is this process slightly different every single time, it’s also very hard to define exactly how humans go about it.

As you might expect, Amazon has been working very, very hard on this picking problem. Today at an event in Germany, the company announced Vulcan, a robotic system that can both stow and pick items at human(ish) speeds.

Last time we talked with Aaron Parness, the director of applied science at Amazon Robotics, our conversation was focused on stowing—putting items into bins. As part of today’s announcement, Amazon revealed that its robots are now slightly faster at stowing than the average human is. But in the stow context, there’s a limited amount that a robot really has to understand about what’s actually happening in the bin. Fundamentally, the stowing robot’s job is to squoosh whatever is currently in a bin as far to one side as possible in order to make enough room to cram a new item in. As long as the robot is at least somewhat careful not to crushify anything, it’s a relatively straightforward task, at least compared to picking.

The choices made when an item is stowed into a bin will affect how hard it is to get that item out of that bin later on—this is called “bin etiquette.” Amazon is trying to learn bin etiquette with AI to make picking more efficient. Credit: Amazon

The defining problem of picking, as far as robots are concerned, is sensing and manipulation in clutter. “It’s a naturally contact-rich task, and we have to plan on that contact and react to it,” Parness says. And it’s not enough to solve these problems slowly and carefully, because Amazon Robotics is trying to put robots in production, which means that its systems are being directly compared to a not-so-small army of humans who are doing this exact same job very efficiently.

“There’s a new science challenge here, which is to identify the right item,” explains Parness. The thing to understand about identifying items in an Amazon warehouse is that there are a lot of them: something like 400 million unique items. One single floor of an Amazon warehouse can easily contain 15,000 pods, which is over a million bins, and Amazon has several hundred warehouses. This is a lot of stuff.

In theory, Amazon knows exactly which items are in every single bin. Amazon also knows (again, in theory) the weight and dimensions of each of those items, and probably has some pictures of each item from previous times that the item has been stowed or picked. This is a great starting point for item identification, but as Parness points out, “We have lots of items that aren’t feature rich—imagine all of the different things you might get in a brown cardboard box.”
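
One simple (and purely illustrative) way to use that catalog metadata is to score every SKU that is supposed to be in a bin by how well its recorded weight and dimensions explain what the sensors actually measure. The field names and error weighting below are our own assumptions, not Amazon’s identification pipeline.

```python
from dataclasses import dataclass

@dataclass
class CatalogEntry:
    sku: str
    weight_g: float
    dims_cm: tuple[float, float, float]   # recorded length, width, height

def mismatch(measured_weight_g: float,
             measured_dims_cm: tuple[float, float, float],
             entry: CatalogEntry) -> float:
    """Lower is better: how far the measurement is from the catalog values."""
    weight_err = abs(measured_weight_g - entry.weight_g) / max(entry.weight_g, 1.0)
    dim_err = sum(abs(m - c) for m, c in
                  zip(sorted(measured_dims_cm), sorted(entry.dims_cm)))
    return weight_err + dim_err / max(sum(entry.dims_cm), 1.0)

def identify(measured_weight_g, measured_dims_cm, bin_manifest):
    """Pick the manifest entry that best explains the measurement."""
    return min(bin_manifest,
               key=lambda e: mismatch(measured_weight_g, measured_dims_cm, e))
```

That works until the bin holds three nearly identical brown boxes, which is exactly the feature-poor case Parness is worried about.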

Clutter and Contact

As challenging as it is to correctly identify an item in a bin that may be stuffed to the brim with nearly identical items, an even bigger challenge is actually getting the item you just identified out of the bin. The hardware and software that humans have for doing this task are unmatched by any robot, which is always a problem, but the real complicating factor is dealing with items that are all jumbled together in a small fabric bin. And the picking process itself involves more than just extraction—once the item is out of the bin, you then have to get it to the next order-fulfillment step, which means dropping it into another bin or putting it on a conveyor or something.

“When we were originally starting out, we assumed we’d have to carry the item over some distance after we pulled it out of the bin,” explains Parness. “So we were thinking we needed pinch grasping.” A pinch grasp is when you grab something between a finger (or fingers) and your thumb, and at least for humans, it’s a versatile and reliable way of grabbing a wide variety of stuff. But as Parness notes, for robots in this context, it’s more complicated: “Even pinch grasping is not ideal because if you pinch the edge of a book, or the end of a plastic bag with something inside it, you don’t have pose control of the item and it may flop around unpredictably.”

At some point, Parness and his team realized that while an item did have to move farther than just out of the bin, it didn’t actually have to get moved by the picking robot itself. Instead, they came up with a lifting conveyor that positions itself directly outside of the bin being picked from, so that all the robot has to do is get the item out of the bin and onto the conveyor. “It doesn’t look that graceful right now,” admits Parness, but it’s a clever use of hardware to substantially simplify the manipulation problem, and has the side benefit of allowing the robot to work more efficiently, since the conveyor can move the item along while the arm starts working on the next pick.

Amazon’s robots have different techniques for extracting items from bins, using different gripping hardware depending on what needs to be picked. The type of end effector that the system chooses and the grasping approach depend on what the item is, where it is in the bin, and also what it’s next to. It’s a complicated planning problem that Amazon is tackling with AI, as Parness explains. “We’re starting to build foundation models of items, including properties like how squishy they are, how fragile they are, and whether they tend to get stuck on other items or not. So we’re trying to learn those things, and it’s early stage for us, but we think reasoning about item properties is going to be important to get to that level of reliability that we need.”
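
A toy version of that selection step might look like the sketch below. The gripper names, the item properties, and the thresholds are all hypothetical stand-ins for whatever Amazon’s learned models actually predict; the point is only that the choice is driven by item properties and by what the item is next to.

```python
from dataclasses import dataclass

@dataclass
class ItemEstimate:
    has_flat_face: bool        # could a suction cup seal against it?
    squishiness: float         # 0 = rigid, 1 = very deformable
    fragility: float           # 0 = robust, 1 = very fragile
    wedged_by_neighbor: bool   # pinned against something else in the bin

def choose_end_effector(est: ItemEstimate) -> str:
    """Pick a (hypothetical) end effector from coarse item properties."""
    if est.wedged_by_neighbor:
        return "nudge_then_suction"    # make room before committing to a grasp
    if est.has_flat_face and est.fragility < 0.7:
        return "suction"
    if est.squishiness > 0.5:
        return "wide_pinch"            # deformable items need a broad, gentle grip
    return "pinch"
```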

Reliability has to be superhigh for Amazon (as with many other commercial robotic deployments) simply because small errors multiplied over huge deployments result in an unacceptable amount of screwing up. There’s a very, very long tail of unusual things that Amazon’s robots might encounter when trying to extract an item from a bin. Even if there’s some particularly weird bin situation that might only show up once in a million picks, that still ends up happening many times per day on the scale at which Amazon operates. Fortunately for Amazon, they’ve got humans around, and part of the reason that this robotic system can be effective in production at all is that if the robot gets stuck, or even just sees a bin that it knows is likely to cause problems, it can just give up, route that particular item to a human picker, and move on to the next one.
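
That escape hatch is simple to express: every pick comes with a confidence estimate, and anything below threshold gets routed to a person. The threshold and function below are hypothetical; they just show the shape of the logic.

```python
from queue import Queue
from typing import Optional

HANDOFF_THRESHOLD = 0.95   # hypothetical cutoff; tune to the acceptable error rate

def dispatch_pick(grasp_confidence: Optional[float],
                  item_id: str,
                  human_queue: Queue) -> str:
    """Send low-confidence (or unplannable) picks to a human picker instead."""
    if grasp_confidence is None or grasp_confidence < HANDOFF_THRESHOLD:
        human_queue.put(item_id)   # a person handles the one-in-a-million bin
        return "deferred_to_human"
    return "robot_pick"
```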

The other new technique that Amazon is implementing is a sort of modern approach to “visual servoing,” where the robot watches itself move and then adjusts its movement based on what it sees. As Parness explains: “It’s an important capability because it allows us to catch problems before they happen. I think that’s probably our biggest innovation, and it spans not just our problem, but problems across robotics.”
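
In its classical image-based form, visual servoing is just a feedback loop: measure the error between where a tracked feature appears in the camera image and where you want it to be, and command a small corrective motion every frame. The proportional-only loop below is a textbook illustration of that idea, not Amazon’s implementation.

```python
import numpy as np

def servo_step(feature_px: np.ndarray, target_px: np.ndarray,
               gain: float = 0.5) -> np.ndarray:
    """One iteration of a proportional image-based servo loop.

    Returns a correction in image space; a real controller would map this
    through a camera/robot Jacobian into end-effector velocities.
    """
    return gain * (target_px - feature_px)

# The loop nudges the feature toward the target a little on every frame,
# catching drift before it turns into a failed grasp.
feature = np.array([320.0, 260.0])   # where the feature currently appears (pixels)
target = np.array([300.0, 240.0])    # where we want it
for _ in range(20):
    feature += servo_step(feature, target)
```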

A (More) Automated Future

Parness was very clear that (for better or worse) Amazon isn’t thinking about its stowing and picking robots in terms of replacing humans completely. There’s that long tail of items that need a human touch, and it’s frankly hard to imagine any robotic-manipulation system capable enough to make at least occasional human help unnecessary in an environment like an Amazon warehouse, which somehow manages to maximize organization and chaos at the same time.

These stowing and picking robots have been undergoing live testing in an Amazon warehouse in Germany for the past year, where they’re already demonstrating ways in which human workers could directly benefit from their presence. For example, Amazon pods can be up to 2.5 meters tall, meaning that human workers need to use a stepladder to reach the highest bins and bend down to reach the lowest ones. If the robots were primarily tasked with interacting with these bins, it would help humans work faster while putting less stress on their bodies.

With the robots so far managing to keep up with human workers, Parness tells us that the emphasis going forward will be primarily on getting better at not screwing up: “I think our speed is in a really good spot. The thing we’re focused on now is getting that last bit of reliability, and that will be our next year of work.” While it may seem like Amazon is optimizing for its own very specific use cases, Parness reiterates that the bigger picture here is using every last one of those 400 million items jumbled into bins as a unique opportunity to do fundamental research on fast, reliable manipulation in complex environments.

“If you can build the science to handle high contact and high clutter, we’re going to use it everywhere,” says Parness. “It’s going to be useful for everything, from warehouses to your own home. What we’re working on now are just the first problems that are forcing us to develop these capabilities, but I think it’s the future of robotic manipulation.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICUAS 2025: 14–17 May 2025, CHARLOTTE, N.C.
ICRA 2025: 19–23 May 2025, ATLANTA
London Humanoids Summit: 29–30 May 2025, LONDON
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
2025 Energy Drone & Robotics Summit: 16–18 June 2025, HOUSTON
RSS 2025: 21–25 June 2025, LOS ANGELES
ETH Robotics Summer School: 21–27 June 2025, GENEVA
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, SOUTH KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL
RO-MAN 2025: 25–29 August 2025, EINDHOVEN, THE NETHERLANDS
CLAWAR 2025: 5–7 September 2025, SHENZHEN
CoRL 2025: 27–30 September 2025, SEOUL
IEEE Humanoids: 30 September–2 October 2025, SEOUL
World Robot Summit: 10–12 October 2025, OSAKA, JAPAN
IROS 2025: 19–25 October 2025, HANGZHOU, CHINA

Enjoy today’s videos!

The LYNX M20 series represents the world’s first wheeled-legged robot built specifically for challenging terrains and hazardous environments during industrial operation. Featuring lightweight design with extreme-environment endurance, it conquers rugged mountain trails, muddy wetlands and debris-strewn ruins—pioneering embodied intelligence in power inspection, emergency response, logistics, and scientific exploration.

[ DEEP Robotics ]

The latest OK Go music video includes lots of robots.

And here’s a bit more on how it was done, mostly with arms from Universal Robots.

[ OK Go ]

Despite significant interest and advancements in humanoid robotics, most existing commercially available hardware remains high-cost, closed-source, and nontransparent within the robotics community. This lack of accessibility and customization hinders the growth of the field and the broader development of humanoid technologies. To address these challenges and promote democratization in humanoid robotics, we demonstrate Berkeley Humanoid Lite, an open-source humanoid robot designed to be accessible, customizable, and beneficial for the entire community.

[ Berkeley Humanoid Lite ]

I think this may be the first time I’ve ever seen a pedestal-mounted Atlas from Boston Dynamics.

[ NVIDIA ]

We are increasingly adopting domestic robots (Roomba, for example) that provide relief from mundane household tasks. However, these robots usually spend only a little time executing their specific task and remain idle for long periods. Our work explores this untapped potential of domestic robots in ubiquitous computing, focusing on how they can improve and support modern lifestyles.

[ University of Bath ]

Whenever I see a soft robot, I have to ask, “Okay, but how soft is it really?” And usually, there’s a pump or something hidden away off-camera somewhere. So it’s always cool to see actually soft robotic actuators, like these, which are based on phase-changing water.

[ Nature Communications ] via [ Collaborative Robotics Laboratory, University of Coimbra ]

Thanks, Pedro!

Pruning is an essential agricultural practice for orchards. Robot manipulators have been developed as an automated solution for this repetitive task, which typically requires seasonal labor with specialized skills. Our work addresses the behavior planning challenge for a robotic pruning system, which entails a multilevel planning problem in environments with complex collisions. In this article, we formulate the planning problem for a high-dimensional robotic arm in a pruning scenario, investigate the system’s intrinsic redundancies, and propose a comprehensive pruning workflow that integrates perception, modeling, and holistic planning.

[ Paper ] via [ IEEE Robotics and Automation Magazine ]

Thanks, Bram!

Watch the Waymo Driver quickly react to potential hazards and avoid collisions with other road users, making streets safer in cities where it operates.

[ Waymo ]

This video showcases some of the early testing footage of HARRI (High-speed Adaptive Robot for Robust Interactions), a next-generation proprioceptive robotic manipulator developed at the Robotics & Mechanisms Laboratory (RoMeLa) at the University of California, Los Angeles. Designed for dynamic and force-critical tasks, HARRI leverages quasi-direct-drive proprioceptive actuators combined with advanced control strategies such as impedance control and real-time model predictive control (MPC) to achieve high-speed, precise, and safe manipulation in human-centric and unstructured environments.

[ Robotics & Mechanisms Laboratory ]

Building on reinforcement learning for natural gait, we’ve upped the challenge for Adam: introducing complex terrain in training to adapt to real-world surfaces. From steep slopes to start-stop inclines, Adam handles it all with ease!

[ PNDbotics ]

ABB Robotics is serving up the future of fast food with BurgerBots—a groundbreaking new restaurant concept launched in Los Gatos, Calif. Designed to deliver perfectly cooked, made-to-order burgers every time, the automated kitchen uses ABB’s IRB 360 FlexPicker and YuMi collaborative robot to assemble meals with precision and speed, while accurately monitoring stock levels and freeing staff to focus on customer experience.

[ Burger Bots ]

Look at this little guy, such a jaunty walk!

[ Science Advances ]

General-purpose humanoid robots are expected to interact intuitively with humans, enabling seamless integration into daily life. Natural language provides the most accessible medium for this purpose. In this work, we present an end-to-end, language-directed policy for real-world humanoid whole-body control.

[ Hybrid Robotics ]

It’s debatable whether this is technically a robot, but sure, let’s go with it, because it’s pretty neat—a cable car of sorts consisting of a soft twisted ring that’s powered by infrared light.

[ North Carolina State University ]

Robert Playter, CEO of Boston Dynamics, discusses the future of robotics amid rising competition and advances in artificial intelligence.

[ Bloomberg ]

AI is at the forefront of technological advances and is also reshaping creativity, ownership, and societal interactions. In episode 7 of Penn Engineering’s Innovation & Impact podcast, host Vijay Kumar, Nemirovsky Family Dean of Penn Engineering and professor in mechanical engineering and applied mechanics, speaks with Meta’s chief AI scientist and Turing Award winner Yann LeCun about the journey of AI, how we define intelligence, and the possibilities and challenges it presents.

[ University of Pennsylvania ]



I come from dairy-farming stock. My grandfather, the original Harry Goldstein, owned a herd of dairy cows and a creamery in Louisville, Ky., that bore the family name. One fateful day in early April 1944, Harry was milking his cows when a heavy metallic part of his homemade milking contraption—likely some version of the then-popular Surge Bucket Milker—struck him in the abdomen, causing a blood clot that ultimately led to cardiac arrest and his subsequent demise a few days later, at the age of 48.

Fast forward 80 years and dairy farming is still a dangerous occupation. According to an analysis of U.S. Bureau of Labor Statistics data done by the advocacy group Farmworker Justice, the U.S. dairy industry recorded 223 injuries per 10,000 full-time workers in 2020, almost double the rate for all of private industry combined. Contact with animals tops the list of occupational hazards for dairy workers, followed by slips, trips, and falls. Other significant risks include contact with objects or equipment, overexertion, and exposure to toxic substances. Every year, a few dozen dairy workers in the United States meet a fate similar to my grandfather’s, with 31 reported deadly accidents on dairy farms in 2021.

As Senior Editor Evan Ackerman notes in “Robots for Cows (and Their Humans)”, traditional dairy farming is very labor-intensive. Cows need to be milked at least twice per day to prevent discomfort. Conventional milking facilities are engineered for human efficiency, with systems like rotating carousels that bring the cows to the dairy workers.

The robotic systems that Netherlands-based Lely has been developing since the early 1990s are much more about doing things the bovine way. That includes letting the cows choose when to visit the milking robot, resulting in a happier herd and up to 10 percent more milk production.

Turns out that what’s good for the cows might be good for the humans, too. Another Lely bot deals with feeding, while yet another mops up the manure, the proximate cause of much of the slipping and sliding that can result in injuries. The robots tend to reset the cow–human relationship—it becomes less adversarial because the humans aren’t always there bossing the cows around.

Farmer well-being is also enhanced because the humans don’t have to be around to tempt fate, and they can spend time doing other things, freed up by the robot laborers. In fact, when Ackerman visited Lely’s demonstration farm in Schipluiden, Netherlands, to see the Lely robots in action, he says, “The original plan was for me to interview the farmer, and he was just not there at all for the entire visit while the cows were getting milked by the robots. In retrospect, that might have been the most effective way he could communicate how these robots are changing work for dairy farmers.”

The farmer’s absence also speaks volumes about how far dairy technology has evolved since my grandfather’s day. Harry Goldstein’s life was cut short by the very equipment he hacked to make his own work easier. Today’s dairy-farming innovations aren’t just improving efficiency—they’re keeping humans out of harm’s way entirely. In the dairy farms of the future, the most valuable safety features might simply be a barn resounding with the whirring of robots and moos of contentment.



Meet FREDERICK Mark 2, the Friendly Robot for Education, Discussion and Entertainment, the Retrieval of Information, and the Collation of Knowledge, better known as Freddy II. This remarkable robot could put together a simple model car from an assortment of parts dumped in its workspace. Its video-camera eyes and pincer hand identified and sorted the individual pieces before assembling the desired end product. But onlookers had to be patient. Assembly took about 16 hours, and that was after a day or two of “learning” and programming.

Freddy II was completed in 1973 as one of a series of research robots developed by Donald Michie and his team at the University of Edinburgh during the 1960s and ’70s. The robots became the focus of an intense debate over the future of AI in the United Kingdom. Michie eventually lost, his funding was gutted, and the ensuing AI winter set back U.K. research in the field for a decade.

Why were the Freddy I and II robots built?

In 1967, Donald Michie, along with Richard Gregory and Hugh Christopher Longuet-Higgins, founded the Department of Machine Intelligence and Perception at the University of Edinburgh with the near-term goal of developing a semiautomated robot and the longer-term vision of programming “integrated cognitive systems,” or what other people might call intelligent robots. At the time, the U.S. Defense Advanced Research Projects Agency and Japan’s Computer Usage Development Institute were both considering plans to create fully automated factories within a decade. The team at Edinburgh thought they should get in on the action too.

Two years later, Stephen Salter and Harry G. Barrow joined Michie and got to work on Freddy I. Salter devised the hardware while Barrow designed and wrote the software and computer interfacing. The resulting simple robot worked, but it was crude. The AI researcher Jean Hayes (who would marry Michie in 1971) referred to this iteration of Freddy as an “arthritic Lady of Shalott.”

Freddy I consisted of a robotic arm, a camera, a set of wheels, and some bumpers to detect obstacles. Instead of roaming freely, it remained stationary while a small platform moved beneath it. Barrow developed an adaptable program that enabled Freddy I to recognize irregular objects. In 1969, Salter and Barrow published their results in Machine Intelligence as “Design of Low-Cost Equipment for Cognitive Robot Research,” which included suggestions for the next iteration of the robot.

Freddy I, completed in 1969, could recognize objects placed in front of it—in this case, a teacup. University of Edinburgh

More people joined the team to build Freddy Mark 1.5, which they finished in May 1971. Freddy 1.5 was a true robotic hand-eye system. The hand consisted of two vertical, parallel plates that could grip an object and lift it off the platform. The eyes were two cameras: one looking directly down on the platform, and the other mounted obliquely on the truss that suspended the hand over the platform. Freddy 1.5’s world was a 2-meter by 2-meter square platform that moved in an x-y plane.

Freddy 1.5 quickly morphed into Freddy II as the team continued to grow. Improvements included force transducers added to the “wrist” that could deduce the strength of the grip, the weight of the object held, and whether it had collided with an object. But what really set Freddy II apart was its versatile assembly program: The robot could be taught to recognize the shapes of various parts, and then after a day or two of programming, it could assemble simple models. The various steps can be seen in this extended video, narrated by Barrow:

The Lighthill Report Takes Down Freddy the Robot

And then what happened? So much. But before I get into all that, let me just say that rarely do I, as a historian, have the luxury of having my subjects clearly articulate the aims of their projects, imagine the future, and then, years later, reflect on their experiences. As a cherry on top of this historian’s delight, the topic at hand—artificial intelligence—also happens to be of current interest to pretty much everyone.

As with many fascinating histories of technology, events turn on a healthy dose of professional bickering. In this case, the disputants were Michie and the applied mathematician James Lighthill, who had drastically different ideas about the direction of robotics research. Lighthill favored applied research, while Michie was more interested in the theoretical and experimental possibilities. Their fight escalated quickly, became public with a televised debate on the BBC, and concluded with the demise of an entire research field in Britain.

A damning report in 1973 by applied mathematician James Lighthill [left] resulted in funding being pulled from the AI and robotics program led by Donald Michie [right]. Left: Chronicle/Alamy; Right: University of Edinburgh

It all started in September 1971, when the British Science Research Council, which distributed public funds for scientific research, commissioned Lighthill to survey the state of academic research in artificial intelligence. The SRC was finding it difficult to make informed funding decisions in AI, given the field’s complexity. It suspected that some AI researchers’ interests were too narrowly focused, while others might be outright charlatans. Lighthill was called in to give the SRC a road map.

No intellectual slouch, Lighthill was the Lucasian Professor of Mathematics at the University of Cambridge, a position also held by Isaac Newton, Charles Babbage, and Stephen Hawking. Lighthill solicited input from scholars in the field and completed his report in March 1972. Officially titled “Artificial Intelligence: A General Survey,” but informally called the Lighthill Report, it divided AI into three broad categories: A, for advanced automation; B, for building robots, but also bridge activities between categories A and C; and C, for computer-based central nervous system research. Lighthill acknowledged some progress in categories A and C, as well as a few disappointments.

Lighthill viewed category B, though, as a complete failure. “Progress in category B has been even slower and more discouraging,” he wrote, “tending to sap confidence in whether the field of research called AI has any true coherence.” For good measure, he added, “AI not only fails to take the first fence but ignores the rest of the steeplechase altogether.” So very British.

Lighthill concluded his report with his view of the next 25 years in AI. He predicted a “fission of the field of AI research,” with some tempered optimism for achievement in categories A and C but a valley of continued failures in category B. Success would come in fields with clear applications, he argued, but basic research was a lost cause.

The Science Research Council published Lighthill’s report the following year, with responses from N. Stuart Sutherland of the University of Sussex and Roger M. Needham of the University of Cambridge, as well as Michie and his colleague Longuet-Higgins.

Sutherland sought to relabel category B as “basic research in AI” and to have the SRC increase funding for it. Needham mostly supported Lighthill’s conclusions and called for the elimination of the term AI—“a rather pernicious label to attach to a very mixed bunch of activities, and one could argue that the sooner we forget it the better.”

Longuet-Higgins focused on his own area of interest, cognitive science, and ended with an ominous warning that any spin-off of advanced automation would be “more likely to inflict multiple injuries on human society,” but he didn’t explain what those might be.

Michie, as the United Kingdom’s academic leader in robots and machine intelligence, understandably saw the Lighthill Report as a direct attack on his research agenda. With his funding at stake, he provided the most critical response, questioning the very foundation of the survey: Did Lighthill talk with any international experts? How did he overcome his own biases? Did he have any sources and references that others could check? He ended with a request for more funding—specifically the purchase of a DEC System 10 (also known as the PDP-10) mainframe computer. According to Michie, if his plan were followed, Britain would be internationally competitive in AI by the end of the decade.

After Michie’s funding was cut, the many researchers affiliated with his bustling lab lost their jobs. University of Edinburgh

This whole affair might have remained an academic dispute, but then the BBC decided to include a debate between Lighthill and a panel of experts as part of its “Controversy” TV series. “Controversy” was an experiment to engage the public in science. On 9 May 1973, an interested but nonspecialist audience filled the auditorium at the Royal Institution in London to hear the debate.

Lighthill started with a review of his report, explaining the differences he saw between automation and what he called “the mirage” of general-purpose robots. Michie responded with a short film of Freddy II assembling a model, explaining how the robot processes information. Michie argued that AI is a subject with its own purposes, its own criteria, and its own professional standards.

After a brief back and forth between Lighthill and Michie, the show’s host turned to the other panelists: John McCarthy, a professor of computer science at Stanford University, and Richard Gregory, a professor in the department of anatomy at the University of Bristol who had been Michie’s colleague at Edinburgh. McCarthy, who coined the term artificial intelligence in 1955, supported Michie’s position that AI should be its own area of research, not simply a bridge between automation and a robot that mimics a human brain. Gregory described how the work of Michie and McCarthy had influenced the field of psychology.

You can watch the debate or read a transcript.

A Look Back at the Lighthill Report

Despite international support from the AI community, though, the SRC sided with Lighthill and gutted funding for AI and robotics; Michie had lost. Michie’s bustling lab went from being an international center of research to just Michie, a technician, and an administrative assistant. The loss ushered in the first British AI winter, with the United Kingdom making little progress in the field for a decade.

For his part, Michie pivoted and recovered. He decommissioned Freddy II in 1980, at which point it moved to the Royal Museum of Scotland (now the National Museum of Scotland), and he replaced it with a Unimation PUMA robot.

In 1983, Michie founded the Turing Institute in Glasgow, an AI lab that worked with industry on both basic and applied research. The year before, he had written Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach). Michie intended it as intellectual musings that he hoped scientists would read, perhaps on the weekend, to help them get beyond the pursuits of the workweek. The book is wide-ranging, covering his three decades of work.

In the introduction to the chapters covering Freddy and the aftermath of the Lighthill report, Michie wrote, perhaps with an eye toward history:

“Work of excellence by talented young people was stigmatised as bad science and the experiment killed in mid-trajectory. This destruction of a co-operative human mechanism and of the careful craft of many hands is elsewhere described as a mishap. But to speak plainly, it was an outrage. In some later time when the values and methods of science have further expanded, and those adversary politics have contracted, it will be seen as such.”

History has indeed rendered judgment on the debate and the Lighthill Report. In 2019, for example, computer scientist Maarten van Emden, a colleague of Michie’s, reflected on the demise of the Freddy project with these choice words for Lighthill: “a pompous idiot who lent himself to produce a flaky report to serve as a blatantly inadequate cover for a hatchet job.”

And in a March 2024 post on GitHub, the blockchain entrepreneur Jeffrey Emanuel thoughtfully dissected Lighthill’s comments and the debate itself. Of Lighthill, he wrote, “I think we can all learn a very valuable lesson from this episode about the dangers of overconfidence and the importance of keeping an open mind. The fact that such a brilliant and learned person could be so confidently wrong about something so important should give us pause.”

Arguably, both Lighthill and Michie correctly predicted certain aspects of the AI future while failing to anticipate others. On the surface, the report and the debate could be described as simply about funding. But it was also more fundamentally about the role of academic research in shaping science and engineering and, by extension, society. Ideally, universities can support both applied research and more theoretical work. When funds are limited, though, choices are made. Lighthill chose applied automation as the future, leaving research in AI and machine intelligence in the cold.

It helps to take the long view. Over the decades, AI research has cycled through several periods of spring and winter, boom and bust. We’re currently in another AI boom. Is this time different? No one can be certain what lies just over the horizon, of course. That very uncertainty is, I think, the best argument for supporting people to experiment and conduct research into fundamental questions, so that they may help all of us to dream up the next big thing.

Part of a continuing series looking at historical artifacts that embrace the boundless potential of technology.

An abridged version of this article appears in the May 2025 print issue as “This Robot Was the Fall Guy for British AI.”

References

Donald Michie’s lab regularly published articles on the group’s progress, especially in Machine Intelligence, a journal founded by Michie.

The Lighthill Report and recordings of the debate are both available in their entirety online—primary sources that capture the intensity of the moment.

In 2009, a group of alumni from Michie’s Edinburgh lab, including Harry Barrow and Pat Fothergill (formerly Ambler), created a website to share their memories of working on Freddy. The site offers great firsthand accounts of the development of the robot. Unfortunately for the historian, they didn’t explore the lasting effects of the experience. A decade later, though, Maarten van Emden did, in his 2019 article “Reflecting Back on the Lighthill Affair,” in the IEEE Annals of the History of Computing.

Beyond his academic articles, Michie was a prolific author. Two collections of essays I found particularly useful are On Machine Intelligence (John Wiley & Sons, 1974) and Machine Intelligence and Related Topics: An Information Scientist’s Weekend Book (Gordon and Breach, 1982).

Jon Agar’s 2020 article “What Is Science for? The Lighthill Report on Artificial Intelligence Reinterpreted” and Jeffrey Emanuel’s GitHub post offer historical interpretations on this mostly forgotten blip in the history of robotics and artificial intelligence.


