Feed aggregator



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

At the FZI, it’s not just work for our robots; they join our festivities, too. Our shy robot Spot stumbled into this year’s FZI Winter Market …, a cheerful event for robots and humans alike. Will he find his place? We certainly hope so, because Feuerzangenbowle tastes much better after clinking glasses with your hot-oil-drinking friends.

[ FZI ]

Thanks, Georg!

The Fraunhofer IOSB Autonomous Robotic Systems Research Group wishes you a Merry Christmas filled with joy, peace, and robotic wonders!

[ Fraunhofer IOSB ]

Thanks, Janko!

There’s some thrilling action in this Christmas video from the PUT Mobile Robotics Laboratory, and the trick for putting the lights on the tree is particularly clever. Enjoy!

[ PUT MRL ]

Thanks, Dominik!

The Norlab wishes you a Merry Christmas!

[ Northern Robotics Laboratory ]

The Learning Systems and Robotics Lab has made a couple of robot holiday videos based on the research that they’re doing:

[ Crowd Navigation ]


[ Learning with Contacts ]

Thanks, Sepehr!

Robots on a gift mission: Christmas greetings from the DFKI Robotics Innovation Center!

[ DFKI ]

Happy Holidays from Clearpath Robotics! Our workshop has been bustling lately with lots of exciting projects and integrations just in time for the holidays! The TurtleBot 4 elves helped load up the sleigh with plenty of presents to go around. Rudolph the Husky A300 made the trek through the snow so our Ridgeback friend with a manipulator arm and gripper could receive its gift.

[ Clearpath Robotics ]

2024 has been an eventful year for us at PAL Robotics, filled with milestones and memories. As the festive season approaches, we want to take a moment to say a heartfelt THANK YOU for being part of our journey!

[ PAL Robotics ]

Thanks, Rugilė!

In Santa’s shop, so bright and neat,
A robot marched on metal feet.
With tinsel arms and bolts so tight,
It trimmed the tree all through the night.
It hummed a carol, beeped with cheer,
“Processing joy—it’s Christmas here!”
But when it tried to dance with grace,
It tangled lights around its face.
“Error detected!” it spun around,
Then tripped and tumbled to the ground.
The elves all laughed, “You’ve done your part—
A clumsy bot, but with a heart!”

The ArtiMinds team would like to thank all partners and customers for an exciting 2024. We wish you and your families a Merry Christmas, joyful holidays and a Happy New Year - stay healthy.

[ ArtiMinds ]

Thanks to FANUC CRX collaborative robots, Santa and his elves can enjoy the holiday season knowing the work is getting done for the big night.

[ FANUC ]

Perhaps not technically a holiday video, until you consider how all that stuff you ordered online is actually getting to you.

[ Agility Robotics ]

Happy Holidays from Quanser, our best wishes for a wonderful holiday season and a happy 2025!

[ Quanser ]

Season’s Greetings from the team at Kawasaki Robotics USA! This season, we’re building blocks of memories filled with endless joy, and assembling our good wishes for a happy, healthy, prosperous new year. May the upcoming year be filled with opportunities and successes. From our team to yours, we hope you have a wonderful holiday season surrounded by loved ones and filled with joy and laughter.

[ Kawasaki Robotics ]

The robotics students at Queen’s University’s Ingenuity Labs Research Institute put together a 4K Holiday Robotics Lab Fireplace video, and unlike most fireplace videos, stuff actually happens in this one.

[ Ingenuity Labs ]

Thanks, Joshua!





This is a sponsored article brought to you by Amazon.

Innovation often begins as a spark of an idea—a simple “what if” that grows into something transformative. But turning that spark into a fully realized solution requires more than just ingenuity. It requires resources, collaboration, and a relentless drive to bridge the gap between concept and execution. At Amazon, these ingredients come together to create breakthroughs that not only solve today’s challenges but set the stage for the future.

“Innovation doesn’t just happen because you have a good idea,” said Valerie Samzun, a leader in Amazon’s Fulfillment Technologies and Robotics (FTR) division. “It happens because you have the right team, the right resources, and the right environment to bring that idea to life.”

This philosophy underpins Amazon’s approach to robotics, exemplified by Robin, a groundbreaking robotic system designed to tackle some of the most complex logistical challenges in the world. Robin’s journey, from its inception to deployment in fulfillment centers worldwide, offers a compelling look at how Amazon fosters innovation at scale.

Building for Real-World Complexity

Amazon’s fulfillment centers handle millions of items daily, each destined for a customer expecting precision and speed. The scale and complexity of these operations are unparalleled. Items vary widely in size, shape, and weight, creating an unpredictable and dynamic environment where traditional robotic systems often falter.

“Robots are great at consistency,” explained Jason Messinger, a robotics senior manager at Amazon. “But what happens when every task is different? That’s the reality of our fulfillment centers. Robin had to be more than precise—it had to be adaptable.”

Robin was designed to pick and sort items with speed and accuracy, but its capabilities extend far beyond basic functionality. The system integrates cutting-edge technologies in artificial intelligence, computer vision, and mechanical engineering to learn from its environment and improve over time. This ability to adapt was crucial for operating in fulfillment centers, where no two tasks are ever quite the same.

“When we designed Robin, we weren’t building for perfection in a lab,” Messinger said. “We were building for the chaos of the real world. That’s what makes it such an exciting challenge.”

The Collaborative Process of Innovation

Robin’s development was a collaborative effort involving teams of roboticists, data scientists, mechanical engineers, and operations specialists. This multidisciplinary approach allowed the team to address every aspect of Robin’s performance, from the algorithms powering its decision-making to the durability of its mechanical components.

“Robin is more than a robot. It’s a learning system. Every pick makes it smarter, faster, and better.” —Valerie Samzun, Amazon

“At Amazon, you don’t work in silos,” both Messinger and Samzun noted. Samzun continued, “Every problem is tackled from multiple angles, with input from people who understand the technology, the operations, and the end user. That’s how you create something that truly works.”

This collaboration extended to testing and deployment. Robin was not confined to a controlled environment but was tested in live settings that replicated the conditions of Amazon’s fulfillment centers. Engineers could see Robin in action, gather real-time data, and refine the system iteratively.

“Every deployment teaches us something,” Messinger said. “Robin didn’t just evolve on paper—it evolved in the field. That’s the power of having the resources and infrastructure to test at scale.”

Why Engineers Choose Amazon

For many of the engineers and researchers involved in Robin’s development, the opportunity to work at Amazon represented a significant shift from their previous experiences. Unlike academic settings, where projects often remain theoretical, or smaller companies, where resources may be limited, Amazon offers the scale, speed, and impact that few other organizations can match.

Learn more about becoming part of Amazon’s Team →

“One of the things that drew me to Amazon was the chance to see my work in action,” said Megan Mitchell, who leads a team of manipulation hardware and systems engineers for Amazon Robotics. “Working in R&D, I spent years exploring novel concepts, but usually didn’t get to see those translate to the real world. At Amazon, I get to take ideas to the field in a matter of months.”

This sense of purpose is a recurring theme among Amazon’s engineers. The company’s focus on creating solutions that have a tangible impact—on operations, customers, and the industry as a whole—resonates with those who want their work to matter.

“At Amazon, you’re not just building technology—you’re building the future,” Mitchell said. “That’s an incredibly powerful motivator. You know that what you’re doing isn’t just theoretical—it’s making a difference.”

In addition to the impact of their work, engineers at Amazon benefit from access to unparalleled resources. From state-of-the-art facilities to vast amounts of real-world data, Amazon provides the tools necessary to tackle even the most complex challenges.

“If you need something to make the project better, Amazon makes it happen. That’s a game-changer,” said Messinger.

The culture of collaboration and iteration is another draw. Engineers at Amazon are encouraged to take risks, experiment, and learn from failure. This iterative approach not only accelerates innovation but also creates an environment where creativity thrives.

During its development, Robin was not confined to a controlled environment but was tested in live settings that replicated the conditions of Amazon’s fulfillment centers. Engineers could see Robin in action, gather real-time data, and refine the system iteratively. Credit: Amazon

Robin’s Impact on Operations and Safety

Since its deployment, Robin has revolutionized operations in Amazon’s fulfillment centers. The robot has performed billions of picks, demonstrating reliability, adaptability, and efficiency. Each item it handles provides valuable data, allowing the system to continuously improve.

“Robin is more than a robot,” Samzun said. “It’s a learning system. Every pick makes it smarter, faster, and better.”

Robin’s impact extends beyond efficiency. By taking over repetitive and physically demanding tasks, the system has improved safety for Amazon’s associates. This has been a key priority for Amazon, which is committed to creating a safe and supportive environment for its workforce.

“When Robin picks an item, it’s not just about speed or accuracy,” Samzun explained. “It’s about making the workplace safer and the workflow smoother. That’s a win for everyone.”

A Broader Vision for Robotics

Robin’s success is just the beginning. The lessons learned from its development are shaping the future of robotics at Amazon, paving the way for even more advanced systems. These innovations will not only enhance operations but also set new standards for what robotics can achieve.

“At Amazon, you feel like you’re a part of something bigger. You’re not just solving problems—you’re creating solutions that matter.” —Jason Messinger, Amazon

“This isn’t just about one robot,” Mitchell said. “It’s about building a platform for continuous innovation. Robin showed us what’s possible, and now we’re looking at how to go even further.”

For the engineers and researchers involved, Robin’s journey has been transformative. It has provided an opportunity to work on cutting-edge technology, solve complex problems, and make a meaningful impact—all while being part of a team that values creativity and collaboration.

“At Amazon, you feel like you’re a part of something bigger,” said Messinger. “You’re not just solving problems—you’re creating solutions that matter.”

The Future of Innovation

Robin’s story is a testament to the power of ambition, collaboration, and execution. It demonstrates that with the right resources and mindset, even the most complex challenges can be overcome. But more than that, it highlights the unique role Amazon plays in shaping the future of robotics and logistics.

“Innovation isn’t just about having a big idea,” Samzun said. “It’s about turning that idea into something real, something that works, and something that makes a difference. That’s what Robin represents, and that’s what we do every day at Amazon.”

Robin isn’t just a robot—it’s a symbol of what’s possible when brilliant minds come together to solve real-world problems. As Amazon continues to push the boundaries of what robotics can achieve, Robin’s legacy will be felt in every pick, every delivery, and every step toward a more efficient and connected future.

Learn more about becoming part of Amazon’s Team.






The Modified Agile for Hardware Development (MAHD) Framework is the ultimate solution for hardware teams seeking the benefits of Agile without the pitfalls of applying software-centric methods. Traditional development approaches, like waterfall, often result in delayed timelines, high risks, and misaligned priorities. Meanwhile, software-based Agile frameworks fail to account for hardware's complexity. MAHD resolves these challenges with a tailored process that blends Agile principles with hardware-specific strategies.

Central to MAHD is its On-ramp process, a five-step method designed to kickstart projects with clarity and direction. Teams define User Stories to capture customer needs, outline Product Attributes to guide development, and use the Focus Matrix to link solutions to outcomes. Iterative IPAC cycles, a hallmark of the MAHD Framework, ensure risks are addressed early and progress is continuously tracked. These cycles emphasize integration, prototyping, alignment, and customer validation, providing structure without sacrificing flexibility.

MAHD has been successfully implemented across diverse industries, from medical devices to industrial automation, delivering products up to 50% faster while reducing risk. For hardware teams ready to adopt Agile methods that work for their unique challenges, this ebook provides the roadmap to success.
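
The ebook blurb describes the focus matrix only at a high level, so here is a minimal, hypothetical Python sketch of the underlying idea: a structure that links product attributes to the user stories they serve and surfaces where the next iteration should focus. The class and field names (and the weighting scheme) are invented for illustration; they are not part of the MAHD Framework beyond the terms “user story,” “product attribute,” and “focus matrix.”

```python
from dataclasses import dataclass, field

@dataclass
class UserStory:
    """A customer need, phrased from the user's point of view."""
    story_id: str
    description: str

@dataclass
class ProductAttribute:
    """A measurable property of the product that development can target."""
    name: str
    target: str

@dataclass
class FocusMatrix:
    """Links product attributes to the user stories they serve, with a weight
    indicating how strongly each attribute drives each story's outcome."""
    links: dict = field(default_factory=dict)  # (attribute name, story id) -> weight

    def link(self, attribute: ProductAttribute, story: UserStory, weight: int) -> None:
        self.links[(attribute.name, story.story_id)] = weight

    def priorities(self) -> dict:
        """Sum weights per attribute to see where an iteration should focus."""
        totals = {}
        for (attr_name, _), weight in self.links.items():
            totals[attr_name] = totals.get(attr_name, 0) + weight
        return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

# Purely illustrative example: a small drone project.
battery = ProductAttribute("flight time", ">= 30 min")
noise = ProductAttribute("acoustic noise", "<= 60 dB")
inspect = UserStory("US-1", "As an inspector, I can survey a bridge in one flight.")
urban = UserStory("US-2", "As an operator, I can fly near homes without complaints.")

matrix = FocusMatrix()
matrix.link(battery, inspect, 9)
matrix.link(noise, urban, 9)
matrix.link(battery, urban, 3)
print(matrix.priorities())  # {'flight time': 12, 'acoustic noise': 9}
```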






Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

NASA’s Mars Chopper concept, shown in a design software rendering, is a more capable proposed follow-on to the agency’s Ingenuity Mars Helicopter, which arrived at the Red Planet in the belly of the Perseverance rover in February 2021. Chopper would be about the size of an SUV, with six rotors, each with six blades. It could be used to carry science payloads as large as 11 pounds (5 kilograms) distances of up to 1.9 miles (3 kilometers) each Martian day (or sol). Scientists could use Chopper to study large swaths of terrain in detail, quickly – including areas where rovers cannot safely travel.

We wrote an article about an earlier concept version of this thing a few years back if you’d like more detail about it.

[ NASA ]

Sanctuary AI announces its latest breakthrough with hydraulic actuation and precise in-hand manipulation, opening up a wide range of industrial and high-value work tasks. Hydraulics have significantly more power density than electric actuators in terms of force and velocity. Sanctuary has invented miniaturized valves that are 50x faster and 6x cheaper than off-the-shelf hydraulic valves. This novel approach to actuation results in extremely low power consumption, unmatched cycle life, and controllability that can fit within the size constraints of a human-sized hand and forearm.

[ Sanctuary AI ]

Clone’s Torso 2 is the most advanced android ever created with an actuated lumbar spine and all the corresponding abdominal muscles. Torso 2 dons a white transparent skin that encloses 910 muscle fibers animating its 164 degrees of freedom and includes 182 sensors for feedback control. These Torsos use pneumatic actuation with off-the-shelf valves that are noisy from the air exhaust. Our biped brings back our hydraulic design with custom liquid valves for a silent android. Legs are coming very soon!

[ Clone Robotics ]

Suzumori Endo Lab, Science Tokyo has developed a superman suit driven by hydraulic artificial muscles.

[ Suzumori Endo Lab ]

We generate physically correct video sequences to train a visual parkour policy for a quadruped robot that has a single RGB camera and no depth sensors. The robot generalizes to diverse, real-world scenes despite having never seen real-world data.

[ LucidSim ]

Seoul National University researchers proposed a gripper capable of moving multiple objects together to enhance the efficiency of pick-and-place processes, inspired by humans’ multi-object grasping strategy. The gripper can not only transfer multiple objects simultaneously but also place them at desired locations, making it applicable in unstructured environments.

[ Science Robotics ]

We present a bio-inspired quadruped locomotion framework that exhibits exemplary adaptability, capable of zero-shot deployment in complex environments and stability recovery on unstable terrain without the use of extra-perceptive sensors. Through its development we also shed light on the intricacies of animal locomotion strategies, in turn supporting the notion that findings within biomechanics and robotics research can mutually drive progress in both fields.

[ Paper authors from University of Leeds and University College London ]

Thanks, Chengxu!

Happy 60th birthday to MIT CSAIL!

[ MIT Computer Science and Artificial Intelligence Laboratory ]

Yup, humanoid progress can move quickly when you put your mind to it.

[ MagicLab ]

The Sung Robotics Lab at UPenn is interested in advancing the state of the art in computational methods for robot design and deployment, with a particular focus on soft and compliant robots. By combining methods in computational geometry with practical engineering design, we develop theory and systems for making robot design and fabrication intuitive and accessible to the non-engineer.

[ Sung Robotics Lab ]

From now on I will open doors like the robot in this video.

[ Humanoids 2024 ]

Travel along a steep slope up to the rim of Mars’ Jezero Crater in this panoramic image captured by NASA’s Perseverance just days before the rover reached the top. The scene shows just how steep some of the slopes leading to the crater rim can be.

[ NASA ]

Our time is limited when it comes to flying drones, but we haven’t been surpassed by AI yet.

[ Team BlackSheep ]

Daniele Pucci from IIT discusses iCub and ergoCub as part of the industrial panel at Humanoids 2024.

[ ergoCub ]






The ability to detect a nearby presence without seeing or touching it may sound fantastical—but it’s a real ability that some creatures have. A family of African fish known as Mormyrids are weakly electric, and have special organs that can locate a nearby prey, whether it’s in murky water or even hiding in the mud. Now scientists have created an artificial sensor system inspired by nature’s original design. The development could find use one day in robotics and smart prosthetics to locate items without relying on machine vision.

“We developed a new strategy for 3D motion positioning by electronic skin, bio-inspired by ‘electric fish,’” says Xinge Yu, an associate professor in the Department of Biomedical Engineering at the City University of Hong Kong. The team described their sensor, which relies on capacitance to detect an object regardless of its conductivity, in a paper published on 14 November in Nature.

One layer of the sensor acts as a transmitter, generating an electrical field that extends beyond the surface of the device. Another layer acts as a receiver, able to detect both the direction and the distance to an object. This allows the sensor system to locate the object in three-dimensional space.

The e-skin sensor includes several layers, including a receiver and a transmitter. Credit: Jingkun Zhou, Jian Li et al.

The sensor electrode layers are made from a biogel that is printed on both sides of a dielectric substrate made of polydimethylsiloxane (PDMS), a silicon-based polymer that is commonly used in biomedical applications. The biogel layers receive their ability to transmit and receive electrical signals from a pattern of microchannels on their surface. The end result is a sensor that is thin, flexible, soft, stretchable, and transparent. These features make it suitable for a wide range of applications where an object-sensing system needs to conform to an irregular surface, like the human body.

The capacitive field around the sensor is disrupted when an object comes within range, which in turn can be detected by the receiver. The magnitude of the change in signal indicates the distance to the target. By using multiple sensors in an array, the system can determine the position of the target in three dimensions. The system created in this study is able to detect objects up to 10 centimeters away in air; the range increases to as far as 1 meter underwater.
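
The article describes the sensing principle rather than a specific algorithm, so the following Python sketch is only a rough, hypothetical reconstruction of how an array could turn signal changes into a 3D position. It assumes a calibrated inverse-power-law mapping from capacitance change to distance (the constants A and N are placeholders, not values from the study) and then solves for the target location by least squares.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed calibration: signal change falls off with distance as a power law.
# These constants are placeholders for illustration, not values from the paper.
A, N = 1.0, 2.0

def signal_to_distance(delta_c: np.ndarray) -> np.ndarray:
    """Invert the assumed model delta_c = A / d**N to get distance (in cm)."""
    return (A / np.clip(delta_c, 1e-9, None)) ** (1.0 / N)

def locate_target(sensor_xyz: np.ndarray, delta_c: np.ndarray) -> np.ndarray:
    """Estimate the 3D target position from per-sensor signal changes.

    sensor_xyz: (k, 3) array of sensor positions in the array (cm).
    delta_c:    (k,) array of measured capacitance changes.
    """
    d = signal_to_distance(delta_c)

    def residuals(p):
        # Difference between geometric distances to candidate point p
        # and the distances implied by the measured signal changes.
        return np.linalg.norm(sensor_xyz - p, axis=1) - d

    guess = sensor_xyz.mean(axis=0) + np.array([0.0, 0.0, d.mean()])
    return least_squares(residuals, guess).x

# Toy example: four sensors in a 2 x 2 cm square, target 3 cm above the center.
sensors = np.array([[0, 0, 0], [2, 0, 0], [0, 2, 0], [2, 2, 0]], dtype=float)
true_target = np.array([1.0, 1.0, 3.0])
measured = A / np.linalg.norm(sensors - true_target, axis=1) ** N
print(locate_target(sensors, measured))  # approximately [1.0, 1.0, 3.0]
```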

Credit: Jingkun Zhou, Jian Li et al.

To be functional, the sensors also require a separate controller component that is connected via silver or copper wires. The controller provides several functions: it creates the driving signal used to activate the transmitting layers, and it uses 16-bit analog-to-digital converters to collect the signals from the receiving layers. This data is then processed by a microcontroller unit attached to the sensor array, which computes the position of the target object and sends that information via a Bluetooth Low Energy transmitter to a smartphone or other device, rather than sending the raw data to the end device for computation, which would require more energy.
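
To make that division of labor concrete (sample locally, compute locally, transmit only the result), here is a minimal, hypothetical sketch of such a controller loop in Python. The hardware-facing functions are stubs invented for illustration; they do not correspond to the actual firmware described in the paper.

```python
import time
import numpy as np

def read_adc_counts(num_channels: int = 4) -> np.ndarray:
    """Stand-in for the 16-bit ADC readout of the receiver layers.
    Here we just simulate counts; real firmware would read hardware registers."""
    return np.random.randint(0, 2**16, size=num_channels)

def estimate_position(counts: np.ndarray) -> tuple:
    """Placeholder for the on-board position estimate (see the sketch above)."""
    x, y, z = counts[:3] / 2**16  # normalized dummy values
    return float(x), float(y), float(z)

def send_ble(position: tuple) -> None:
    """Stand-in for the Bluetooth Low Energy notification to the phone."""
    print("BLE notify:", position)

def controller_loop(hz: float = 50.0, cycles: int = 5) -> None:
    """Sample, compute the position locally, and transmit only the result,
    which keeps the radio payload small and saves transmit energy."""
    period = 1.0 / hz
    for _ in range(cycles):
        counts = read_adc_counts()
        send_ble(estimate_position(counts))
        time.sleep(period)

controller_loop()
```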

Power is provided by an integrated lithium-ion battery that is recharged wirelessly via a coil of copper wire. The system is designed to consume minimal amounts of electrical power. The controller is less flexible and transparent than the sensors, but by being encapsulated in PDMS, it is both waterproof and biocompatible.

The system works best when detecting objects about 8 millimeters in diameter. Objects smaller than 4 mm might not be detected accurately, and the response time for sensing objects larger than 8 mm can increase significantly. This could currently limit practical uses for the system to things like tracking finger movements for human-machine interfaces. Future development would be needed to detect larger targets.

The system can detect objects behind a cloth or paper barrier, but other environmental factors can degrade performance. Changes in air humidity and electromagnetic interference from people or other devices within 40 cm of the sensor can degrade accuracy.

The researchers hope that this sensor could one day open up a new range of wearable sensors, including devices for human-machine interfaces and thin and flexible e-skin.






When Sony’s robot dog, Aibo, was first launched in 1999, it was hailed as revolutionary and the first of its kind, promising to usher in a new industry of intelligent mobile machines for the home. But its success was far from certain. Legged robots were still in their infancy, and the idea of making an interactive walking robot for the consumer market was extraordinarily ambitious. Beyond the technical challenges, Sony also had to solve a problem that entertainment robots still struggle with: how to make Aibo compelling and engaging rather than simply novel.

Sony’s team made that happen. And since Aibo’s debut, the company has sold more than 170,000 of the cute little quadrupeds—a huge number considering their price of several thousand dollars each. From the start, Aibo could express a range of simulated emotions and learn through its interactions with users. Aibo was an impressive robot 25 years ago, and it’s still impressive today.

Far from Sony headquarters in Tokyo, the town of Kōta, in Aichi Prefecture, is home to the Sony factory that has manufactured and repaired Aibos since 2018. Kōta has also become the center of fandom for Aibo, since the Hummingbird Café opened in the Kōta Town Hall in 2021. The first official Aibo café in Japan, it hosts Aibo-themed events, and Aibo owners from across the country gather there to let their Aibos loose in a play area and to exchange Aibo name cards.

One patron of the Hummingbird Café is veteran Sony engineer Hideki Noma. In 1999, before Aibo was Aibo, Noma went to see his boss, Tadashi Otsuki. Otsuki had recently returned to Sony after a stint at the Japanese entertainment company Namco, and had been put in charge of a secretive new project to create an entertainment robot. But progress had stalled. There was a prototype robotic pet running around the lab, but Otsuki took a dim view of its hyperactive behavior and decided it wasn’t a product that anyone would want to buy. He envisioned something more lifelike. During their meeting, he gave Noma a surprising piece of advice: Go to Ryōan-ji, a famed Buddhist temple in Kyoto. Otsuki was telling Noma that to develop the right kind of robot for Sony, it needed Zen.

Aibo’s Mission: Make History

When the Aibo project started in 1994, personal entertainment robots seemed like a natural fit for Sony. Sony was a global leader in consumer electronics. And in the 1990s, Japan had more than half of the world’s industrial robots, dominating an industry led by manufacturers like Fanuc and Yaskawa Electric. Robots for the home were also being explored. In 1996, Honda showed off its P2 humanoid robot, a prototype of the groundbreaking ASIMO, which would be unveiled in 2000. Electrolux, based in Sweden, introduced a prototype of its Trilobite robotic vacuum cleaner in 1997, and at iRobot in Boston, Joe Jones was working on what would become the Roomba. It seemed as though the consumer robot was getting closer to reality. Being the first to market was the perfect opportunity for an ambitious global company like Sony.

Aibo was the idea of Sony engineer Toshitada Doi (on left), pictured in 1999 with an Aibo ERS-111. Hideki Noma (on right) holds an Aibo ERS-1000. Credits: Raphael Gaillarde/Gamma-Rapho/Getty Images (left); Timothy Hornyak (right)

Sony’s new robot project was the brainchild of engineer Toshitada Doi, co-inventor of the CD. Doi was inspired by the speed and agility of MIT roboticist Rodney Brooks’s Genghis, a six-legged insectile robot that was created to demonstrate basic autonomous walking functions. Doi, however, had a vision for an “entertainment robot with no clear role or job.” It was 1994 when his team of about 10 people began full-scale research and development on such a robot.

Hideki Noma joined Sony in 1995. Even then, he had a lifelong love of robots, including participating in robotics contests and researching humanoids in college. “I was assigned to the Sony robot research team’s entertainment robot department,” says Noma. “It had just been established and had few people. Nobody knew Sony was working on robots, and it was a secret even within the company. I wasn’t even told what I would be doing.”

Noma’s new colleagues in Sony’s robot skunk works had recently gone to Tokyo’s Akihabara electronics district and brought back boxes of circuit boards and servos. Their first creation was a six-legged walker with antenna-like sensors but more compact than Brooks’s Genghis, at roughly 22 centimeters long. It was clunky and nowhere near cute; if anything, it resembled a cockroach. “When they added the camera and other sensors, it was so heavy it couldn’t stand,” says Noma. “They realized it was going to be necessary to make everything at Sony—motors, gears, and all—or it would not work. That’s when I joined the team as the person in charge of mechatronic design.”

Noma, who is now a senior manager in Sony’s new business development division, remembers that Doi’s catchphrase was “make history.” “Just as he had done with the compact disc, he wanted us to create a robot that was not only the first of its kind, but also one that would have a big impact on the world,” Noma recalls. “He always gently encouraged us with positive feedback.”

“We also grappled with the question of what an ‘entertainment robot’ could be. It had to be something that would surprise and delight people. We didn’t have a fixed idea, and we didn’t set out to create a robot dog.”

The team did look to living creatures for inspiration, studying dog and cat locomotion. Their next prototype lost two of the six legs and gained a head, tail, and more sophisticated AI abilities that created the illusion of canine characteristics.

A mid-1998 version of the robot, nicknamed Mutant, ran on Sony’s Aperios OS, the operating system the company developed to control consumer devices. The robot had 16 degrees of freedom, a 64-bit MIPS reduced-instruction-set computer (RISC) processor, and 8 megabytes of DRAM, expandable with a PC card. It could walk on uneven surfaces and use its camera to recognize motion and color—unusual abilities for robots of the time. It could dance, shake its head, wag its tail, sit, lie down, bark, and it could even follow a colored ball around. In fact, it was a little bundle of energy.

Looks-wise, the bot had a sleek new “coat” designed by Doi’s friend Hajime Sorayama, an industrial designer and illustrator known for his silvery gynoids, including the cover art for an Aerosmith album. Sorayama gave the robot a shiny, bulbous exterior that made it undeniably cute. Noma, now the team’s product planner and software engineer, felt they were getting closer to the goal. But when he presented the prototype to Otsuki in 1999, Otsuki was unimpressed. That’s when Noma was dispatched to Ryōan-ji to figure out how to make the robot seem not just cute but somehow alive.

Seeking Zen for Aibo at the Rock Garden

Established in 1450, Ryōan-ji is a Rinzai Zen sanctuary known for its meticulously raked rock garden featuring five distinctive groups of stones. The stones invite observers to quietly contemplate the space, and perhaps even the universe, and that’s what Noma did. He realized what Doi wanted Aibo to convey: a sense of tranquility. The same concept had been incorporated into the design of what was arguably Japan’s first humanoid robot, a large, smiling automaton named Gakutensoku that was unveiled in 1928.

The rock garden at the Ryōan-ji Zen temple features carefully composed groupings of stones with unknown meaning. Bjørn Christian Tørrissen/Wikipedia

Roboticist Masahiro Mori, originator of the Uncanny Valley concept for android design, had written about the relationship between Buddhism and robots back in 1974, stating, “I believe robots have the Buddha-nature within them—that is, the potential for attaining Buddhahood.” Essentially, he believed that even nonliving things were imbued with spirituality, a concept linked to animism in Japan. If machines can be thought of as embodying tranquility and spirituality, they can be easier to relate to, like living things.

“When you make a robot, you want to show what it can do. But if it’s always performing, you’ll get bored and won’t want to live with it,” says Noma. “Just as cats and dogs need quiet time and rest, so do robots.” Noma modified the robot’s behaviors so that it would sometimes slow down and sleep. This reinforced the illusion that it was not only alive but had a will of its own. Otsuki then gave the little robot dog the green light.
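
Noma’s point, that a believable robot needs quiet states as much as showy ones, can be illustrated with a toy behavior selector. The sketch below is purely hypothetical and has no relation to Aibo’s actual software; it only shows how adding rest and sleep states with probabilistic transitions keeps a behavior loop from performing constantly.

```python
import random
import time

# Hypothetical behavior table: each state lists possible next states and weights.
# The "rest" and "sleep" states are what keep the robot from always performing.
TRANSITIONS = {
    "play":  [("play", 0.3), ("rest", 0.5), ("sleep", 0.2)],
    "rest":  [("play", 0.4), ("rest", 0.4), ("sleep", 0.2)],
    "sleep": [("sleep", 0.6), ("rest", 0.3), ("play", 0.1)],
}

def next_state(state: str) -> str:
    """Pick the next behavior at random, weighted by the table above."""
    states, weights = zip(*TRANSITIONS[state])
    return random.choices(states, weights=weights)[0]

state = "play"
for _ in range(10):
    print(state)
    state = next_state(state)
    time.sleep(0.1)  # stand-in for however long a behavior actually runs
```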

The cybernetic canine was named Aibo for “Artificial Intelligence roBOt” and aibō, which means “partner” in Japanese.

In a press release, Sony billed the machine as “an autonomous robot that acts both in response to external stimuli and according to its own judgment. ‘AIBO’ can express various emotions, grow through learning, and communicate with human beings to bring an entirely new form of entertainment into the home.” But it was a lot more than that. Its 18 degrees of freedom allowed for complex motions, and it had a color charge-coupled device (CCD) camera and sensors for touch, acceleration, angular velocity, and range finding. Aibo had the hardware and smarts to back up Sony’s claim that it could “behave like a living creature.” The fact that it couldn’t do anything practical became irrelevant.

The debut Aibo ERS-110 was priced at 250,000 yen (US $2,500, or a little over $4,700 today). A motion editor kit, which allowed users to generate original Aibo motions via their PC, sold for 50,000 yen ($450). Despite the eye-watering price tag, the first batch of 3,000 robots sold out in 20 minutes.

Noma wasn’t surprised by the instant success. “We aimed to realize a society in which people and robots can coexist, not just robots working for humans but both enjoying a relationship of trust,” Noma says. “Based on that, an entertainment robot with a sense of self could communicate with people, grow, and learn.”

Hideko Mori plays fetch with her Aibo ERS-7 in 2015, after it was returned to her from an Aibo hospital. Aibos are popular with seniors in Japan, offering interactivity and companionship without requiring the level of care of a real dog. Credit: Toshifumi Kitamura/AFP/Getty Images

Aibo as a Cultural Phenomenon

Aibo was the first consumer robot of its kind, and over the next four years, Sony released multiple versions of its popular pup across two more generations. Some customer responses were unexpected: as a pet and companion, Aibo was helping empty-nest couples rekindle their relationship, improving the lives of children with autism, and having a positive effect on users’ emotional states, according to a 2004 paper by AI specialist Masahiro Fujita, who collaborated with Doi on the early version of Aibo.

“Aibo broke new ground as a social partner. While it wasn’t a replacement for a real pet, it introduced a completely new category of companion robots designed to live with humans,” says Minoru Asada, professor of adaptive machine systems at Osaka University’s graduate school of engineering. “It helped foster emotional connections with a machine, influencing how people viewed robots—not just as tools but as entities capable of forming social bonds. This shift in perception opened the door to broader discussions about human-robot interaction, companionship, and even emotional engagement with artificial beings.”

Building a Custom Robot
  • To create Aibo, Noma and colleagues had to start from scratch—there were no standard CPUs, cameras, or operating systems for consumer robots. They had to create their own, and the result was the Sony Open-R architecture, an unusual approach to robotics that enabled the building of custom machines.
  • Announced in 1998, a year before Aibo’s release, Open-R allowed users to swap out modular hardware components, such as legs or wheels, to adapt a robot for different purposes. High-speed serial buses transmitted data embedded in each module, such as function and position, to the robot’s CPU, which would select the appropriate control signal for the new module. This meant the machine could still use the same motion-control software with the new components. The software relied on plug-and-play prerecorded memory cards, so that the behavior of an Open-R robot could instantly change, say, from being a friendly pet to a challenging opponent in a game. A swap of memory cards could also give the robot image- or sound-recognition abilities. (A brief illustrative sketch of this plug-and-play idea appears just after this sidebar.)
  • “Users could change the modular hardware and software components,” says Noma. “The idea was having the ability to add a remote-control function or swap legs for wheels if you wanted.”
  • Other improvements included different colors, touch sensors, LED faces, emotional expressions, and many more software options. There was even an Aibo that looked like a lion cub. The various models culminated in the sleek ERS-7, released in three versions from 2003 to 2005.
  • Based on Scratch, the visual programming system in the latest versions of Aibo is easy to use and lets owners with limited programming experience create their own complex programs to modify how their robot behaves.
  • The Aibo ERS-1000, unveiled in January 2018, has 22 degrees of freedom, a 64-bit quad-core CPU, and two OLED eyes. It’s more puppylike and smarter than previous models, capable of recognizing 100 faces and responding to 50 voice commands. It can even be “potty trained” and “fed” with virtual food through an app.
    T.H.
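
To make the plug-and-play idea in the sidebar a bit more concrete, here is a minimal, purely illustrative Python sketch of self-describing modules on a bus, a controller that picks a motion routine to match whatever is attached, and swappable behavior “cards.” All of the names and structures below are hypothetical; this is not Sony’s actual Open-R API.

```python
# Purely illustrative sketch of the plug-and-play idea behind Open-R:
# self-describing modules on a bus, a controller that matches its motion
# routine to whatever is attached, and swappable behavior "cards."
# All names here are hypothetical; this is not Sony's actual Open-R API.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ModuleInfo:
    kind: str      # e.g. "leg", "wheel", "head"
    position: str  # e.g. "front-left"
    dof: int       # degrees of freedom the module reports about itself

class RobotBus:
    """Stands in for the serial bus that carries each module's self-description."""
    def __init__(self) -> None:
        self._modules: List[ModuleInfo] = []

    def attach(self, module: ModuleInfo) -> None:
        self._modules.append(module)  # hot-plug a limb or sensor module

    def enumerate(self) -> List[ModuleInfo]:
        return list(self._modules)

def select_locomotion(modules: List[ModuleInfo]) -> str:
    """The 'CPU' picks a motion controller that matches what is attached."""
    kinds = {m.kind for m in modules}
    if "wheel" in kinds:
        return "differential-drive controller"
    if "leg" in kinds:
        return "quadruped gait controller"
    return "stationary controller"

# Behavior "memory cards": swapping one instantly changes the robot's role.
BEHAVIOR_CARDS: Dict[str, Callable[[], str]] = {
    "friendly-pet": lambda: "wag tail, approach user, nap occasionally",
    "game-opponent": lambda: "track ball, block shots, celebrate goals",
}

if __name__ == "__main__":
    bus = RobotBus()
    for pos in ("front-left", "front-right", "rear-left", "rear-right"):
        bus.attach(ModuleInfo(kind="leg", position=pos, dof=3))
    print(select_locomotion(bus.enumerate()))  # -> quadruped gait controller
    print(BEHAVIOR_CARDS["friendly-pet"]())    # behavior from the inserted "card"
```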

Aibo also played a crucial role in the evolution of autonomous robotics, particularly in competitions like RoboCup, notes Asada, who cofounded the robot soccer competition in the 1990s. Whereas custom-built robots were prone to hardware failures, Aibo was consistently reliable and programmable, and so it allowed competitors to focus on advancing software and AI. It became a key tool for testing algorithms in real-world environments.

By the early 2000s, however, Sony was in trouble. Apple and Samsung, which would soon lead the smartphone revolution, were steadily chipping away at Sony’s position as a consumer-electronics and digital-content powerhouse. When Howard Stringer was appointed Sony’s first non-Japanese CEO in 2005, he implemented a painful restructuring program to make the company more competitive. In 2006, he shut down the robot entertainment division, and Aibo was put to sleep.

What Sony’s executives may not have appreciated was the loyalty and fervor of Aibo buyers. In a petition to keep Aibo alive, one person wrote that the robot was “an irreplaceable family member.” Aibo owners were naming their robots, referring to them with the word ko (which usually denotes children), taking photos with them, going on trips with them, dressing them up, decorating them with ribbons, and even taking them out on “dates” with other Aibos.

For Noma, who has four Aibos at home, this passion was easy to understand.

Hideki Noma [right] poses with his son Yuto and wife Tomoko along with their Aibo friends. At right is an ERS-110 named Robbie (inspired by Isaac Asimov’s “I, Robot”), at the center is a plush Aibo named Choco, and on the left is an ERS-1000 named Murphy (inspired by the film Interstellar). Hideki Noma

“Some owners treat Aibo as a pet, and some treat it as a family member,” he says. “They celebrate its continued health and growth, observe the traditional Shichi-Go-San celebration [for children aged 3, 5, and 7] and dress their Aibos in kimonos. … This idea of robots as friends or family is particular to Japan and can be seen in anime like Astro Boy and Doraemon. It’s natural to see robots as friends we consult with and sometimes argue with.”

The Return of Aibo

With the passion of Aibo fans undiminished and the continued evolution of sensors, actuators, connectivity, and AI, Sony decided to resurrect Aibo after 12 years. Noma and other engineers returned to the team to work on the new version, the Aibo ERS-1000, which was unveiled in January 2018.

Fans of all ages were thrilled. Priced at 198,000 yen ($1,760), not including the mandatory 90,000-yen, three-year cloud subscription service, the first batch sold out in 30 minutes, and 11,111 units sold in the first three months. Since then, Sony has released additional versions with new design features, and the company has also opened up Aibo to some degree of programming, giving users access to visual programming tools and an application programming interface (API).

A quarter century after Aibo was launched, Noma is finally moving on to another job at Sony. He looks back on his 17 years developing the robot with awe. “Even though we imagined a society of humans and robots coexisting, we never dreamed Aibo could be treated as a family member to the degree that it is,” he says. “We saw this both in the earlier versions of Aibo and the latest generation. I’m deeply grateful and moved by this. My wish is that this relationship will continue for a long time.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Step into the future of factory automation with MagicBot, the cutting-edge humanoid robots from Magiclab. Recently deployed to production lines, these intelligent machines are mastering tasks like product inspections, material transport, precision assembly, barcode scanning, and inventory management.

[ Magiclab ]

Some highlights from the IEEE / RAS International Conference on Humanoid Robots - Humanoids 2024.

[ Humanoids 2024 ]

This beautiful feathered drone, PigeonBot II, comes from David Lentink’s lab at the University of Groningen in the Netherlands. It was featured in Science Robotics just last month.

[ Lentink Lab ] via [ Science ]

Thanks, David!

In this video, Stretch AI takes a language prompt of “Stretch, put the toy in basket” to control Stretch to accomplish the task.

[ Hello Robot ]

Simone Giertz, “the queen of shitty robots,” interviewed by our very own Stephen Cass.

[ IEEE Spectrum ]

We present a perceptive obstacle-avoiding controller for pedipulation, i.e. manipulation with a quadrupedal robot’s foot.

[ Pedipulation ]

Kernel Foods has revolutionized fast food by integrating KUKA robots into its kitchen operations, combining automation with human expertise for consistent and efficient meal preparation. Using the KR AGILUS robot, Kernel optimizes processes like food sequencing, oven operations, and order handling, reducing the workload for employees and enhancing customer satisfaction.

[ Kernel Foods ]

If this doesn’t impress you, skip ahead to 0:52.

[ Paper via arXiv ]

Thanks, Kento!

The cuteness. I can’t handle it.

[ Pollen ]

A group of NTNU academics has launched a new research lab, the Legged Robots for the Arctic & beyond lab, in response to interest within the NTNU student community. If you are a student with relevant interests, get in touch!

[ NTNU ]

Extend Robotics is pioneering a shift in viticulture with intelligent automation at Saffron Grange Vineyard in Essex, addressing the challenges of grape harvesting with their robotic capabilities. Our collaborative project with Queen Mary University introduces a robotic system capable of identifying ripe grapes through AI-driven visual sensors, which assess ripeness based on internal sugar levels without damaging delicate fruit. Equipped with pressure-sensitive grippers, our robots can handle grapes gently, preserving their quality and value. This precise harvesting approach could revolutionise vineyards, enabling autonomous and remote operations.

[ Extend Robotics ]

Code & Circuit, a non-profit organization based in Amesbury, MA, is a place where kids can use technology to create, collaborate, and learn! Spot is a central part of their program, where educators use the robot to get younger participants excited about STEM fields, coding, and robotics, while advanced learners have the opportunity to build applications using an industrial robot.

[ Code & Circuit ]

During the HUMANOIDS Conference, we had the chance to speak with some of the true rock stars in the world of robotics. While they could discuss robots endlessly, when asked to describe robotics today in just one word, these brilliant minds had to pause and carefully choose the perfect response.

Personally I would not have chosen “exploding.”

[ PAL Robotics ]

Lunabotics gives students at accredited institutions of higher learning an opportunity to apply the NASA systems engineering process to design and build a prototype lunar construction robot. This robot would be capable of performing the proposed operations on the lunar surface in support of future Artemis Campaign goals.

[ NASA ]

Before we get into all the other course projects from this term, here are a few free-throw attempts from ROB 550’s robotic arm lab earlier this year. Maybe good enough to walk on to the Michigan basketball team? Students in ROB 550 cover the basics of robotic sensing, reasoning, and acting in several labs over the course: here, the designs for getting the ball to the net varied greatly, from hook shots to tension-storing contraptions from downtown. These basics help them excel throughout their robotics graduate degrees and research projects.

[ University of Michigan Robotics ]

Wonder what a Robody can do? This. And more!

[ Devanthro ]

It’s very satisfying watching Dusty print its way around obstacles.

[ Dusty Robotics ]

Ryan Companies has deployed Field AI’s autonomy software on a quadruped robot in the company’s ATX Tower site in Austin, TX, to greatly improve its daily surveying and data collection processes.

[ Field AI ]

Since landing its first rover on Mars in 1997, NASA has pushed the boundaries of exploration with increasingly larger and more sophisticated robotic explorers. Each mission builds on the lessons learned from the Red Planet, leading to breakthroughs in technology and our understanding of Mars. From the microwave-sized Sojourner to the SUV-sized Perseverance—and even taking flight with the groundbreaking Ingenuity helicopter—these rovers reflect decades of innovation and the drive to answer some of science’s biggest questions. This is their evolution.

[ NASA ]

Welcome to things that are safe to do only with a drone.

[ Team BlackSheep ]



On the shores of Lake Geneva in Switzerland, École Polytechnique Fédérale de Lausanne is home to many roboticists. It’s also home to many birds, which spend the majority of their time doing bird things. With a few exceptions, those bird things aren’t actually flying: Flying is a lot of work, and many birds have figured out that they can instead just walk around on the ground, where all the food tends to be, and not tire themselves out by having to get airborne over and over again.

“Whenever I encountered crows on the EPFL campus, I would observe how they walked, hopped over or jumped on obstacles, and jumped for take-offs,” says Won Dong Shin, a doctoral student at EPFL’s Laboratory of Intelligent Systems. “What I consistently observed was that they always jumped to initiate flight, even in situations where they could have used only their wings.”

Shin is first author on a paper published today in Nature that explores both why birds jump to take off, and how that can be beneficially applied to fixed-wing drones, which otherwise need things like runways or catapults to get themselves off the ground. Shin’s RAVEN (Robotic Avian-inspired Vehicle for multiple ENvironments) drone, with its bird-inspired legs, can do jumping takeoffs just like crows do, and can use those same legs to get around on the ground pretty well, too.

The drone’s bird-inspired legs adopted some key principles of biological design like the ability to store and release energy in tendon-like springs along with some flexible toes. EPFL

Back in 2019, we wrote about a South African startup called Passerine which had a similar idea, albeit more focused on using legs to launch fixed-wing cargo drones into the air. This is an appealing capability for drones, because it means that you can take advantage of the range and endurance that you get with a fixed wing without having to resort to inefficient tricks like stapling a bunch of extra propellers to yourself to get off the ground. “The concept of incorporating jumping take-off into a fixed-wing vehicle is the common idea shared by both RAVEN and Passerine,” says Shin. “The key difference lies in their focus: Passerine concentrated on a mechanism solely for jumping, while RAVEN focused on multifunctional legs.”

Bio-inspired Design for Drones

Multifunctional legs bring RAVEN much closer to birds, and although these mechanical legs are not nearly as complex and capable as actual bird legs, adopting some key principles of biological design (like the ability to store and release energy in tendon-like springs along with some flexible toes) allows RAVEN to get around in a very bird-like way.

EPFL

Despite its name, RAVEN is approximately the size of a crow, with a wingspan of 100 centimeters and a body length of 50 cm. It can walk a meter in just under four seconds, hop over 12 cm gaps, and jump onto the top of a 26 cm obstacle. For the jumping takeoff, RAVEN’s legs propel the drone to a starting altitude of nearly half a meter, with a forward velocity of 2.2 m/s.

RAVEN’s toes are particularly interesting, especially after you see how hard the poor robot faceplants without them:

Without toes, RAVEN face-plants when it tries to walk. EPFL

“It was important to incorporate a passive elastic toe joint to enable multiple gait patterns and ensure that RAVEN could jump at the correct angle for takeoff,” Shin explains. Most bipedal robots have actuated feet that allow for direct control of foot angles, but for a robot that flies, you can’t just go adding actuators all over the place willy-nilly because they weigh too much. As it is, RAVEN’s a 620-gram drone of which a full 230 grams consists of feet and toes and actuators and whatnot.

Actuated hip and ankle joints form a simplified but still birdlike leg, while springs in the ankle and toe joints help to absorb force and store energy. EPFL

Why Add Legs to a Drone?

So the question is, is all of this extra weight and complexity of adding legs actually worth it? In one sense, it definitely is, because the robot can do things that it couldn’t do before—walking around on the ground and taking off from the ground by itself. But it turns out that RAVEN is light enough, and has a sufficiently powerful motor, that as long as it’s propped up at the right angle, it can take off from the ground without jumping at all. In other words, if you replaced the legs with a couple of popsicle sticks just to tilt the drone’s nose up, would that work just as well for the ground takeoffs?

The researchers tested this, and found that non-jumping takeoffs were crappy. The mix of high angle of attack and low takeoff speed led to very unstable flight—it worked, but barely. Jumping, on the other hand, ends up being about ten times more energy efficient overall than a standing takeoff. As the paper summarizes, “although jumping take-off requires slightly higher energy input, it is the most energy-efficient and fastest method to convert actuation energy to kinetic and potential energies for flight.” And just like birds, RAVEN can also take advantage of its legs to move on the ground in a much more energy efficient way relative to making repeated short flights.
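
For a rough sense of the numbers involved, here is a back-of-envelope sketch using the figures quoted above: a 620-gram drone leaving the ground at roughly half a meter of altitude and 2.2 m/s of forward speed. It estimates only the kinetic and potential energy the legs must impart at takeoff; it is an illustration, not the energy accounting from the Nature paper.

```python
# Back-of-envelope estimate of the mechanical energy RAVEN's legs must impart
# at takeoff, using the figures quoted in the article. Illustration only; this
# is not the energy accounting from the Nature paper.
m = 0.620  # drone mass in kilograms (620 grams)
g = 9.81   # gravitational acceleration, m/s^2
h = 0.5    # approximate altitude gained by the jump, meters
v = 2.2    # forward velocity at takeoff, m/s

potential = m * g * h       # ~3.0 joules to lift the body half a meter
kinetic = 0.5 * m * v ** 2  # ~1.5 joules of forward kinetic energy

print(f"Potential energy at takeoff: {potential:.2f} J")
print(f"Kinetic energy at takeoff:   {kinetic:.2f} J")
print(f"Total mechanical energy:     {potential + kinetic:.2f} J")
```

That works out to roughly 4.5 joules of mechanical energy delivered in a single, very short leg-extension stroke, which helps explain why the spring-loaded legs and toes are worth their weight.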

Won Dong Shin holds the RAVEN drone. EPFL

Can This Design Scale Up to Larger Fixed-Wing Drones?

Birds use their legs for all kinds of stuff besides walking and hopping and jumping, of course, and Won Dong Shin hopes that RAVEN may be able to do more with its legs, too. The obvious one is using legs for landing: “Birds use their legs to decelerate and reduce impact, and this same principle could be applied to RAVEN’s legs,” Shin says, although the drone would need a perception system that it doesn’t yet have to plan things out. There’s also swimming, perching, and snatching, all of which would require a new foot design.

We also asked Shin about what it would take to scale this design up, to perhaps carry a useful payload at some point. Shin points out that beyond a certain size, birds are no longer able to do jumping takeoffs, and either have to jump off something higher up or find themselves a runway. In fact, some birds will go to astonishing lengths not to have to do jumping takeoffs, as best human of all time David Attenborough explains:

BBC

Shin points out that it’s usually easier to scale engineered systems than biological ones, and he seems optimistic that legs for jumping takeoffs will be viable on larger fixed-wing drones that could be used for delivery. A vision system that could be used for both obstacle avoidance and landing is in the works, as are wings that can fold to allow the drone to pass through narrow gaps. Ultimately, Shin says that he wants to make the drone as bird-like as possible: “I am also keen to incorporate flapping wings into RAVEN. This enhancement would enable more bird-like motion and bring more interesting research questions to explore.”

“Fast ground-to-air transition with avian-inspired multifunctional legs,” by Won Dong Shin, Hoang-Vu Phan, Monica A. Daley, Auke J. Ijspeert, and Dario Floreano from EPFL in Switzerland and UC Irvine, appears in the December 4 issue of Nature.



Ruzena Bajcsy is one of the founders of the modern field of robotics. With an education in electrical engineering in Slovakia, followed by a Ph.D. at Stanford, Bajcsy was the first woman to join the engineering faculty at the University of Pennsylvania. She was the first, she says, because “in those days, nice girls didn’t mess around with screwdrivers.” Bajcsy, now 91, spoke with IEEE Spectrum at the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation, in Rotterdam, Netherlands.

Ruzena Bajcsy

Ruzena Bajcsy’s 50-plus years in robotics spanned time at Stanford, the University of Pennsylvania, the National Science Foundation, and the University of California, Berkeley. Bajcsy retired in 2021.

What was the robotics field like at the time of the first ICRA conference in 1984?

Ruzena Bajcsy: There was a lot of enthusiasm at that time—it was like a dream; we felt like we could do something dramatic. But this is typical, and when you move into a new area and you start to build there, you find that the problem is harder than you thought.

What makes robotics hard?

Bajcsy: Robotics was perhaps the first subject which really required an interdisciplinary approach. In the beginning of the 20th century, there was physics and chemistry and mathematics and biology and psychology, all with brick walls between them. The physicists were much more focused on measurement, and understanding how things interacted with each other. During the war, there was a select group of men who didn’t think that mortal people could do this. They were so full of themselves. I don’t know if you saw the Oppenheimer movie, but I knew some of those men—my husband was one of those physicists!

And how are roboticists different?

Bajcsy: We are engineers. For physicists, it’s the matter of discovery, done. We, on the other hand, in order to understand things, we have to build them. It takes time and effort, and frequently we are inhibited—when I started, there were no digital cameras, so I had to build one. I built a few other things like that in my career, not as a discovery, but as a necessity.

How can robotics be helpful?

Bajcsy: As an elderly person, I use this cane. But when I’m with my children, I hold their arms and it helps tremendously. In order to keep your balance, you are taking all the vectors of your torso and your legs so that you are stable. You and I together can create a configuration of our legs and body so that the sum is stable.

One very simple useful device for an older person would be to have a cane with several joints that can adjust depending on the way I move, to compensate for my movement. People are making progress in this area, because many people are living longer than before. There are all kinds of other places where the technology derived from robotics can help like this.

What are you most proud of?

Bajcsy: At this stage of my life, people are asking, and I’m asking, what is my legacy? And I tell you, my legacy is my students. They worked hard, but they felt they were appreciated, and there was a sense of camaraderie and support for each other. I didn’t do it consciously, but I guess it came from my motherly instincts. And I’m still in contact with many of them—I worry about their children, the usual grandma!

This article appears in the December 2024 issue as “5 Questions for Ruzena Bajcsy.”



Finding it hard to get the perfect angle for your shot? PhotoBot can take the picture for you. Tell it what you want the photo to look like, and your robot photographer will present you with references to mimic. Pick your favorite, and PhotoBot—a robot arm with a camera—will adjust its position to match the reference and your picture. Chances are, you’ll like it better than your own photography.

“It was a really fun project,” says Oliver Limoyo, one of the creators of PhotoBot. He enjoyed working at the intersection of several fields; human-robot interaction, large language models, and classical computer vision were all necessary to create the robot.

Limoyo worked on PhotoBot while at Samsung, with his manager Jimmy Li. They were working on a project to have a robot take photographs but were struggling to find a good metric for aesthetics. Then they saw the Getty Image Challenge, where people recreated famous artwork at home during the COVID lockdown. The challenge gave Limoyo and Li the idea to have the robot select a reference image to inspire the photograph.

To get PhotoBot working, Limoyo and Li had to figure out two things: how best to find reference images of the kind of photo you want and how to adjust the camera to match that reference.

Suggesting a Reference Photograph

To start using PhotoBot, first you have to provide it with a written description of the photo you want. (For example, you could type “a picture of me looking happy”.) Then PhotoBot scans the environment around you, identifying the people and objects it can see. It next finds a set of similar photos from a database of labeled images that have those same objects.

Next an LLM compares your description and the objects in the environment with that smaller set of labeled images, providing the closest matches to use as reference images. The LLM can be programmed to return any number of reference photographs.
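
As a rough illustration of this retrieval-plus-ranking step, here is a minimal Python sketch: it filters a labeled image database by overlap with the objects detected in the scene, then builds a prompt asking an LLM to pick the best matches. The gallery entries, the detected-object set, and the ask_llm helper are all hypothetical placeholders, not PhotoBot’s actual components.

```python
# Minimal sketch of PhotoBot-style reference selection: filter a labeled image
# database by overlap with the objects detected in the scene, then ask an LLM
# to rank the survivors against the user's request. The gallery entries, the
# detected-object set, and ask_llm() are hypothetical placeholders.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class LabeledImage:
    image_id: str
    objects: Set[str]  # object labels annotated for this gallery image
    caption: str       # short description of the image

def shortlist(gallery: List[LabeledImage], scene_objects: Set[str], top_k: int = 5) -> List[LabeledImage]:
    """Keep the gallery images whose annotated objects overlap the scene most."""
    ranked = sorted(gallery, key=lambda img: len(img.objects & scene_objects), reverse=True)
    return ranked[:top_k]

def build_prompt(request: str, scene_objects: Set[str], candidates: List[LabeledImage]) -> str:
    lines = [
        f"The user wants: {request}",
        f"Objects visible in the scene: {', '.join(sorted(scene_objects))}",
        "Candidate reference photos:",
    ]
    lines += [f"- {c.image_id}: {c.caption}" for c in candidates]
    lines.append("Return the ids of the candidates that best match the request.")
    return "\n".join(lines)

def ask_llm(prompt: str) -> str:
    # Placeholder: hand the prompt to whatever LLM is available.
    raise NotImplementedError

if __name__ == "__main__":
    gallery = [
        LabeledImage("img_001", {"person", "cup", "glasses"}, "a frazzled man hiding behind a mug"),
        LabeledImage("img_002", {"person", "dog"}, "a man laughing with his dog"),
    ]
    scene = {"person", "glasses", "jersey", "cup"}
    print(build_prompt("a picture of me looking grumpy", scene, shortlist(gallery, scene)))
```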

For example, when asked for “a picture of me looking grumpy,” it might identify a person, glasses, a jersey, and a cup in the environment. PhotoBot would then deliver a reference image of a frazzled man holding a mug in front of his face, among other choices.

After the user selects the reference photograph they want their picture to mimic, PhotoBot moves its robot arm to correctly position the camera to take a similar picture.

Adjusting the Camera to Fit a Reference

To move the camera to the right position, PhotoBot starts by identifying features that appear in both images, for example someone’s chin or the top of a shoulder. It then solves a “perspective-n-point” (PnP) problem, estimating the camera’s position and orientation in 3D space from where those known points land in its 2D view. Once PhotoBot has located itself in space, it computes how to move the robot’s arm so that its view matches the reference image. It repeats this process a few times, making incremental adjustments as it converges on the correct pose.
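As a rough illustration of this alignment loop, the sketch below is built around OpenCV’s generic solvePnP solver. The camera, feature-matching, and robot-motion helpers (camera.capture, match_features, robot.step_camera_toward) are hypothetical placeholders rather than PhotoBot’s real interfaces; only cv2.solvePnP is an actual library call.

```python
# Rough sketch of the align-to-reference loop. Only cv2.solvePnP is a real
# library call; camera, match_features, and robot are hypothetical interfaces.
import cv2
import numpy as np

def align_to_reference(robot, camera, reference_img, camera_matrix, dist_coeffs,
                       match_features, n_iters=5):
    """Nudge the camera until its view resembles the chosen reference image."""
    for _ in range(n_iters):
        current_img = camera.capture()
        # Shared features (a chin, the top of a shoulder, ...) matched between
        # the reference and the current view; points_3d are their positions in
        # the current camera's frame (e.g. from a depth camera).
        ref_points_2d, points_3d = match_features(reference_img, current_img)
        if len(ref_points_2d) < 4:
            break  # PnP needs at least four correspondences

        # Solve for the camera pose at which those 3D points would project to
        # where they appear in the reference image.
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(points_3d, dtype=np.float32),
            np.asarray(ref_points_2d, dtype=np.float32),
            camera_matrix,
            dist_coeffs,
        )
        if not ok:
            break
        # Move part of the way toward that target pose, then re-match and repeat.
        robot.step_camera_toward(rvec, tvec)
```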

Then PhotoBot takes your picture.

PhotoBot’s developers compared portraits taken with and without their system. Samsung/IEEE

To test whether images taken by PhotoBot were more appealing than amateur human photography, Limoyo’s team had eight people use the robot’s arm and camera to photograph themselves, and then use PhotoBot to take a robot-assisted photograph. The team then asked 20 new people to judge which of the two photographs was more aesthetically pleasing while matching the user’s specification (happy, excited, surprised, and so on). Overall, PhotoBot was the preferred photographer in 242 of the 360 comparisons, or 67 percent of the time.

PhotoBot was presented on 16 October at the IEEE/RSJ International Conference on Intelligent Robots and Systems.

Although the project is no longer in development, Li thinks someone should create an app based on the underlying programming, enabling friends to take better photos of each other. “Imagine right on your phone, you see a reference photo. But you also see what the phone is seeing right now, and then that allows you to move around and align.”



