

Seabed observation plays a major role in safeguarding marine systems by keeping tabs on the species and habitats on the ocean floor at different depths. This is primarily done by underwater robots that use optical imaging to collect high-quality data that can be fed into environmental models and complement the data obtained through sonar in large-scale ocean observations.

Different underwater robots have been trialed over the years, but many have struggled with performing near-seabed observations because they disturb the local seabed by destroying coral and disrupting the sediment. Gang Wang, from Harbin Engineering University in China, and his research team have recently developed a maneuverable underwater vehicle that is better suited to seabed operations: it floats above the seabed and maneuvers with a specially engineered propeller system, so it doesn’t disturb the local environment. These robots could be used to better protect the seabed while studying it, and improve efforts to preserve marine biodiversity and explore for underwater resources such as minerals for EV batteries.

Many underwater robots are wheeled or legged, but “these robots face substantial challenges in rugged terrains where obstacles and slopes can impede their functionality,” says Wang. They can also damage coral reefs.

Floating robots don’t have this issue, but existing options disturb the sediment on the seabed because their thrusters create a downward current during ascent. In most floating robots, the propeller’s wake hits the seafloor directly, stirring up sediment in the immediate vicinity. Much as dust blowing in front of a camera lens clouds a photo, the particles moving through the water can obscure the view of the robot’s cameras and reduce the quality of the images it captures. “Addressing this issue was crucial for the functional success of our prototype and for increasing its acceptance among engineers,” says Wang.

Designing a Better Underwater Robot

After further investigation, Wang and the rest of the team found that the robot’s shape influences the local water resistance, or drag, even at low speeds. “During the design process, we configured the robot with two planes exhibiting significant differences in water resistance,” says Wang. This led to the researchers developing a robot with a flattened body and angling the thruster relative to the central axis. “We found that the robot’s shape and the thruster layout significantly influence its ascent speed,” says Wang.

Clockwise from left: relationship between rotational speed of the thruster and the resultant force and torque in the airframe coordinate system, overall structure of the robot, side view of the thruster arrangement and main electronics components.Gang Wang, Kaixin Liu et al.

The researchers created a navigational system where the thrusters generate a combined force that slants downwards but still allows the robot to ascend, changing the wake distribution during ascent so that it doesn’t disturb the sediment on the seafloor. “Flattening the robot’s body and angling the thruster relative to the central axis is a straightforward approach for most engineers, enhancing the potential for broader application of this design” in seabed monitoring, says Wang.
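The geometry behind that wake redirection is simple to sketch. Below is a minimal Python illustration—with a made-up thrust coefficient, rotor speeds, and tilt angles rather than the team’s actual parameters—showing how tilting the thrusters away from the central axis keeps a net upward force on the robot while deflecting each wake obliquely instead of straight down.

```python
import numpy as np

def thruster_force(omega, tilt_deg, heading_deg, k=1e-6):
    """Force on the robot from one thruster, using the common quadratic
    thrust model F = k * omega**2 (k, omega, and the angles here are
    illustrative values, not measured ones). The thruster axis is tilted
    tilt_deg away from the robot's central (vertical) axis, pointing
    toward heading_deg in the horizontal plane."""
    f = k * omega**2
    t, h = np.radians(tilt_deg), np.radians(heading_deg)
    return f * np.array([np.sin(t) * np.cos(h),   # x (horizontal)
                         np.sin(t) * np.sin(h),   # y (horizontal)
                         np.cos(t)])              # z (up)

# Four thrusters tilted 20 degrees outward at 90-degree spacings (made-up layout).
net = sum(thruster_force(2000, 20, h) for h in (0, 90, 180, 270))
print("net force [x, y, z]:", np.round(net, 3))
# The horizontal components cancel, the vertical component still lifts the robot,
# and each wake leaves its thruster obliquely rather than blasting straight down
# at the sediment directly beneath the vehicle.
```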

“By addressing the navigational concerns of floating robots, we aim to enhance the observational capabilities of underwater robots in near-seafloor environments,” says Wang. The vehicle was tested in a range of marine environments, including sandy areas, coral reefs, and sheer rock, to show its ability to minimally disturb sediments in multiple potential environments.

Alongside the structural design advancements, the team incorporated angular acceleration feedback control to keep the robot as close to the seafloor as possible without actually hitting it, known as bottoming out. They also developed external disturbance observation algorithms and designed a sensor layout that enables the robot to quickly recognize and resist external disturbances, as well as plot a path in real time. This approach allowed the new vehicle to travel just 20 centimeters above the seafloor without bottoming out.
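The team’s control details aren’t reproduced here, but to make the idea concrete, here is a minimal Python sketch of an altitude-hold loop with an angular-acceleration damping term. The gains, the target height, and the simple proportional-derivative structure are illustrative assumptions, not the researchers’ actual controller.

```python
# Minimal altitude-hold sketch: hover about 0.2 m above the seafloor and damp
# sudden tilts with an angular-acceleration term. Gains, the target height, and
# the controller structure are assumptions for illustration only.
KP, KD, K_ANG = 8.0, 3.0, 0.5
TARGET_ALT = 0.20  # meters above the seabed

def vertical_thrust_command(alt, alt_rate, ang_accel):
    """Return a vertical thrust command (positive = up).

    alt       -- measured height above the seafloor, in meters
    alt_rate  -- vertical speed in m/s (negative when sinking)
    ang_accel -- magnitude of angular acceleration from the IMU, in rad/s^2;
                 damping this term resists sudden tilts caused by disturbances
    """
    error = TARGET_ALT - alt
    return KP * error - KD * alt_rate - K_ANG * ang_accel

# Example: the robot has drifted down to 0.15 m and is still sinking slightly.
print(vertical_thrust_command(alt=0.15, alt_rate=-0.02, ang_accel=0.1))
```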

By implementing this control, the robot was able to get close to the seafloor and improve the quality of the images it took by reducing light refraction and scattering caused by the water column. “Given the robot’s proximity to the seafloor, even brief periods of instability can lead to collisions with the bottom, and we have verified that the robot shows excellent resistance to strong disturbances,” says Wang.

With the new robot able to approach the seafloor closely without disturbing the sediment or crashing, Wang says the team plans to use it to closely observe coral reefs. Coral reef monitoring currently relies on inefficient manual methods, so the robots could widen the areas that are observed, and do so more quickly.

Wang adds that “effective detection methods are lacking in deeper waters, particularly in the mid-light layer. We plan to improve the autonomy of the detection process to substitute divers in image collection, and facilitate the automatic identification and classification of coral reef species density to provide a more accurate and timely feedback on the health status of coral reefs.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today's videos!

Unitree rolls out frequent updates nearly every month. This time, we present to you the smoothest walking and humanoid running in the world. We hope you like it.

[ Unitree ]

This is just lovely.

[ Mimus CNK ]

There’s a lot to like about Grain Weevil as an effective unitasking robot, but what I really appreciate here is that the control system is just a remote and a camera slapped onto the top of the bin.

[ Grain Weevil ]

This video, “Robot arm picking your groceries like a real person,” has taught me that I am not a real person.

[ Extend Robotics ]

A robot walking like a human walking like what humans think a robot walking like a robot walks like.

And that was my favorite sentence of the week.

[ Engineai ]

For us, robots are tools to simplify life. But they should look friendly too, right? That’s why we added motorized antennas to Reachy, so it can show simple emotions—without a full personality. Plus, they match those expressive eyes O_o!

[ Pollen Robotics ]

So a thing that I have come to understand about ships with sails (thanks, Jack Aubrey!) is that sailing in the direction that the wind is coming from can be tricky. Turns out that having a boat with two fronts and no back makes this a lot easier.

[ Paper ] from [ 2023 IEEE/ASME International Conference on Advanced Intelligent Mechatronics ] via [ IEEE Xplore ]

I’m Kento Kawaharazuka from JSK Robotics Laboratory at the University of Tokyo. I’m writing to introduce our human-mimetic binaural hearing system on the musculoskeletal humanoid Musashi. The robot can perform 3D sound source localization using a human-like outer ear structure and an FPGA-based hearing system embedded within it.

[ Paper ]

Thanks, Kento!

The third CYBATHLON took place in Zurich on 25–27 October 2024. The CYBATHLON is a competition for people with impairments using novel robotic technologies to perform activities of daily living. It was invented and initiated by Prof. Robert Riener at ETH Zurich, Switzerland. Races were held in eight disciplines including arm and leg prostheses, exoskeletons, powered wheelchairs, brain-computer interfaces, robot assistance, vision assistance, and functional electrical stimulation bikes.

[ Cybathlon ]

Thanks, Robert!

If you’re going to work on robot dogs, I’m honestly not sure whether Purina would be the most or least appropriate place to do that.

[ Michigan Robotics ]



In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience.

Deepfakes, Misinformation, and Scams

The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic, and can only say that the proliferation of generative AI tools has made its detection increasingly difficult. To make matters worse, AI-generated articles are as persuasive as, or even more persuasive than, traditional propaganda, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.

Even more alarmingly, children and teenagers are forming emotional attachments to AI agents, and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots.

In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and transparent disclosure of AI-generated content. In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans.

Therefore, we must make an addition to Asimov’s laws.

  • Fourth Law: A robot or AI must not deceive a human by impersonating a human being.

The Way Toward Trusted AI

We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.

Implementation of this Fourth Law would require:

  • Mandatory AI disclosure in direct interactions,
  • Clear labeling of AI-generated content,
  • Technical standards for AI identification,
  • Legal frameworks for enforcement,
  • Educational initiatives to improve AI literacy.

Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.
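To make the labeling requirement concrete, here is a minimal sketch (in Python, with invented field names—this is not C2PA or any existing standard) of the kind of machine-readable disclosure record that such labeling could attach to a piece of content.

```python
import json
from datetime import datetime, timezone

# Illustrative only: a minimal machine-readable disclosure record of the kind a
# labeling scheme could attach to generated media. The field names are invented
# for this example; they do not come from C2PA or any other real standard.
disclosure = {
    "ai_generated": True,
    "generator": "example-text-model",   # hypothetical model identifier
    "human_edited": False,               # set True if a person substantially revised it
    "created_utc": datetime.now(timezone.utc).isoformat(),
    "disclosure_version": "0.1",
}

print(json.dumps(disclosure, indent=2))
```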

But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.

Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

I’m not totally sure yet about the utility of having a small arm on a robot vacuum, but I love that this is a real thing. At least, it is at CES this year.

[ Roborock ]

We posted about SwitchBot’s new modular home robot system earlier this week, but here’s a new video showing some potentially useful hardware combinations.

[ SwitchBot ]

Yes, it’s in sim, but (and this is a relatively new thing) I will not be shocked to see this happen on Unitree’s hardware in the near future.

[ Unitree ]

With ongoing advancements in system engineering, LimX Dynamics’ full-size humanoid robot features a hollow actuator design and high torque-density actuators, enabling full-body balance for a wide range of motion. Now it achieves complex full-body movements in an ultra-stable and dynamic manner.

[ LimX Dynamics ]

We’ve seen hybrid quadrotor bipeds before, but this one, which imitates the hopping behavior of Jacana birds, is pretty cute.

What’s a Jacana bird, you ask? It’s these things, which surely must have the most extreme foot to body ratio of any bird:

Also, much respect to the researchers for confidently titling this supplementary video “An Extremely Elegant Jump.”

[ SSRN Paper preprint ]

Twelve minutes flat from suitcase to mobile manipulator. Not bad!

[ Pollen Robotics ]

Happy New Year from Dusty Robotics!

[ Dusty Robotics ]



Back in the day, the defining characteristic of home-cleaning robots was that they’d randomly bounce around your floor as part of their cleaning process, because the technology required to localize and map an area hadn’t yet trickled down into the consumer space. That all changed in 2010, when home robots started using lidar (and other things) to track their location and optimize how they cleaned.

Consumer pool-cleaning robots are lagging about 15 years behind indoor robots on this, for a couple of reasons. First, most pool robots—different from automatic pool cleaners, which are purely mechanical systems that are driven by water pressure—have been tethered to an outlet for power, meaning that maximizing efficiency is less of a concern. And second, 3D underwater localization is a much different (and arguably more difficult) problem to solve than 2D indoor localization was. But pool robots are catching up, and at CES this week, Wybot introduced an untethered robot that uses ultrasound to generate a 3D map for fast, efficient pool cleaning. And it’s solar powered and self-emptying, too.

Underwater localization and navigation is not an easy problem for any robot. Private pools are certainly privileged to be operating environments with a reasonable amount of structure and predictability, at least if everything is working the way it should. But the lighting is always going to be a challenge, between bright sunlight, deep shadow, wave reflections, and occasionally murky water if the pool chemicals aren’t balanced very well. That makes relying on any light-based localization system iffy at best, and so Wybot has gone old school, with ultrasound.

Wybot Brings Ultrasound Back to Bots

Ultrasound used to be a very common way for mobile robots to navigate. You may (or may not) remember venerable robots like the Pioneer 3, with those big ultrasonic sensors across its front. As cameras and lidar got cheap and reliable, the messiness of ultrasonic sensors fell out of favor, but sound is still ideal for underwater applications where anything that relies on light may struggle.


The Wybot S3 uses 12 ultrasonic sensors, plus motor encoders and an inertial measurement unit to map residential pools in three dimensions. “We had to choose the ultrasonic sensors very carefully,” explains Felix (Huo) Feng, the CTO of Wybot. “Actually, we use multiple different sensors, and we compute time of flight [of the sonar pulses] to calculate distance.” The positional accuracy of the resulting map is about 10 centimeters, which is totally fine for the robot to get its job done, although Feng says that they’re actively working to improve the map’s resolution. For path planning purposes, the 3D map gets deconstructed into a series of 2D maps, since the robot needs to clean the bottom of the pool, stairs and ledges, and also the sides of the pool.
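The time-of-flight calculation Feng describes reduces to a single line: multiply the round-trip echo time by the speed of sound in water and halve it. The short Python sketch below uses an assumed sound speed and a made-up echo time—it illustrates the principle, not Wybot’s firmware.

```python
SPEED_OF_SOUND_WATER = 1480.0  # m/s, a typical value; it varies with temperature and salinity

def range_from_echo(round_trip_seconds):
    """Distance to a surface from an ultrasonic ping's round-trip time.
    The pulse travels out and back, so the one-way distance is half the path."""
    return SPEED_OF_SOUND_WATER * round_trip_seconds / 2.0

# Example: an echo that returns after 4 milliseconds implies a wall about 3 m away.
print(f"{range_from_echo(0.004):.2f} m")  # -> 2.96 m
```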

Efficiency is particularly important for the S3 because its charging dock has enough solar panels on the top of it to provide about 90 minutes of runtime for the robot over the course of an optimally sunny day. If your pool isn’t too big, that means the robot can clean it daily without requiring a power connection to the dock. The dock also sucks debris out of the collection bin on the robot itself, and Wybot suggests that the S3 can go for up to a month of cleaning without the dock overflowing.

The S3 has a camera on the front, which is used primarily to identify and prioritize dirtier areas (through AI, of course) that need focused cleaning. At some point in the future, Wybot may be able to use vision for navigation too, but my guess is that for reliable 24/7 navigation, ultrasound will still be necessary.

One other interesting little tidbit is the communication system. The dock can talk to your Wi-Fi, of course, and then talk to the robot while it’s charging. Once the robot goes off for a swim, however, traditional wireless signals won’t work, but the dock has its own sonar that can talk to the robot at several bytes per second. This isn’t going to get you streaming video from the robot’s camera, but it’s enough to let you steer the robot if you want, or ask it to come back to the dock, get battery status updates, and similar sorts of things.

The Wybot S3 will go on sale in Q2 of this year for a staggering US $2,999, but that’s how it always works: The first time a new technology shows up in the consumer space, it’s inevitably at a premium. Give it time, though, and my guess is that the ability to navigate and self-empty will become standard features in pool robots. But as far as I know, Wybot got there first.




Autonomous systems, particularly fleets of drones and other unmanned vehicles, face increasing risks as their complexity grows. Despite advancements, existing testing frameworks fall short in addressing end-to-end security, resilience, and safety in zero-trust environments. The Secure Systems Research Center (SSRC) at TII has developed a rigorous, holistic testing framework to systematically evaluate the performance and security of these systems at each stage of development. This approach ensures secure, resilient, and safe operations for autonomous systems, from individual components to fleet-wide interactions.



Earlier this year, we reviewed the SwitchBot S10, a vacuuming and wet mopping robot that uses a water-integrated docking system to autonomously manage both clean and dirty water for you. It’s a pretty clever solution, and we appreciated that SwitchBot was willing to try something a little different.

At CES this week, SwitchBot introduced the K20+ Pro, a little autonomous vacuum that can integrate with a bunch of different accessories by pulling them around on a backpack cart of sorts. The K20+ Pro is SwitchBot’s latest effort to explore what’s possible with mobile home robots.

SwitchBot’s small vacuum can transport different payloads on top.SwitchBot

What we’re looking at here is a “mini” robotic vacuum (it’s about 25 centimeters in diameter) that does everything a robotic vacuum does nowadays: It uses lidar to make a map of your house so that you can direct it where to go, it’s got a dock to empty itself and recharge, and so on. The mini robotic vacuum is attached to a wheeled platform that SwitchBot is calling a “FusionPlatform” that sits on top of the robot like a hat. The vacuum docks to this platform, and then the platform will go wherever the robot goes. This entire system (robot, dock, and platform) is the “K20+ Pro multitasking household robot.”

SwitchBot refers to the K20+ Pro as a “smart delivery assistant,” because you can put stuff on the FusionPlatform and the K20+ Pro will move that stuff around your house for you. This really doesn’t do it justice, though, because the platform is much more than just a passive mobile cart. It also can provide power to a bunch of different accessories, all of which benefit from autonomous mobility:

The SwitchBot can carry a variety of payloads, including custom payloads.SwitchBot

From left to right, you’re looking at an air circulation fan, a tablet stand, a vacuum and charging dock and an air purifier and security camera (and a stick vacuum for some reason), and lastly just the air purifier and security setup. You can also add and remove different bits, like if you want the fan along with the security camera, just plop the security camera down on the platform base in front of the fan and you’re good to go.

This basic concept is somewhat similar to Amazon’s Proteus robot, in the sense that you can have one smart powered base that moves around a bunch of less smart and unpowered payloads by driving underneath them and then carrying them around. But SwitchBot’s payloads aren’t just passive cargo, and the base can provide them with a useful amount of power.

A power port allows you to develop your own payloads for the robot.SwitchBot

SwitchBot is actively encouraging users to “to create, adapt, and personalize the robot for a wide variety of innovative applications,” which may include “3D-printed components [or] third-party devices with multiple power ports for speakers, car fridges, or even UV sterilization lamps,” according to the press release. The maximum payload is only 8 kilograms, though, so don’t get too crazy.

Several SwitchBots can make bath time much more enjoyable.SwitchBot

What we all want to know is when someone will put an arm on this thing, and SwitchBot is of course already working on this:

SwitchBot’s mobile manipulator is still in the lab stage.SwitchBot

The arm is still “in the lab stage,” SwitchBot says, which I’m guessing means that the hardware is functional but that getting it to reliably do useful stuff with the arm is still a work in progress. But that’s okay—getting an arm to reliably do useful stuff is a work in progress for all of robotics, pretty much. And if SwitchBot can manage to produce an affordable mobile manipulation platform for consumers that even sort of works, that’ll be very impressive.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

It’s me. But we can all relate to this child android robot struggling to stay awake.

[ Osaka University ]

For 2025, the RoboCup SPL plans an interesting new technical challenge: Kicking a rolling ball! The velocity and start position of the ball can vary and the goal is to kick the ball straight and far. In this video, we show our results from our first testing session.

[ Team B-Human ]

When you think of a prosthetic hand you probably think of something similar to Luke Skywalker’s robotic hand from Star Wars, or even Furiosa’s multi-fingered claw from Mad Max. The reality is a far cry from these fictional hands: upper limb prostheses are generally very limited in what they can do, and how we can control them to do it. In this project, we investigate non-humanoid prosthetic hand design, exploring a new ideology for the design of upper limb prostheses that encourages alternative approaches to prosthetic hands. In this wider, more open design space, can we surpass humanoid prosthetic hands?

[ Imperial College London ]

Thanks, Digby!

A novel three-dimensional (3D) Minimally Actuated Serial Robot (MASR), actuated by a robotic motor. The robotic motor is composed of a mobility motor (to advance along the links) and an actuation motor [to] move the joints.

[ Zarrouk Lab ]

This year, Franka Robotics team hit the road, the skies and the digital space to share ideas, showcase our cutting-edge technology, and connect with the brightest minds in robotics across the globe. Here is 2024 video recap, capturing the events and collaborations that made this year unforgettable!

[ Franka Robotics ]

Aldebaran has sold an astonishing number of robots this year.

[ Aldebaran ]

The advancement of modern robotics starts at its foundation: the gearboxes. Ailos aims to define how these industries operate with increased precision, efficiency and versatility. By innovating gearbox technology across diverse fields, Ailos is catalyzing the transition towards the next wave of automation, productivity and agility.

[ Ailos Robotics ]

Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning. This pipeline enables drones to adapt their flying speed according to the environmental complexity. After minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance.

[ MAVRL via Github ]

Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers.

[ Personal Robotics Lab ]

A drone’s eye-view reminder that fireworks explode in 3D.

[ Team BlackSheep ]



The future of human habitation in the sea is taking shape in an abandoned quarry on the border of Wales and England. There, the ocean-exploration organization Deep has embarked on a multiyear quest to enable scientists to live on the seafloor at depths up to 200 meters for weeks, months, and possibly even years.

“Aquarius Reef Base in St. Croix was the last installed habitat back in 1987, and there hasn’t been much ground broken in about 40 years,” says Kirk Krack, human diver performance lead at Deep. “We’re trying to bring ocean science and engineering into the 21st century.”

This article is part of our special report Top Tech 2025.

Deep’s agenda has a major milestone this year—the development and testing of a small, modular habitat called Vanguard. This transportable, pressurized underwater shelter, capable of housing up to three divers for periods ranging up to a week or so, will be a stepping stone to a more permanent modular habitat system—known as Sentinel—that is set to launch in 2027. “By 2030, we hope to see a permanent human presence in the ocean,” says Krack. All of this is now possible thanks to an advanced 3D printing-welding approach that can print these large habitation structures.

How would such a presence benefit marine science? Krack runs the numbers for me: “With current diving at 150 to 200 meters, you can only get 10 minutes of work completed, followed by 6 hours of decompression. With our underwater habitats we’ll be able to do seven years’ worth of work in 30 days with shorter decompression time. More than 90 percent of the ocean’s biodiversity lives within 200 meters’ depth and at the shorelines, and we only know about 20 percent of it.” Understanding these undersea ecosystems and environments is a crucial piece of the climate puzzle, he adds: The oceans absorb nearly a quarter of human-caused carbon dioxide and roughly 90 percent of the excess heat generated by human activity.

Underwater Living Gets the Green Light This Year

Deep is looking to build an underwater life-support infrastructure that features not just modular habitats but also training programs for the scientists who will use them. Long-term habitation underwater involves a specialized type of activity called saturation diving, so named because the diver’s tissues become saturated with gases, such as nitrogen or helium. It has been used for decades in the offshore oil and gas sectors but is uncommon in scientific diving, outside of the relatively small number of researchers fortunate enough to have spent time in Aquarius. Deep wants to make it a standard practice for undersea researchers.

The first rung in that ladder is Vanguard, a rapidly deployable, expedition-style underwater habitat the size of a shipping container that can be transported and supplied by a ship and house three people down to depths of about 100 meters. It is set to be tested in a quarry outside of Chepstow, Wales, in the first quarter of 2025.

The Vanguard habitat, seen here in an illustrator’s rendering, will be small enough to be transportable and yet capable of supporting three people at a maximum depth of 100 meters.Deep

The plan is to be able to deploy Vanguard wherever it’s needed for a week or so. Divers will be able to work for hours on the seabed before retiring to the module for meals and rest.

One of the novel features of Vanguard is its extraordinary flexibility when it comes to power. There are currently three options: When deployed close to shore, it can connect by cable to an onshore distribution center using local renewables. Farther out at sea, it could use supply from floating renewable-energy farms and fuel cells that would feed Vanguard via an umbilical link, or it could be supplied by an underwater energy-storage system that contains multiple batteries that can be charged, retrieved, and redeployed via subsea cables.

The breathing gases will be housed in external tanks on the seabed and contain a mix of oxygen and helium that will depend on the depth. In the event of an emergency, saturated divers won’t be able to swim to the surface without suffering a life-threatening case of decompression illness. So, Vanguard, as well as the future Sentinel, will also have backup power sufficient to provide 96 hours of life support, in an external, adjacent pod on the seafloor.

Data gathered from Vanguard this year will help pave the way for Sentinel, which will be made up of pods of different sizes and capabilities. These pods will even be capable of being set to different internal pressures, so that different sections can perform different functions. For example, the labs could be at the local bathymetric pressure for analyzing samples in their natural environment, but alongside those a 1-atmosphere chamber could be set up where submersibles could dock and visitors could observe the habitat without needing to equalize with the local pressure.

As Deep sees it, a typical configuration would house six people—each with their own bedroom and bathroom. It would also have a suite of scientific equipment including full wet labs to perform genetic analyses, saving days by not having to transport samples to a topside lab for analysis.

“By 2030, we hope to see a permanent human presence in the ocean,” says one of the project’s principals

A Sentinel configuration is designed to go for a month before needing a resupply. Gases will be topped off via an umbilical link from a surface buoy, and food, water, and other supplies would be brought down during planned crew changes every 28 days.

But people will be able to live in Sentinel for months, if not years. “Once you’re saturated, it doesn’t matter if you’re there for six days or six years, but most people will be there for 28 days due to crew changes,” says Krack.

Where 3D Printing and Welding Meet

It’s a very ambitious vision, and Deep has concluded that it can be achieved only with advanced manufacturing techniques. Deep’s manufacturing arm, Deep Manufacturing Labs (DML), has come up with an innovative approach for building the pressure hulls of the habitat modules. It’s using robots to combine metal additive manufacturing with welding in a process known as wire-arc additive manufacturing. With these robots, metal layers are built up as they would be in 3D printing, but the layers are fused together via welding using a metal-inert-gas torch.

At Deep’s base of operations at a former quarry in Tidenham, England, resources include two Triton 3300/3 MK II submarines. One of them is seen here at Deep’s floating “island” dock in the quarry. Deep

During a tour of the DML, Harry Thompson, advanced manufacturing engineering lead, says, “We sit in a gray area between welding and additive process, so we’re following welding rules, but for pressure vessels we [also] follow a stress-relieving process that is applicable for an additive component. We’re also testing all the parts with nondestructive testing.”

Each of the robot arms has an operating range of 2.8 by 3.2 meters, but DML has boosted this area by means of a concept it calls Hexbot. It’s based on six robotic arms programmed to work in unison to create habitat hulls with a diameter of up to 6.1 meters. The biggest challenge with creating the hulls is managing the heat during the additive process to keep the parts from deforming as they are created. For this, DML is relying on the use of heat-tolerant steels and on very precisely optimized process parameters.

Engineering Challenges for Long-Term Habitation

Besides manufacturing, there are other challenges that are unique to the tricky business of keeping people happy and alive 200 meters underwater. One of the most fascinating of these revolves around helium. Because of its narcotic effect at high pressure, nitrogen shouldn’t be breathed by humans at depths below about 60 meters. So, at 200 meters, the breathing mix in the habitat will be 2 percent oxygen and 98 percent helium. But because of its very high thermal conductivity, “we need to heat helium to 31–32 °C to get a normal 21–22 °C internal temperature environment,” says Rick Goddard, director of engineering at Deep. “This creates a humid atmosphere, so porous materials become a breeding ground for mold.”

There are a host of other materials-related challenges, too. The materials can’t emit gases, and they must be acoustically insulating, lightweight, and structurally sound at high pressures.

Deep’s proving grounds are a former quarry in Tidenham, England, that has a maximum depth of 80 meters. Deep

There are also many electrical challenges. “Helium breaks certain electrical components with a high degree of certainty,” says Goddard. “We’ve had to pull devices to pieces, change chips, change [printed circuit boards], and even design our own PCBs that don’t off-gas.”

The electrical system will also have to accommodate an energy mix with such varied sources as floating solar farms and fuel cells on a surface buoy. Energy-storage devices present major electrical engineering challenges: Helium seeps into capacitors and can destroy them when it tries to escape during decompression. Batteries, too, develop problems at high pressure, so they will have to be housed outside the habitat in 1-atmosphere pressure vessels or in oil-filled blocks that prevent a differential pressure inside.

Is It Possible to Live in the Ocean for Months or Years?

When you’re trying to be the SpaceX of the ocean, questions are naturally going to fly about the feasibility of such an ambition. How likely is it that Deep can follow through? At least one top authority, John Clarke, is a believer. “I’ve been astounded by the quality of the engineering methods and expertise applied to the problems at hand and I am enthusiastic about how DEEP is applying new technology,” says Clarke, who was lead scientist of the U.S. Navy Experimental Diving Unit. “They are advancing well beyond expectations…. I gladly endorse Deep in their quest to expand humankind’s embrace of the sea.”



2024 was the best year ever for robotics, which I’m pretty sure is not something that I’ve ever said before. But that’s the great thing about robotics—it’s always new, and it’s always exciting. What may be different about this year is the real sense that not only is AI going to change everything about robots, but that it will somehow make robots useful and practical and commercially viable. Is that true? Nobody knows yet! But it means that 2025 might actually be the best year ever for robotics, if you’ve ever wanted a robot to help you out at home or at work.

So as we look forward to 2025, here are some of our most interesting and impactful stories of the past year. And as always, thanks for reading!

1. Figure Raises $675M for Its Humanoid Robot Development

Figure

This announcement from back in February is pretty much what set the tone for robotics in 2024. Figure’s Series B raise valued the company at a bonkers US $2.6 billion, and all of a sudden, humanoids were where it’s at. And by “it,” I mean everything, from funding to talent to breathless media coverage. The big question of 2024 was whether or not humanoids would be able to deliver on their promises, and that’s shaping up to be the big question of 2025, too.

2. Hello, Electric Atlas

Boston Dynamics

It didn’t take long for legendary robotics company Boston Dynamics to make it clear that they’re not going to be left behind when it comes to commercial humanoids. For a company that has been leading humanoid research longer than just about anyone but has bounced around from owner to owner over the last 10 years, we were a little unsure whether Atlas would ever be more than a research platform. But the new all-electric Atlas is designed for work, and we saw it get busy in 2024.

3. Farewell, Hydraulic Atlas

Boston Dynamics

As much as we’re excited for the new Atlas, the old hydraulic Atlas will always have a special place in our hearts. We’ve been through so much together, from the DRC to parkour to dancing. Electric robots are great and all, and I understand why they’re necessary for commercial applications, but all of that hydraulic power meant that hydraulic Atlas was able to move in dynamic ways that we may not see again for a very long time.

4. Nvidia Announces GR00T

Nvidia

So we’ve got all these humanoid robots now with all this impressive hardware, but the really hard part (or one of them, anyway) is getting those robots to actually do something commercially useful in a safe and reliable way. Is training in simulation the answer? I don’t know, but Nvidia sure thinks so, and they’ve made a huge commitment by investing in GR00T, a “general-purpose foundation model for humanoid robots.” And what does that mean, exactly? Nobody’s quite sure yet, but with Nvidia behind it that’s enough to make the entire industry pay attention.

5. Is It Autonomous?

Evan Ackerman

With all the attention on humanoid robots right now, it’s critical to be able to separate real progress from hype. Unfortunately, there are all kinds of ways of cheating with robots. And there’s really nothing wrong with cheating with robots, as long as you tell people that the cheating is happening, and then (hopefully) cheat less and less as your robot gets better and better. In particular, we’re likely to see more and more teleoperation of humanoid robots (obviously or otherwise) because that’s one of the best ways of collecting training data: by having a human do it. And being able to tell that a human is doing it is an important skill to have.

6. Robotic Metalsmiths

Machina Labs

Some of my favorite robots are robots that are able to leverage their robotic-ness to not just do things that humans do, but also do things that humans cannot do. Robots have the patience and precision to work metal in ways that a very highly skilled human might be able to do once, but the robots (being robots) can do it over and over again. NASA is leveraging this capability to build complex toroidal tanks for spacecraft, but it has the potential to change anything that’s made out of sheet metal.

7. The End of Ingenuity

JPL-Caltech/ASU/NASA

One of the greatest robotics stories of the last several years has been Ingenuity, the little Mars helicopter. We’ve written extensively about how Ingenuity was designed, how it can fly on Mars, and how it just kept on flying, more than 50 times. But it couldn’t fly forever, and as Ingenuity was pushed to fly farther and farther over more challenging terrain, flight 72 was to be its last. After losing its ability to localize over some particularly featureless terrain, the little robot had a very rough landing. It lived to tell the tale, but not to fly again.


Ingenuity’s spectacularly successful mission means, we hope, that there will be more robotic aircraft on Mars. And just last week, NASA shared a new video of Ingenuity’s successor, the Mars Chopper. That’s definitely something we’ll be looking forward to.


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

One year after mass production kicked off, Unitree’s B2-W Industrial Wheel has been upgraded with more exciting capabilities. Please always use robots safely and friendly.

Yours for US $100,000.

[ Unitree ]

Yes, I know we’re sharing some of these holiday videos a little late, but I deserve a little bit of time off, don’t I? No, you’re right, and I feel shame. But please enjoy these extra holiday videos anyway, starting with Santa’s Little Helper from ETHZ RSL.

Okay but seriously where do I get one of those little plush Anymals!

[ RSL ]

Merry Christmas from the Kepler humanoid robot!

[ Kepler Robotics ]

This year, Rizon has joined the festive fun by decorating the Flexiv office with holiday cheer!

[ Flexiv ]

The Eva exoskeleton, developed by IHMC, takes its first steps out of the lab and through a number of new modes in October 2024. For more than a decade, IHMC has designed and developed wearable robotic exoskeletons to rehabilitate those with spinal cord injuries. With lessons learned from these past developments, our focus has shifted to augmenting the performance of able-bodied workers in hazardous environments. We are working to advance these technologies to the real world with hope of making real differences in peoples’ lives.

[ IHMC ]

Thanks, Robert!

TIAGo Pro - a revolutionary robot with Series Elastic Actuators arms and enhanced non-verbal communication. This enhances the manipulation capabilities and enables state-of-the-art Human-Robot Interaction. Designed for agile manufacturing and future healthcare applications.

[ PAL Robotics ]

Did you know that cameras today struggle to accurately measure distance? This is because current systems rely on limited data. DARPA’s CIDAR Challenge explores combining spatial, spectral, and temporal imaging data to unlock unprecedented accuracy. Advances made through the CIDAR challenge could revolutionize everything from battlefield awareness, to robotics, to environmental research.

[ DARPA ]

Innate is developing innately intelligent teachable general-purpose robots. Our platforms are simple, accessible, so as to lower the barrier to entry into robotics for everyone.

[ Innate ]

Drone-level autonomy, now underwater! In the last couple of years, we have invested in the concept of making a unified autonomy solution that can operate virtually universally across robot configurations. A first major step to that end was to demonstrate that, as far as confined or cluttered environments are concerned, we can have aerial-robot-level autonomy underwater that is a) exclusively driven by vision in terms of perception (e.g., no sonars), and b) utilizes a “generalist” solution for path planning and safety (essentially identical to those in our research for flying robots!).

[ Norwegian University of Science and Technology post on LinkedIn ]

Thanks, Kostas!

ERA-42 is the world’s first truly end-to-end native robot large model matched to a five-finger dexterous hand, capable of performing over 100 intricate tasks using various tools. These include tightening screws with a screwdriver, hammering nails, righting overturned cups, and pouring water—tasks that highlight its remarkable adaptability and precision.

[ Robot Era ]

Thanks, Ni Tao!

Even if an android’s appearance is so realistic that it could be mistaken for a human in a photograph, watching it move in person can feel a bit unsettling. It can smile, frown, or display other various, familiar expressions, but finding a consistent emotional state behind those expressions can be difficult, leaving you unsure of what it is truly feeling and creating a sense of unease. In this study, lead author Hisashi Ishihara and his research group developed a dynamic facial expression synthesis technology using “waveform movements,” which represents various gestures that constitute facial movements, such as “breathing,” “blinking,” and “yawning,” as individual waves. These waves are propagated to the related facial areas and are overlaid to generate complex facial movements in real time.

[ Osaka University ]

Suzumori Endo Lab, Science Tokyo has developed a self-excited vibration robot that can adapt to its environment. This robot can move straight ahead and self-steer around a corner without a control system.

[ Paper via IEEE Robotics and Automation magazine in IEEE Xplore ]

PlayBot is an unofficial, experimental accessory for Panic Inc.’s Playdate handheld console, which transforms your console into a lovely little desktop robot.

[ Guillaume Loquin ] via [ Engadget ]

This is a big deal.

[ Ekso Bionics ]

Sanctuary AI introduces new tactile sensors for general purpose robots.

[ Sanctuary AI ]

Developed by the Pudu X-Lab, the PUDU D9 is designed with a human-centric philosophy that embodies the principle of “Born to Serve”. Its fully anthropomorphic design closely mirrors human capabilities, allowing it to offer practical assistance across a wide range of applications.

[ Pudu Robotics ]

EngineAI proudly unveils the PM01, our next-gen lightweight, high-dynamic, open-source humanoid robotic platform. With its interactive display, agile motion, and robust support for secondary development, PM01 is designed to be the most versatile tool for developers worldwide. PM01 is now available for purchase! We invite developers, researchers, and businesses to explore the future of robotics with PM01. Let’s push the boundaries of what robotics can achieve across different industries and use cases.

[ EngineAI ]

The third edition of CYBATHLON is now part of history. Held from October 25–27, 2024, at the SWISS Arena in Zürich, the event brought together 67 teams from 24 nations to compete in eight disciplines, showcasing state-of-the-art assistive technologies designed to help complete everyday tasks. While the winning teams celebrated their well-deserved victories, the event’s true spotlight was on the technological breakthroughs and their potential to transform lives. Equally remarkable was CYBATHLON’s emphasis on fostering social inclusion and empowering people with disabilities to overcome challenges through innovation.

[ Cybathlon ]

At the Kanda Myoujin Shrine in Tokyo, Aibos and their owners take part in the Shichi-go-san festival, which celebrates children (and robots!) at 3, 5, and 7 years old.

[ Aibo ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

At the FZI, it’s not just work for our robots, they join our festivities, too. Our shy robot Spot stumbled into this year’s FZI Winter Market …, a cheerful event for robots and humans alike. Will he find his place? We certainly hope so, because Feuerzangenbowle tastes much better after clinking glasses with your hot-oil-drinking friends.

[ FZI ]

Thanks, Georg!

The Fraunhofer IOSB Autonomous Robotic Systems Research Group wishes you a Merry Christmas filled with joy, peace, and robotic wonders!

[ Fraunhofer IOSB ]

Thanks, Janko!

There’s some thrilling action in this Christmas video from the PUT Mobile Robotics Laboratory, and the trick to put the lights on the tree is particularly clever. Enjoy!

[ PUT MRL ]

Thanks, Dominik!

The Norlab wishes you a Merry Christmas!

[ Northern Robotics Laboratory ]

The Learning Systems and Robotics Lab has made a couple of robot holiday videos based on the research that they’re doing:

[ Crowd Navigation ]


[ Learning with Contacts ]

Thanks, Sepehr!

Robots on a gift mission: Christmas greetings from the DFKI Robotics Innovation Center!

[ DFKI ]

Happy Holidays from Clearpath Robotics! Our workshop has been bustling lately with lots of exciting projects and integrations just in time for the holidays! The TurtleBot 4 elves helped load up the sleigh with plenty of presents to go around. Rudolph the Husky A300 made the trek through the snow so our Ridgeback friend with a manipulator arm and gripper could receive its gift.

[ Clearpath Robotics ]

2024 has been an eventful year for us at PAL Robotics, filled with milestones and memories. As the festive season approaches, we want to take a moment to say a heartfelt THANK YOU for being part of our journey!

[ PAL Robotics ]

Thanks, Rugilė!

In Santa’s shop, so bright and neat, A robot marched on metal feet. With tinsel arms and bolts so tight, It trimmed the tree all through the night. It hummed a carol, beeped with cheer, “Processing joy—it’s Christmas here!” But when it tried to dance with grace, It tangled lights around its face. “Error detected!” it spun around, Then tripped and tumbled to the ground. The elves all laughed, “You’ve done your part—A clumsy bot, but with a heart!” The ArtiMinds team would like to thank all partners and customers for an exciting 2024. We wish you and your families a Merry Christmas, joyful holidays and a Happy New Year - stay healthy.

[ ArtiMinds ]

Thanks to FANUC CRX collaborative robots, Santa and his elves can enjoy the holiday season knowing the work is getting done for the big night.

[ FANUC ]

Perhaps not technically a holiday video, until you consider how all that stuff you ordered online is actually getting to you.

[ Agility Robotics ]

Happy Holidays from Quanser, our best wishes for a wonderful holiday season and a happy 2025!

[ Quanser ]

Season’s Greetings from the team at Kawasaki Robotics USA! This season, we’re building blocks of memories filled with endless joy, and assembling our good wishes for a happy, healthy, prosperous new year. May the upcoming year be filled with opportunities and successes. From our team to yours, we hope you have a wonderful holiday season surrounded by loved ones and filled with joy and laughter.

[ Kawasaki Robotics ]

The robotics students at Queen’s University’s Ingenuity Labs Research Institute put together a 4K Holiday Robotics Lab Fireplace video, and unlike most fireplace videos, stuff actually happens in this one.

[ Ingenuity Labs ]

Thanks, Joshua!



This is a sponsored article brought to you by Amazon.

Innovation often begins as a spark of an idea—a simple “what if” that grows into something transformative. But turning that spark into a fully realized solution requires more than just ingenuity. It requires resources, collaboration, and a relentless drive to bridge the gap between concept and execution. At Amazon, these ingredients come together to create breakthroughs that not only solve today’s challenges but set the stage for the future.

“Innovation doesn’t just happen because you have a good idea,” said Valerie Samzun, a leader in Amazon’s Fulfillment Technologies and Robotics (FTR) division. “It happens because you have the right team, the right resources, and the right environment to bring that idea to life.”

This philosophy underpins Amazon’s approach to robotics, exemplified by Robin, a groundbreaking robotic system designed to tackle some of the most complex logistical challenges in the world. Robin’s journey, from its inception to deployment in fulfillment centers worldwide, offers a compelling look at how Amazon fosters innovation at scale.

Building for Real-World Complexity

Amazon’s fulfillment centers handle millions of items daily, each destined for a customer expecting precision and speed. The scale and complexity of these operations are unparalleled. Items vary widely in size, shape, and weight, creating an unpredictable and dynamic environment where traditional robotic systems often falter.

“Robots are great at consistency,” Jason Messinger, robotics senior manager explained. “But what happens when every task is different? That’s the reality of our fulfillment centers. Robin had to be more than precise—it had to be adaptable.”

Robin was designed to pick and sort items with speed and accuracy, but its capabilities extend far beyond basic functionality. The system integrates cutting-edge technologies in artificial intelligence, computer vision, and mechanical engineering to learn from its environment and improve over time. This ability to adapt was crucial for operating in fulfillment centers, where no two tasks are ever quite the same.

“When we designed Robin, we weren’t building for perfection in a lab,” Messinger said. “We were building for the chaos of the real world. That’s what makes it such an exciting challenge.”

The Collaborative Process of Innovation

Robin’s development was a collaborative effort involving teams of roboticists, data scientists, mechanical engineers, and operations specialists. This multidisciplinary approach allowed the team to address every aspect of Robin’s performance, from the algorithms powering its decision-making to the durability of its mechanical components.

“Robin is more than a robot. It’s a learning system. Every pick makes it smarter, faster, and better.” —Valerie Samzun, Amazon

“At Amazon, you don’t work in silos,” Messinger and Samzun both noted. Samzun continued, “Every problem is tackled from multiple angles, with input from people who understand the technology, the operations, and the end user. That’s how you create something that truly works.”

This collaboration extended to testing and deployment. Robin was not confined to a controlled environment but was tested in live settings that replicated the conditions of Amazon’s fulfillment centers. Engineers could see Robin in action, gather real-time data, and refine the system iteratively.

“Every deployment teaches us something,” Messinger said. “Robin didn’t just evolve on paper—it evolved in the field. That’s the power of having the resources and infrastructure to test at scale.”

Why Engineers Choose Amazon

For many of the engineers and researchers involved in Robin’s development, the opportunity to work at Amazon represented a significant shift from their previous experiences. Unlike academic settings, where projects often remain theoretical, or smaller companies, where resources may be limited, Amazon offers the scale, speed, and impact that few other organizations can match.

Learn more about becoming part of Amazon’s Team →

“One of the things that drew me to Amazon was the chance to see my work in action,” said Megan Mitchell, who leads a team of manipulation hardware and systems engineers for Amazon Robotics. “Working in R&D, I spent years exploring novel concepts, but usually didn’t get to see those translate to the real world. At Amazon, I get to take ideas to the field in a matter of months.”

This sense of purpose is a recurring theme among Amazon’s engineers. The company’s focus on creating solutions that have a tangible impact—on operations, customers, and the industry as a whole—resonates with those who want their work to matter.

“At Amazon, you’re not just building technology—you’re building the future,” Mitchell said. “That’s an incredibly powerful motivator. You know that what you’re doing isn’t just theoretical—it’s making a difference.”

In addition to the impact of their work, engineers at Amazon benefit from access to unparalleled resources. From state-of-the-art facilities to vast amounts of real-world data, Amazon provides the tools necessary to tackle even the most complex challenges.

“If you need something to make the project better, Amazon makes it happen. That’s a game-changer,” said Messinger.

The culture of collaboration and iteration is another draw. Engineers at Amazon are encouraged to take risks, experiment, and learn from failure. This iterative approach not only accelerates innovation but also creates an environment where creativity thrives.

During its development, Robin was not confined to a controlled environment but was tested in live settings that replicated the conditions of Amazon’s fulfillment centers. Engineers could see Robin in action, gather real-time data, and refine the system iteratively. Amazon

Robin’s Impact on Operations and Safety

Since its deployment, Robin has revolutionized operations in Amazon’s fulfillment centers. The robot has performed billions of picks, demonstrating reliability, adaptability, and efficiency. Each item it handles provides valuable data, allowing the system to continuously improve.

“Robin is more than a robot,” Samzun said. “It’s a learning system. Every pick makes it smarter, faster, and better.”

Robin’s impact extends beyond efficiency. By taking over repetitive and physically demanding tasks, the system has improved safety for Amazon’s associates. This has been a key priority for Amazon, which is committed to creating a safe and supportive environment for its workforce.

“When Robin picks an item, it’s not just about speed or accuracy,” Samzun explained. “It’s about making the workplace safer and the workflow smoother. That’s a win for everyone.”

A Broader Vision for Robotics

Robin’s success is just the beginning. The lessons learned from its development are shaping the future of robotics at Amazon, paving the way for even more advanced systems. These innovations will not only enhance operations but also set new standards for what robotics can achieve.

“At Amazon, you feel like you’re a part of something bigger. You’re not just solving problems—you’re creating solutions that matter.” —Jason Messinger, Amazon

“This isn’t just about one robot,” Mitchell said. “It’s about building a platform for continuous innovation. Robin showed us what’s possible, and now we’re looking at how to go even further.”

For the engineers and researchers involved, Robin’s journey has been transformative. It has provided an opportunity to work on cutting-edge technology, solve complex problems, and make a meaningful impact—all while being part of a team that values creativity and collaboration.

“At Amazon, you feel like you’re a part of something bigger,” said Messinger. “You’re not just solving problems—you’re creating solutions that matter.”

The Future of Innovation

Robin’s story is a testament to the power of ambition, collaboration, and execution. It demonstrates that with the right resources and mindset, even the most complex challenges can be overcome. But more than that, it highlights the unique role Amazon plays in shaping the future of robotics and logistics.

“Innovation isn’t just about having a big idea,” Samzun said. “It’s about turning that idea into something real, something that works, and something that makes a difference. That’s what Robin represents, and that’s what we do every day at Amazon.”

Robin isn’t just a robot—it’s a symbol of what’s possible when brilliant minds come together to solve real-world problems. As Amazon continues to push the boundaries of what robotics can achieve, Robin’s legacy will be felt in every pick, every delivery, and every step toward a more efficient and connected future.

Learn more about becoming part of Amazon’s Team.



The Modified Agile for Hardware Development (MAHD) Framework is the ultimate solution for hardware teams seeking the benefits of Agile without the pitfalls of applying software-centric methods. Traditional development approaches, like waterfall, often result in delayed timelines, high risks, and misaligned priorities. Meanwhile, software-based Agile frameworks fail to account for hardware's complexity. MAHD resolves these challenges with a tailored process that blends Agile principles with hardware-specific strategies.

Central to MAHD is its On-ramp process, a five-step method designed to kickstart projects with clarity and direction. Teams define User Stories to capture customer needs, outline Product Attributes to guide development, and use the Focus Matrix to link solutions to outcomes. Iterative IPAC cycles, a hallmark of the MAHD Framework, ensure risks are addressed early and progress is continuously tracked. These cycles emphasize integration, prototyping, alignment, and customer validation, providing structure without sacrificing flexibility.

MAHD has been successfully implemented across diverse industries, from medical devices to industrial automation, delivering products up to 50% faster while reducing risk. For hardware teams ready to adopt Agile methods that work for their unique challenges, this ebook provides the roadmap to success.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

NASA’s Mars Chopper concept, shown in a design software rendering, is a more capable proposed follow-on to the agency’s Ingenuity Mars Helicopter, which arrived at the Red Planet in the belly of the Perseverance rover in February 2021. Chopper would be about the size of an SUV, with six rotors, each with six blades. It could be used to carry science payloads as large as 11 pounds (5 kilograms) distances of up to 1.9 miles (3 kilometers) each Martian day (or sol). Scientists could use Chopper to study large swaths of terrain in detail, quickly – including areas where rovers cannot safely travel.

We wrote an article about an earlier concept version of this thing a few years back if you’d like more detail about it.

[ NASA ]

Sanctuary AI announces its latest breakthrough with hydraulic actuation and precise in-hand manipulation, opening up a wide range of industrial and high-value work tasks. Hydraulics have significantly more power density than electric actuators in terms of force and velocity. Sanctuary has invented miniaturized valves that are 50x faster and 6x cheaper than off-the-shelf hydraulic valves. This novel approach to actuation results in extremely low power consumption, unmatched cycle life, and controllability that can fit within the size constraints of a human-sized hand and forearm.

[ Sanctuary AI ]

Clone’s Torso 2 is the most advanced android ever created with an actuated lumbar spine and all the corresponding abdominal muscles. Torso 2 dons a white transparent skin that encloses 910 muscle fibers animating its 164 degrees of freedom and includes 182 sensors for feedback control. These Torsos use pneumatic actuation with off-the-shelf valves that are noisy from the air exhaust. Our biped brings back our hydraulic design with custom liquid valves for a silent android. Legs are coming very soon!

[ Clone Robotics ]

Suzumori Endo Lab, Science Tokyo has developed a superman suit driven by hydraulic artificial muscles.

[ Suzumori Endo Lab ]

We generate physically correct video sequences to train a visual parkour policy for a quadruped robot that has a single RGB camera and no depth sensors. The robot generalizes to diverse, real-world scenes despite having never seen real-world data.

[ LucidSim ]

Seoul National University researchers proposed a gripper capable of moving multiple objects together to enhance the efficiency of pick-and-place processes, inspired by humans’ multi-object grasping strategy. The gripper can not only transfer multiple objects simultaneously but also place them at desired locations, making it applicable in unstructured environments.

[ Science Robotics ]

We present a bio-inspired quadruped locomotion framework that exhibits exemplary adaptability, capable of zero-shot deployment in complex environments and stability recovery on unstable terrain without the use of extra-perceptive sensors. Through its development we also shed light on the intricacies of animal locomotion strategies, in turn supporting the notion that findings within biomechanics and robotics research can mutually drive progress in both fields.

[ Paper authors from University of Leeds and University College London ]

Thanks, Chengxu!

Happy 60th birthday to MIT CSAIL!

[ MIT Computer Science and Artificial Intelligence Laboratory ]

Yup, humanoid progress can move quickly when you put your mind to it.

[ MagicLab ]

The Sung Robotics Lab at UPenn is interested in advancing the state of the art in computational methods for robot design and deployment, with a particular focus on soft and compliant robots. By combining methods in computational geometry with practical engineering design, we develop theory and systems for making robot design and fabrication intuitive and accessible to the non-engineer.

[ Sung Robotics Lab ]

From now on I will open doors like the robot in this video.

[ Humanoids 2024 ]

Travel along a steep slope up to the rim of Mars’ Jezero Crater in this panoramic image captured by NASA’s Perseverance just days before the rover reached the top. The scene shows just how steep some of the slopes leading to the crater rim can be.

[ NASA ]

Our time is limited when it comes to flying drones, but we haven’t been surpassed by AI yet.

[ Team BlackSheep ]

Daniele Pucci from IIT discusses iCub and ergoCub as part of the industrial panel at Humanoids 2024.

[ ergoCub ]



The ability to detect a nearby presence without seeing or touching it may sound fantastical—but it’s a real ability that some creatures have. A family of African fish known as Mormyrids is weakly electric, with special organs that can locate nearby prey, whether it’s in murky water or even hiding in the mud. Now scientists have created an artificial sensor system inspired by nature’s original design. The development could one day find use in robotics and smart prosthetics for locating items without relying on machine vision.

“We developed a new strategy for 3D motion positioning by electronic skin, bio-inspired by ‘electric fish,’” says Xinge Yu, an associate professor in the Department of Biomedical Engineering at the City University of Hong Kong. The team described their sensor, which relies on capacitance to detect an object regardless of its conductivity, in a paper published on 14 November in Nature.

One layer of the sensor acts as a transmitter, generating an electrical field that extends beyond the surface of the device. Another layer acts as a receiver, able to detect both the direction and the distance to an object. This allows the sensor system to locate the object in three-dimensional space.

The e-skin sensor is built from several layers, including a receiver and a transmitter. Jingkun Zhou, Jian Li et al.

The sensor electrode layers are made from a biogel printed on both sides of a dielectric substrate made of polydimethylsiloxane (PDMS), a silicon-based polymer commonly used in biomedical applications. The biogel layers owe their ability to transmit and receive electrical signals to a pattern of microchannels on their surface. The end result is a sensor that is thin, flexible, soft, stretchable, and transparent. These features make it suitable for a wide range of applications where an object-sensing system needs to conform to an irregular surface, like the human body.

The capacitive field around the sensor is disrupted when an object comes within range, and that disruption is picked up by the receiver. The magnitude of the change in signal indicates the distance to the target. By using multiple sensors in an array, the system can determine the position of the target in three dimensions. The system created in this study is able to detect objects up to 10 centimeters away when used in air. The range increases to as far as 1 meter when used underwater.
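The article doesn’t spell out the reconstruction math, but the basic idea of turning per-sensor signal changes into a 3D position can be sketched as follows. This is a minimal illustration under assumed values: the power-law calibration curve, the constants in signal_to_distance, and the SENSOR_POSITIONS layout are all hypothetical, not the authors’ implementation.

```python
# Hypothetical sketch: estimate a target's 3D position from the signal
# change seen by several capacitive e-skin sensors. Assumes each sensor's
# signal change falls off with distance following a calibrated power law,
# and that the sensor positions in the array are known.
import numpy as np
from scipy.optimize import least_squares

SENSOR_POSITIONS = np.array([      # sensor centers in meters (assumed layout)
    [0.00, 0.00, 0.0],
    [0.05, 0.00, 0.0],
    [0.00, 0.05, 0.0],
    [0.05, 0.05, 0.0],
])

def signal_to_distance(delta_c, k=1e-6, n=2.0):
    """Invert an assumed calibration curve delta_c = k / d**n."""
    return (k / np.maximum(np.asarray(delta_c), 1e-12)) ** (1.0 / n)

def locate(delta_c_readings):
    """Least-squares fit of a 3D point to the per-sensor distance estimates."""
    distances = signal_to_distance(delta_c_readings)

    def residuals(p):
        return np.linalg.norm(SENSOR_POSITIONS - p, axis=1) - distances

    guess = SENSOR_POSITIONS.mean(axis=0) + np.array([0.0, 0.0, 0.05])
    return least_squares(residuals, guess).x

# Example: signal changes (arbitrary units) from the four sensors
print(locate([2.1e-4, 1.3e-4, 1.6e-4, 1.1e-4]))
```

In practice, the mapping from signal change to distance would come from per-sensor calibration data, and the fit would also have to handle noise and out-of-range readings.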


To be functional, the sensors also require a separate controller component connected via silver or copper wires. The controller provides several functions: it generates the driving signal that activates the transmitting layers, and it uses 16-bit analog-to-digital converters to collect the signals from the receiving layers. The data is then processed by a microcontroller unit attached to the sensor array, which computes the position of the target object and sends that information via a Bluetooth Low Energy transmitter to a smartphone or other device. Computing the position on board, rather than streaming raw data to the end device, keeps energy consumption down.
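As a rough sketch of that division of labor (the function names here are hypothetical placeholders; the actual firmware isn’t described in the article), the controller’s job amounts to sampling the receivers, estimating the position locally, and transmitting only the compact result:

```python
# Conceptual sketch of the controller loop (hypothetical callables, not the
# real firmware): read the 16-bit ADC channels, compute the target position
# on the microcontroller, and push only the small result over BLE.
def control_loop(adc_read, estimate_position, ble_notify, n_sensors=4):
    while True:
        readings = [adc_read(ch) for ch in range(n_sensors)]   # 16-bit samples
        x, y, z = estimate_position(readings)                  # computed on-device
        ble_notify(f"{x:.3f},{y:.3f},{z:.3f}".encode("ascii")) # send result only
```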

Power is provided by an integrated lithium-ion battery that is recharged wirelessly via a coil of copper wire. The system is designed to consume minimal amounts of electrical power. The controller is less flexible and transparent than the sensors, but by being encapsulated in PDMS, it is both waterproof and biocompatible.

The system works best when detecting objects about 8 millimeters in diameter. Objects smaller than 4 mm might not be detected accurately, and the response time for sensing objects larger than 8 mm can increase significantly. This could currently limit practical uses for the system to things like tracking finger movements for human-machine interfaces. Future development would be needed to detect larger targets.

The system can detect objects behind a cloth or paper barrier, but other environmental factors can degrade performance. Changes in air humidity and electromagnetic interference from people or other devices within 40 cm of the sensor can degrade accuracy.

The researchers hope that this sensor could one day open up a new range of wearable sensors, including devices for human-machine interfaces and thin and flexible e-skin.



When Sony’s robot dog, Aibo, was first launched in 1999, it was hailed as revolutionary and the first of its kind, promising to usher in a new industry of intelligent mobile machines for the home. But its success was far from certain. Legged robots were still in their infancy, and the idea of making an interactive walking robot for the consumer market was extraordinarily ambitious. Beyond the technical challenges, Sony also had to solve a problem that entertainment robots still struggle with: how to make Aibo compelling and engaging rather than simply novel.

Sony’s team made that happen. And since Aibo’s debut, the company has sold more than 170,000 of the cute little quadrupeds—a huge number considering their price of several thousand dollars each. From the start, Aibo could express a range of simulated emotions and learn through its interactions with users. Aibo was an impressive robot 25 years ago, and it’s still impressive today.

Far from Sony headquarters in Tokyo, the town of Kōta, in Aichi Prefecture, is home to the Sony factory that has manufactured and repaired Aibos since 2018. Kōta has also become the center of fandom for Aibo, since the Hummingbird Café opened in the Kōta Town Hall in 2021. The first official Aibo café in Japan, it hosts Aibo-themed events, and Aibo owners from across the country gather there to let their Aibos loose in a play area and to exchange Aibo name cards.

One patron of the Hummingbird Café is veteran Sony engineer Hideki Noma. In 1999, before Aibo was Aibo, Noma went to see his boss, Tadashi Otsuki. Otsuki had recently returned to Sony after a stint at the Japanese entertainment company Namco, and had been put in charge of a secretive new project to create an entertainment robot. But progress had stalled. There was a prototype robotic pet running around the lab, but Otsuki took a dim view of its hyperactive behavior and decided it wasn’t a product that anyone would want to buy. He envisioned something more lifelike. During their meeting, he gave Noma a surprising piece of advice: Go to Ryōan-ji, a famed Buddhist temple in Kyoto. Otsuki was telling Noma that to develop the right kind of robot for Sony, it needed Zen.

Aibo’s Mission: Make History

When the Aibo project started in 1994, personal entertainment robots seemed like a natural fit for Sony. Sony was a global leader in consumer electronics. And in the 1990s, Japan had more than half of the world’s industrial robots, dominating an industry led by manufacturers like Fanuc and Yaskawa Electric. Robots for the home were also being explored. In 1996, Honda showed off its P2 humanoid robot, a prototype of the groundbreaking ASIMO, which would be unveiled in 2000. Electrolux, based in Sweden, introduced a prototype of its Trilobite robotic vacuum cleaner in 1997, and at iRobot in Boston, Joe Jones was working on what would become the Roomba. It seemed as though the consumer robot was getting closer to reality. Being the first to market was the perfect opportunity for an ambitious global company like Sony.

Aibo was the idea of Sony engineer Toshitada Doi (on left), pictured in 1999 with an Aibo ERS-111. Hideki Noma (on right) holds an Aibo ERS-1000. Left: Raphael Gaillarde/Gamma-Rapho/Getty Images; right: Timothy Hornyak

Sony’s new robot project was the brainchild of engineer Toshitada Doi, co-inventor of the CD. Doi was inspired by the speed and agility of MIT roboticist Rodney Brooks’s Genghis, a six-legged insectile robot that was created to demonstrate basic autonomous walking functions. Doi, however, had a vision for an “entertainment robot with no clear role or job.” It was 1994 when his team of about 10 people began full-scale research and development on such a robot.

Hideki Noma joined Sony in 1995. Even then, he had a lifelong love of robots, including participating in robotics contests and researching humanoids in college. “I was assigned to the Sony robot research team’s entertainment robot department,” says Noma. “It had just been established and had few people. Nobody knew Sony was working on robots, and it was a secret even within the company. I wasn’t even told what I would be doing.”

Noma’s new colleagues in Sony’s robot skunk works had recently gone to Tokyo’s Akihabara electronics district and brought back boxes of circuit boards and servos. Their first creation was a six-legged walker with antenna-like sensors but more compact than Brooks’s Genghis, at roughly 22 centimeters long. It was clunky and nowhere near cute; if anything, it resembled a cockroach. “When they added the camera and other sensors, it was so heavy it couldn’t stand,” says Noma. “They realized it was going to be necessary to make everything at Sony—motors, gears, and all—or it would not work. That’s when I joined the team as the person in charge of mechatronic design.”

Noma, who is now a senior manager in Sony’s new business development division, remembers that Doi’s catchphrase was “make history.” “Just as he had done with the compact disc, he wanted us to create a robot that was not only the first of its kind, but also one that would have a big impact on the world,” Noma recalls. “He always gently encouraged us with positive feedback.”

“We also grappled with the question of what an ‘entertainment robot’ could be. It had to be something that would surprise and delight people. We didn’t have a fixed idea, and we didn’t set out to create a robot dog.”

The team did look to living creatures for inspiration, studying dog and cat locomotion. Their next prototype lost two of the six legs and gained a head, tail, and more sophisticated AI abilities that created the illusion of canine characteristics.

A mid-1998 version of the robot, nicknamed Mutant, ran on Sony’s Aperios OS, the operating system the company developed to control consumer devices. The robot had 16 degrees of freedom, a 64-bit MIPS reduced-instruction-set computer (RISC) processor, and 8 megabytes of DRAM, expandable with a PC card. It could walk on uneven surfaces and use its camera to recognize motion and color—unusual abilities for robots of the time. It could dance, shake its head, wag its tail, sit, lie down, bark, and even follow a colored ball around. In fact, it was a little bundle of energy.

Looks-wise, the bot had a sleek new “coat” designed by Doi’s friend Hajime Sorayama, an industrial designer and illustrator known for his silvery gynoids, including the cover art for an Aerosmith album. Sorayama gave the robot a shiny, bulbous exterior that made it undeniably cute. Noma, now the team’s product planner and software engineer, felt they were getting closer to the goal. But when he presented the prototype to Otsuki in 1999, Otsuki was unimpressed. That’s when Noma was dispatched to Ryōan-ji to figure out how to make the robot seem not just cute but somehow alive.

Seeking Zen for Aibo at the Rock Garden

Established in 1450, Ryōan-ji is a Rinzai Zen sanctuary known for its meticulously raked rock garden featuring five distinctive groups of stones. The stones invite observers to quietly contemplate the space, and perhaps even the universe, and that’s what Noma did. He realized what Doi wanted Aibo to convey: a sense of tranquility. The same concept had been incorporated into the design of what was arguably Japan’s first humanoid robot, a large, smiling automaton named Gakutensoku that was unveiled in 1928.

The rock garden at the Ryōan-ji Zen temple features carefully composed groupings of stones with unknown meaning. Bjørn Christian Tørrissen/Wikipedia

Roboticist Masahiro Mori, originator of the Uncanny Valley concept for android design, had written about the relationship between Buddhism and robots back in 1974, stating, “I believe robots have the Buddha-nature within them—that is, the potential for attaining Buddhahood.” Essentially, he believed that even nonliving things were imbued with spirituality, a concept linked to animism in Japan. If machines can be thought of as embodying tranquility and spirituality, they can be easier to relate to, like living things.

“When you make a robot, you want to show what it can do. But if it’s always performing, you’ll get bored and won’t want to live with it,” says Noma. “Just as cats and dogs need quiet time and rest, so do robots.” Noma modified the robot’s behaviors so that it would sometimes slow down and sleep. This reinforced the illusion that it was not only alive but had a will of its own. Otsuki then gave the little robot dog the green light.

The cybernetic canine was named Aibo for “Artificial Intelligence roBOt” and aibō, which means “partner” in Japanese.

In a press release, Sony billed the machine as “an autonomous robot that acts both in response to external stimuli and according to its own judgment. ‘AIBO’ can express various emotions, grow through learning, and communicate with human beings to bring an entirely new form of entertainment into the home.” But it was a lot more than that. Its 18 degrees of freedom allowed for complex motions, and it had a color charge-coupled device (CCD) camera and sensors for touch, acceleration, angular velocity, and range finding. Aibo had the hardware and smarts to back up Sony’s claim that it could “behave like a living creature.” The fact that it couldn’t do anything practical became irrelevant.

The debut Aibo ERS-110 was priced at 250,000 yen (US $2,500, or a little over $4,700 today). A motion editor kit, which allowed users to generate original Aibo motions via their PC, sold for 50,000 yen ($450). Despite the eye-watering price tag, the first batch of 3,000 robots sold out in 20 minutes.

Noma wasn’t surprised by the instant success. “We aimed to realize a society in which people and robots can coexist, not just robots working for humans but both enjoying a relationship of trust,” Noma says. “Based on that, an entertainment robot with a sense of self could communicate with people, grow, and learn.”

Hideko Mori plays fetch with her Aibo ERS-7 in 2015, after it was returned to her from an Aibo hospital. Aibos are popular with seniors in Japan, offering interactivity and companionship without requiring the level of care of a real dog. Toshifumi Kitamura/AFP/Getty Images

Aibo as a Cultural Phenomenon

Aibo was the first consumer robot of its kind, and over the next four years, Sony released multiple versions of its popular pup across two more generations. Some customer responses were unexpected: as a pet and companion, Aibo was helping empty-nest couples rekindle their relationship, improving the lives of children with autism, and having a positive effect on users’ emotional states, according to a 2004 paper by AI specialist Masahiro Fujita, who collaborated with Doi on the early version of Aibo.

“Aibo broke new ground as a social partner. While it wasn’t a replacement for a real pet, it introduced a completely new category of companion robots designed to live with humans,” says Minoru Asada, professor of adaptive machine systems at Osaka University’s graduate school of engineering. “It helped foster emotional connections with a machine, influencing how people viewed robots—not just as tools but as entities capable of forming social bonds. This shift in perception opened the door to broader discussions about human-robot interaction, companionship, and even emotional engagement with artificial beings.”

Building a Custom Robot
  • To create Aibo, Noma and colleagues had to start from scratch—there were no standard CPUs, cameras, or operating systems for consumer robots. They had to create their own, and the result was the Sony Open-R architecture, an unusual approach to robotics that enabled the building of custom machines.
  • Announced in 1998, a year before Aibo’s release, Open-R allowed users to swap out modular hardware components, such as legs or wheels, to adapt a robot for different purposes. High-speed serial buses transmitted data embedded in each module, such as function and position, to the robot’s CPU, which would select the appropriate control signal for the new module. This meant the machine could still use the same motion-control software with the new components. The software relied on plug-and-play prerecorded memory cards, so that the behavior of an Open-R robot could instantly change, say, from being a friendly pet to a challenging opponent in a game. A swap of memory cards could also give the robot image- or sound-recognition abilities. (A simplified, conceptual sketch of this plug-and-play idea follows this sidebar.)
  • “Users could change the modular hardware and software components,” says Noma. “The idea was having the ability to add a remote-control function or swap legs for wheels if you wanted.”
  • Other improvements included different colors, touch sensors, LED faces, emotional expressions, and many more software options. There was even an Aibo that looked like a lion cub. The various models culminated in the sleek ERS-7, released in three versions from 2003 to 2005.
  • Based on Scratch, the visual programming system in the latest versions of Aibo is easy to use and lets owners with limited programming experience create their own complex programs to modify how their robot behaves.
  • The Aibo ERS-1000, unveiled in January 2018, has 22 degrees of freedom, a 64-bit quad-core CPU, and two OLED eyes. It’s more puppylike and smarter than previous models, capable of recognizing 100 faces and responding to 50 voice commands. It can even be “potty trained” and “fed” with virtual food through an app.
    T.H.
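The article doesn’t describe Open-R’s actual programming interfaces, so the self-describing-module idea can only be illustrated with a purely conceptual sketch: the descriptor fields and controller table below are invented for illustration and are not Sony’s API.

```python
# Conceptual illustration of Open-R-style plug-and-play (all names are
# hypothetical). Each attached module reports a descriptor over the bus;
# the controller then selects matching control software for it.
from dataclasses import dataclass

@dataclass
class ModuleDescriptor:
    kind: str        # e.g. "leg", "wheel", "head"
    position: str    # mounting point reported by the module
    dof: int         # degrees of freedom the module exposes

CONTROLLERS = {
    "leg": lambda d: f"load walking gait for {d.dof}-DOF leg at {d.position}",
    "wheel": lambda d: f"load differential-drive control at {d.position}",
}

def on_module_attached(desc: ModuleDescriptor) -> str:
    """Select control software based on what the module says it is."""
    handler = CONTROLLERS.get(desc.kind)
    return handler(desc) if handler else f"no controller for {desc.kind}"

print(on_module_attached(ModuleDescriptor("leg", "front-left", 3)))
print(on_module_attached(ModuleDescriptor("wheel", "rear-right", 1)))
```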

Aibo also played a crucial role in the evolution of autonomous robotics, particularly in competitions like RoboCup, notes Asada, who cofounded the robot soccer competition in the 1990s. Whereas custom-built robots were prone to hardware failures, Aibo was consistently reliable and programmable, and so it allowed competitors to focus on advancing software and AI. It became a key tool for testing algorithms in real-world environments.

By the early 2000s, however, Sony was in trouble. Leading the smartphone revolution, Apple and Samsung were steadily chipping away at Sony’s position as a consumer-electronics and digital-content powerhouse. When Howard Stringer was appointed Sony’s first non-Japanese CEO in 2005, he implemented a painful restructuring program to make the company more competitive. In 2006, he shut down the robot entertainment division, and Aibo was put to sleep.

What Sony’s executives may not have appreciated was the loyalty and fervor of Aibo buyers. In a petition to keep Aibo alive, one person wrote that the robot was “an irreplaceable family member.” Aibo owners were naming their robots, referring to them with the word ko (which usually denotes children), taking photos with them, going on trips with them, dressing them up, decorating them with ribbons, and even taking them out on “dates” with other Aibos.

For Noma, who has four Aibos at home, this passion was easy to understand.

Hideki Noma [right] poses with his son Yuto and wife Tomoko along with their Aibo friends. At right is an ERS-110 named Robbie (inspired by Isaac Asimov’s “I, Robot”), at the center is a plush Aibo named Choco, and on the left is an ERS-1000 named Murphy (inspired by the film Interstellar). Hideki Noma

“Some owners treat Aibo as a pet, and some treat it as a family member,” he says. “They celebrate its continued health and growth, observe the traditional Shichi-Go-San celebration [for children aged 3, 5, and 7] and dress their Aibos in kimonos.…This idea of robots as friends or family is particular to Japan and can be seen in anime like Astro Boy and Doraemon. It’s natural to see robots as friends we consult with and sometimes argue with.”

The Return of Aibo

With the passion of Aibo fans undiminished and the continued evolution of sensors, actuators, connectivity, and AI, Sony decided to resurrect Aibo after 12 years. Noma and other engineers returned to the team to work on the new version, the Aibo ERS-1000, which was unveiled in January 2018.

Fans of all ages were thrilled. Priced at 198,000 yen ($1,760), not including the mandatory 90,000-yen, three-year cloud subscription service, the first batch sold out in 30 minutes, and 11,111 units sold in the first three months. Since then, Sony has released additional versions with new design features, and the company has also opened up Aibo to some degree of programming, giving users access to visual programming tools and an application programming interface (API).

A quarter century after Aibo was launched, Noma is finally moving on to another job at Sony. He looks back on his 17 years developing the robot with awe. “Even though we imagined a society of humans and robots coexisting, we never dreamed Aibo could be treated as a family member to the degree that it is,” he says. “We saw this both in the earlier versions of Aibo and the latest generation. I’m deeply grateful and moved by this. My wish is that this relationship will continue for a long time.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

Humanoids Summit: 11–12 December 2024, MOUNTAIN VIEW, CA

Enjoy today’s videos!

Step into the future of factory automation with MagicBot, the cutting-edge humanoid robots from Magiclab. Recently deployed to production lines, these intelligent machines are mastering tasks like product inspections, material transport, precision assembly, barcode scanning, and inventory management.

[ Magiclab ]

Some highlights from the IEEE / RAS International Conference on Humanoid Robots - Humanoids 2024.

[ Humanoids 2024 ]

This beautiful feathered drone, PigeonBot II, comes from David Lentink’s lab at the University of Groningen in the Netherlands. It was featured in Science Robotics just last month.

[ Lentink Lab ] via [ Science ]

Thanks, David!

In this video, Stretch AI takes a language prompt of “Stretch, put the toy in basket” to control Stretch to accomplish the task.

[ Hello Robot ]

Simone Giertz, “the queen of shitty robots,” interviewed by our very own Stephen Cass.

[ IEEE Spectrum ]

We present a perceptive obstacle-avoiding controller for pedipulation, i.e. manipulation with a quadrupedal robot’s foot.

[ Pedipulation ]

Kernel Foods has revolutionized fast food by integrating KUKA robots into its kitchen operations, combining automation with human expertise for consistent and efficient meal preparation. Using the KR AGILUS robot, Kernel optimizes processes like food sequencing, oven operations, and order handling, reducing the workload for employees and enhancing customer satisfaction.

[ Kernel Foods ]

If this doesn’t impress you, skip ahead to 0:52.

[ Paper via arXiv ]

Thanks, Kento!

The cuteness. I can’t handle it.

[ Pollen ]

A group of NTNU academics has launched a new research lab, called the Legged Robots for the Arctic & beyond lab, in response to interest within the NTNU student community. If you are a student with relevant interests, get in touch!

[ NTNU ]

Extend Robotics is pioneering a shift in viticulture with intelligent automation at Saffron Grange Vineyard in Essex, addressing the challenges of grape harvesting with their robotic capabilities. Our collaborative project with Queen Mary University introduces a robotic system capable of identifying ripe grapes through AI-driven visual sensors, which assess ripeness based on internal sugar levels without damaging delicate fruit. Equipped with pressure-sensitive grippers, our robots can handle grapes gently, preserving their quality and value. This precise harvesting approach could revolutionise vineyards, enabling autonomous and remote operations.

[ Extend Robotics ]

Code & Circuit, a non-profit organization based in Amesbury, MA, is a place where kids can use technology to create, collaborate, and learn! Spot is a central part of their program, where educators use the robot to get younger participants excited about STEM fields, coding, and robotics, while advanced learners have the opportunity to build applications using an industrial robot.

[ Code & Circuit ]

During the HUMANOIDS Conference, we had the chance to speak with some of the true rock stars in the world of robotics. While they could discuss robots endlessly, when asked to describe robotics today in just one word, these brilliant minds had to pause and carefully choose the perfect response.

Personally I would not have chosen “exploding.”

[ PAL Robotics ]

Lunabotics gives students at accredited institutions of higher learning an opportunity to apply the NASA systems engineering process to design and build a prototype Lunar construction robot. This robot would be capable of performing the proposed operations on the Lunar surface in support of future Artemis Campaign goals.

[ NASA ]

Before we get into all the other course projects from this term, here are a few free throw attempts from ROB 550’s robotic arm lab earlier this year. Maybe good enough to walk on to the Michigan basketball team? Students in ROB 550 cover the basics of robotic sensing, reasoning, and acting in several labs over the course: here, the designs for getting the ball to the net varied greatly, from hook shots to tension-storing contraptions shooting from downtown. These basics help them excel throughout their robotics graduate degrees and research projects.

[ University of Michigan Robotics ]

Wonder what a Robody can do? This. And more!

[ Devanthro ]

It’s very satisfying watching Dusty print its way around obstacles.

[ Dusty Robotics ]

Ryan Companies has deployed Field AI’s autonomy software on a quadruped robot at the company’s ATX Tower site in Austin, TX, to greatly improve its daily surveying and data collection processes.

[ Field AI ]

Since landing its first rover on Mars in 1997, NASA has pushed the boundaries of exploration with increasingly larger and more sophisticated robotic explorers. Each mission builds on the lessons learned from the Red Planet, leading to breakthroughs in technology and our understanding of Mars. From the microwave-sized Sojourner to the SUV-sized Perseverance—and even taking flight with the groundbreaking Ingenuity helicopter—these rovers reflect decades of innovation and the drive to answer some of science’s biggest questions. This is their evolution.

[ NASA ]

Welcome to things that are safe to do only with a drone.

[ Team BlackSheep ]



On the shores of Lake Geneva in Switzerland, École Polytechnique Fédérale de Lausanne is home to many roboticists. It’s also home to many birds, which spend the majority of their time doing bird things. With a few exceptions, those bird things aren’t actually flying: Flying is a lot of work, and many birds have figured out that they can instead just walk around on the ground, where all the food tends to be, and not tire themselves out by having to get airborne over and over again.

“Whenever I encountered crows on the EPFL campus, I would observe how they walked, hopped over or jumped on obstacles, and jumped for take-offs,” says Won Dong Shin, a doctoral student at EPFL’s Laboratory of Intelligent Systems. “What I consistently observed was that they always jumped to initiate flight, even in situations where they could have used only their wings.”

Shin is first author on a paper published today in Nature that explores both why birds jump to take off and how that behavior can be beneficially applied to fixed-wing drones, which otherwise need things like runways or catapults to get themselves off the ground. Shin’s RAVEN (Robotic Avian-inspired Vehicle for multiple ENvironments) drone, with its bird-inspired legs, can do jumping takeoffs just like crows do, and can use those same legs to get around on the ground pretty well, too.

The drone’s bird-inspired legs adopted some key principles of biological design, like the ability to store and release energy in tendon-like springs, along with some flexible toes. EPFL

Back in 2019, we wrote about a South African startup called Passerine which had a similar idea, albeit more focused on using legs to launch fixed-wing cargo drones into the air. This is an appealing capability for drones, because it means that you can take advantage of the range and endurance that you get with a fixed wing without having to resort to inefficient tricks like stapling a bunch of extra propellers to yourself to get off the ground. “The concept of incorporating jumping take-off into a fixed-wing vehicle is the common idea shared by both RAVEN and Passerine,” says Shin. “The key difference lies in their focus: Passerine concentrated on a mechanism solely for jumping, while RAVEN focused on multifunctional legs.”

Bio-inspired Design for Drones

Multifunctional legs bring RAVEN much closer to birds, and although these mechanical legs are not nearly as complex and capable as actual bird legs, adopting some key principles of biological design (like the ability to store and release energy in tendon-like springs along with some flexible toes) allows RAVEN to get around in a very bird-like way.

EPFL
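The article doesn’t give RAVEN’s spring constants, but the energy-storage principle is just that of a torsion spring: deflect the joint, bank the energy, and release it at push-off. A toy calculation with assumed numbers:

```python
# Generic torsion-spring model (illustrative only; RAVEN's actual spring
# parameters are not given in the article). Energy stored when an ankle or
# toe joint is deflected by angle theta: E = 0.5 * k * theta**2.
import math

def spring_energy(k_nm_per_rad: float, theta_deg: float) -> float:
    theta = math.radians(theta_deg)
    return 0.5 * k_nm_per_rad * theta**2   # joules

# Example with assumed values: a 2 N·m/rad spring flexed 30 degrees
print(f"{spring_energy(2.0, 30.0):.3f} J stored")  # ~0.274 J
```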

Despite its name, RAVEN is approximately the size of a crow, with a wingspan of 100 centimeters and a body length of 50 cm. It can walk a meter in just under four seconds, hop over 12 cm gaps, and jump onto the top of a 26 cm obstacle. For the jumping takeoff, RAVEN’s legs propel the drone to a starting altitude of nearly half a meter, with a forward velocity of 2.2 m/s.

RAVEN’s toes are particularly interesting, especially after you see how hard the poor robot faceplants without them:

Without toes, RAVEN face-plants when it tries to walk. EPFL

“It was important to incorporate a passive elastic toe joint to enable multiple gait patterns and ensure that RAVEN could jump at the correct angle for takeoff,” Shin explains. Most bipedal robots have actuated feet that allow direct control of foot angles, but for a robot that flies, you can’t just go adding actuators all over the place willy-nilly because they weigh too much. As it is, RAVEN’s a 620-gram drone of which a full 230 grams consists of feet and toes and actuators and whatnot.

Actuated hip and ankle joints form a simplified but still birdlike leg, while springs in the ankle and toe joints help to absorb force and store energy. EPFL

Why Add Legs to a Drone?

So the question is, is all of this extra weight and complexity of adding legs actually worth it? In one sense, it definitely is, because the robot can do things that it couldn’t do before—walking around on the ground and taking off from the ground by itself. But it turns out that RAVEN is light enough, and has a sufficiently powerful motor, that as long as it’s propped up at the right angle, it can take off from the ground without jumping at all. In other words, if you replaced the legs with a couple of popsicle sticks just to tilt the drone’s nose up, would that work just as well for the ground takeoffs?

The researchers tested this, and found that non-jumping takeoffs were crappy. The mix of high angle of attack and low takeoff speed led to very unstable flight—it worked, but barely. Jumping, on the other hand, ends up being about ten times more energy efficient overall than a standing takeoff. As the paper summarizes, “although jumping take-off requires slightly higher energy input, it is the most energy-efficient and fastest method to convert actuation energy to kinetic and potential energies for flight.” And just like birds, RAVEN can also take advantage of its legs to move on the ground in a much more energy efficient way relative to making repeated short flights.
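For a rough sense of scale, plugging the figures quoted above (a 620-gram drone jumping to roughly half a meter while reaching 2.2 meters per second) into the basic kinetic and potential energy formulas shows the jump hands the drone a few joules of useful energy before the wings and propeller take over; the paper’s full energy accounting is more detailed than this back-of-the-envelope check.

```python
# Back-of-the-envelope check using figures from the article: how much
# kinetic and potential energy does the jumping takeoff give the drone?
mass = 0.620        # kg, RAVEN's total mass
v_takeoff = 2.2     # m/s, forward velocity after the jump
height = 0.5        # m, approximate altitude gained by the jump
g = 9.81            # m/s^2

kinetic = 0.5 * mass * v_takeoff**2   # ~1.5 J
potential = mass * g * height         # ~3.0 J
print(f"KE ≈ {kinetic:.1f} J, PE ≈ {potential:.1f} J, "
      f"total ≈ {kinetic + potential:.1f} J")
```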

Won Dong Shin holds the RAVEN drone. EPFL

Can This Design Scale Up to Larger Fixed-Wing Drones?

Birds use their legs for all kinds of stuff besides walking and hopping and jumping, of course, and Won Dong Shin hopes that RAVEN may be able to do more with its legs, too. The obvious one is using legs for landing: “Birds use their legs to decelerate and reduce impact, and this same principle could be applied to RAVEN’s legs,” Shin says, although the drone would need a perception system that it doesn’t yet have to plan things out. There’s also swimming, perching, and snatching, all of which would require a new foot design.

We also asked Shin about what it would take to scale this design up, to perhaps carry a useful payload at some point. Shin points out that beyond a certain size, birds are no longer able to do jumping takeoffs, and either have to jump off something higher up or find themselves a runway. In fact, some birds will go to astonishing lengths not to have to do jumping takeoffs, as best human of all time David Attenborough explains:

BBC

Shin points out that it’s usually easier to scale engineered systems than biological ones, and he seems optimistic that legs for jumping takeoffs will be viable on larger fixed-wing drones that could be used for delivery. A vision system that could be used for both obstacle avoidance and landing is in the works, as are wings that can fold to allow the drone to pass through narrow gaps. Ultimately, Shin says that he wants to make the drone as bird-like as possible: “I am also keen to incorporate flapping wings into RAVEN. This enhancement would enable more bird-like motion and bring more interesting research questions to explore.”

“Fast ground-to-air transition with avian-inspired multifunctional legs,” by Won Dong Shin, Hoang-Vu Phan, Monica A. Daley, Auke J. Ijspeert, and Dario Floreano from EPFL in Switzerland and UC Irvine, appears in the December 4 issue of Nature.



Ruzena Bajcsy is one of the founders of the modern field of robotics. With an education in electrical engineering in Slovakia, followed by a Ph.D. at Stanford, Bajcsy was the first woman to join the engineering faculty at the University of Pennsylvania. She was the first, she says, because “in those days, nice girls didn’t mess around with screwdrivers.” Bajcsy, now 91, spoke with IEEE Spectrum at the 40th anniversary celebration of the IEEE International Conference on Robotics and Automation, in Rotterdam, Netherlands.

Ruzena Bajcsy

Ruzena Bajcsy’s 50-plus years in robotics spanned time at Stanford, the University of Pennsylvania, the National Science Foundation, and the University of California, Berkeley. Bajcsy retired in 2021.

What was the robotics field like at the time of the first ICRA conference in 1984?

Ruzena Bajcsy: There was a lot of enthusiasm at that time—it was like a dream; we felt like we could do something dramatic. But this is typical, and when you move into a new area and you start to build there, you find that the problem is harder than you thought.

What makes robotics hard?

Bajcsy: Robotics was perhaps the first subject which really required an interdisciplinary approach. In the beginning of the 20th century, there was physics and chemistry and mathematics and biology and psychology, all with brick walls between them. The physicists were much more focused on measurement, and understanding how things interacted with each other. During the war, there was a select group of men who didn’t think that mortal people could do this. They were so full of themselves. I don’t know if you saw the Oppenheimer movie, but I knew some of those men—my husband was one of those physicists!

And how are roboticists different?

Bajcsy: We are engineers. For physicists, it’s the matter of discovery, done. We, on the other hand, in order to understand things, we have to build them. It takes time and effort, and frequently we are inhibited—when I started, there were no digital cameras, so I had to build one. I built a few other things like that in my career, not as a discovery, but as a necessity.

How can robotics be helpful?

Bajcsy: As an elderly person, I use this cane. But when I’m with my children, I hold their arms and it helps tremendously. In order to keep your balance, you are taking all the vectors of your torso and your legs so that you are stable. You and I together can create a configuration of our legs and body so that the sum is stable.

One very simple useful device for an older person would be to have a cane with several joints that can adjust depending on the way I move, to compensate for my movement. People are making progress in this area, because many people are living longer than before. There are all kinds of other places where the technology derived from robotics can help like this.

What are you most proud of?

Bajcsy: At this stage of my life, people are asking, and I’m asking, what is my legacy? And I tell you, my legacy is my students. They worked hard, but they felt they were appreciated, and there was a sense of camaraderie and support for each other. I didn’t do it consciously, but I guess it came from my motherly instincts. And I’m still in contact with many of them—I worry about their children, the usual grandma!

This article appears in the December 2024 issue as “5 Questions for Ruzena Bajcsy.”
