Feed aggregator



In 1942, the legendary science fiction author Isaac Asimov introduced his Three Laws of Robotics in his short story “Runaround.” The laws were later popularized in his seminal story collection I, Robot.

  • First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While drawn from works of fiction, these laws have shaped discussions of robot ethics for decades. And as AI systems—which can be considered virtual robots—have become more sophisticated and pervasive, some technologists have found Asimov’s framework useful for considering the potential safeguards needed for AI that interacts with humans.

But the existing three laws are not enough. Today, we are entering an era of unprecedented human-AI collaboration that Asimov could hardly have envisioned. The rapid advancement of generative AI capabilities, particularly in language and image generation, has created challenges beyond Asimov’s original concerns about physical harm and obedience.

Deepfakes, Misinformation, and Scams

The proliferation of AI-enabled deception is particularly concerning. According to the FBI’s 2024 Internet Crime Report, cybercrime involving digital manipulation and social engineering resulted in losses exceeding US $10.3 billion. The European Union Agency for Cybersecurity’s 2023 Threat Landscape specifically highlighted deepfakes—synthetic media that appears genuine—as an emerging threat to digital identity and trust.

Social media misinformation is spreading like wildfire. I studied it extensively during the pandemic, and I can only say that the proliferation of generative AI tools has made it increasingly difficult to detect. To make matters worse, AI-generated articles are just as persuasive as traditional propaganda, or even more so, and using AI to create convincing content requires very little effort.

Deepfakes are on the rise throughout society. Botnets can use AI-generated text, speech, and video to create false perceptions of widespread support for any political issue. Bots are now capable of making and receiving phone calls while impersonating people. AI scam calls imitating familiar voices are increasingly common, and any day now, we can expect a boom in video call scams based on AI-rendered overlay avatars, allowing scammers to impersonate loved ones and target the most vulnerable populations. Anecdotally, my very own father was surprised when he saw a video of me speaking fluent Spanish, as he knew that I’m a proud beginner in this language (400 days strong on Duolingo!). Suffice it to say that the video was AI-edited.

Even more alarmingly, children and teenagers are forming emotional attachments to AI agents, and are sometimes unable to distinguish between interactions with real friends and bots online. Already, there have been suicides attributed to interactions with AI chatbots.

In his 2019 book Human Compatible, the eminent computer scientist Stuart Russell argues that AI systems’ ability to deceive humans represents a fundamental challenge to social trust. This concern is reflected in recent policy initiatives, most notably the European Union’s AI Act, which includes provisions requiring transparency in AI interactions and clear disclosure of AI-generated content. In Asimov’s time, people couldn’t have imagined how artificial agents could use online communication tools and avatars to deceive humans.

Therefore, we must make an addition to Asimov’s laws.

  • Fourth Law: A robot or AI must not deceive a human by impersonating a human being.

The Way Toward Trusted AI

We need clear boundaries. While human-AI collaboration can be constructive, AI deception undermines trust and leads to wasted time, emotional distress, and misuse of resources. Artificial agents must identify themselves to ensure our interactions with them are transparent and productive. AI-generated content should be clearly marked unless it has been significantly edited and adapted by a human.

Implementation of this Fourth Law would require:

  • Mandatory AI disclosure in direct interactions,
  • Clear labeling of AI-generated content,
  • Technical standards for AI identification,
  • Legal frameworks for enforcement,
  • Educational initiatives to improve AI literacy.

Of course, all this is easier said than done. Enormous research efforts are already underway to find reliable ways to watermark or detect AI-generated text, audio, images, and videos. Creating the transparency I’m calling for is far from a solved problem.
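
To make the “clear labeling” requirement above a bit more concrete, here is a minimal, hypothetical sketch of a machine-readable disclosure record that could travel with a piece of generated media. It is only an illustration of the idea; it is not a watermark, and the field names do not follow any existing provenance standard.

```python
import json
from datetime import datetime, timezone

# Hypothetical machine-readable disclosure label for AI-generated content.
# This sketches the "clear labeling" idea only; it is not a watermark and does
# not follow any particular provenance standard.
def make_ai_disclosure(generator: str, model: str, human_edited: bool) -> str:
    label = {
        "ai_generated": True,
        "generator": generator,      # organization or tool that produced the content
        "model": model,              # illustrative field names, not a real schema
        "human_edited": human_edited,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)

print(make_ai_disclosure(generator="ExampleLab", model="example-model-1", human_edited=False))
```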

But the future of human-AI collaboration depends on maintaining clear distinctions between human and artificial agents. As noted in the IEEE’s 2022 “Ethically Aligned Design” framework, transparency in AI systems is fundamental to building public trust and ensuring the responsible development of artificial intelligence.

Asimov’s complex stories showed that even robots that tried to follow the rules often discovered the unintended consequences of their actions. Still, having AI systems that are trying to follow Asimov’s ethical guidelines would be a very good start.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
RoboSoft 2025: 23–26 April 2025, LAUSANNE, SWITZERLAND
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

I’m not totally sure yet about the utility of having a small arm on a robot vacuum, but I love that this is a real thing. At least, it is at CES this year.

[ Roborock ]

We posted about SwitchBot’s new modular home robot system earlier this week, but here’s a new video showing some potentially useful hardware combinations.

[ SwitchBot ]

Yes, it’s in sim, but (and this is a relatively new thing) I will not be shocked to see this happen on Unitree’s hardware in the near future.

[ Unitree ]

With ongoing advancements in system engineering, LimX Dynamics’ full-size humanoid robot features a hollow actuator design and high torque-density actuators, enabling full-body balance for a wide range of motion. Now it achieves complex full-body movements in an ultra-stable and dynamic manner.

[ LimX Dynamics ]

We’ve seen hybrid quadrotor bipeds before, but this one, which imitates the hopping behavior of jacana birds, is pretty cute.

What’s a jacana, you ask? It’s these things, which surely must have the most extreme foot-to-body ratio of any bird:

Also, much respect to the researchers for confidently titling this supplementary video “An Extremely Elegant Jump.”

[ SSRN Paper preprint ]

Twelve minutes flat from suitcase to mobile manipulator. Not bad!

[ Pollen Robotics ]

Happy New Year from Dusty Robotics!

[ Dusty Robotics ]



Back in the day, the defining characteristic of home-cleaning robots was that they’d randomly bounce around your floor as part of their cleaning process, because the technology required to localize and map an area hadn’t yet trickled down into the consumer space. That all changed in 2010, when home robots started using lidar (and other things) to track their location and optimize how they cleaned.

Consumer pool-cleaning robots are lagging about 15 years behind indoor robots on this, for a couple of reasons. First, most pool robots—different from automatic pool cleaners, which are purely mechanical systems that are driven by water pressure—have been tethered to an outlet for power, meaning that maximizing efficiency is less of a concern. And second, 3D underwater localization is a much different (and arguably more difficult) problem to solve than 2D indoor localization was. But pool robots are catching up, and at CES this week, Wybot introduced an untethered robot that uses ultrasound to generate a 3D map for fast, efficient pool cleaning. And it’s solar powered and self-emptying, too.

Underwater localization and navigation is not an easy problem for any robot. Private pools are certainly privileged to be operating environments with a reasonable amount of structure and predictability, at least if everything is working the way it should. But the lighting is always going to be a challenge, between bright sunlight, deep shadow, wave reflections, and occasionally murky water if the pool chemicals aren’t balanced very well. That makes relying on any light-based localization system iffy at best, and so Wybot has gone old school, with ultrasound.

Wybot Brings Ultrasound Back to Bots

Ultrasound used to be a very common way for mobile robots to navigate. You may (or may not) remember venerable robots like the Pioneer 3, with those big ultrasonic sensors across its front. As cameras and lidar got cheap and reliable, the messiness of ultrasonic sensors fell out of favor, but sound is still ideal for underwater applications where anything that relies on light may struggle.


The Wybot S3 uses 12 ultrasonic sensors, plus motor encoders and an inertial measurement unit to map residential pools in three dimensions. “We had to choose the ultrasonic sensors very carefully,” explains Felix (Huo) Feng, the CTO of Wybot. “Actually, we use multiple different sensors, and we compute time of flight [of the sonar pulses] to calculate distance.” The positional accuracy of the resulting map is about 10 centimeters, which is totally fine for the robot to get its job done, although Feng says that they’re actively working to improve the map’s resolution. For path planning purposes, the 3D map gets deconstructed into a series of 2D maps, since the robot needs to clean the bottom of the pool, stairs and ledges, and also the sides of the pool.
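
To make the time-of-flight calculation concrete, here is a minimal sketch in Python. The speed-of-sound constant and the function are illustrative assumptions, not details of Wybot’s firmware or sensor fusion.

```python
# Minimal illustration of the time-of-flight idea Feng describes. The constant
# and function below are assumptions for illustration, not Wybot's code.

SPEED_OF_SOUND_IN_WATER_M_PER_S = 1480.0  # approximate; varies with temperature and salinity

def distance_from_echo(round_trip_time_s: float) -> float:
    """Convert a sonar pulse's round-trip time into a one-way distance in meters."""
    return SPEED_OF_SOUND_IN_WATER_M_PER_S * round_trip_time_s / 2.0

# An echo that returns after 4 milliseconds implies a surface roughly 3 meters away.
print(f"{distance_from_echo(0.004):.2f} m")  # ~2.96 m
```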

Efficiency is particularly important for the S3 because its charging dock has enough solar panels on the top of it to provide about 90 minutes of runtime for the robot over the course of an optimally sunny day. If your pool isn’t too big, that means the robot can clean it daily without requiring a power connection to the dock. The dock also sucks debris out of the collection bin on the robot itself, and Wybot suggests that the S3 can go for up to a month of cleaning without the dock overflowing.

The S3 has a camera on the front, which is used primarily to identify and prioritize dirtier areas (through AI, of course) that need focused cleaning. At some point in the future, Wybot may be able to use vision for navigation too, but my guess is that for reliable 24/7 navigation, ultrasound will still be necessary.

One other interesting little tidbit is the communication system. The dock can talk to your Wi-Fi, of course, and then talk to the robot while it’s charging. Once the robot goes off for a swim, however, traditional wireless signals won’t work, but the dock has its own sonar that can talk to the robot at several bytes per second. This isn’t going to get you streaming video from the robot’s camera, but it’s enough to let you steer the robot if you want, or ask it to come back to the dock, get battery status updates, and similar sorts of things.
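
To see why a channel of just a few bytes per second is still useful, here is a hypothetical sketch of a compact status frame; the field layout and names are my own invention, not Wybot’s actual acoustic protocol.

```python
import struct

# Hypothetical 4-byte status frame, purely to illustrate why a link of only a
# few bytes per second is still useful. This is not Wybot's actual protocol;
# the field layout and names are invented for this sketch.
def pack_status(msg_type: int, battery_pct: int, heading_deg: int, flags: int) -> bytes:
    # One byte each: message type, battery percentage, heading/2 (0-179), flag bits.
    return struct.pack("BBBB", msg_type, battery_pct, heading_deg // 2, flags)

frame = pack_status(msg_type=1, battery_pct=87, heading_deg=270, flags=0b0001)
print(frame.hex(), len(frame), "bytes")  # a full status update fits in 4 bytes
```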

The Wybot S3 will go on sale in Q2 of this year for a staggering US $2,999, but that’s how it always works: The first time a new technology shows up in the consumer space, it’s inevitably at a premium. Give it time, though, and my guess is that the ability to navigate and self-empty will become standard features in pool robots. But as far as I know, Wybot got there first.




Autonomous systems, particularly fleets of drones and other unmanned vehicles, face increasing risks as their complexity grows. Despite advancements, existing testing frameworks fall short in addressing end-to-end security, resilience, and safety in zero-trust environments. The Secure Systems Research Center (SSRC) at TII has developed a rigorous, holistic testing framework to systematically evaluate the performance and security of these systems at each stage of development. This approach ensures secure, resilient, and safe operations for autonomous systems, from individual components to fleet-wide interactions.



Earlier this year, we reviewed the SwitchBot S10, a vacuuming and wet mopping robot that uses a water-integrated docking system to autonomously manage both clean and dirty water for you. It’s a pretty clever solution, and we appreciated that SwitchBot was willing to try something a little different.

At CES this week, SwitchBot introduced the K20+ Pro, a little autonomous vacuum that can integrate with a bunch of different accessories by pulling them around on a backpack cart of sorts. The K20+ Pro is SwitchBot’s latest effort to explore what’s possible with mobile home robots.

SwitchBot’s small vacuum can transport different payloads on top. SwitchBot

What we’re looking at here is a “mini” robotic vacuum (it’s about 25 centimeters in diameter) that does everything a robotic vacuum does nowadays: It uses lidar to make a map of your house so that you can direct it where to go, it’s got a dock to empty itself and recharge, and so on. The mini robotic vacuum is attached to a wheeled platform that SwitchBot is calling a “FusionPlatform” that sits on top of the robot like a hat. The vacuum docks to this platform, and then the platform will go wherever the robot goes. This entire system (robot, dock, and platform) is the “K20+ Pro multitasking household robot.”

SwitchBot refers to the K20+ Pro as a “smart delivery assistant,” because you can put stuff on the FusionPlatform and the K20+ Pro will move that stuff around your house for you. This really doesn’t do it justice, though, because the platform is much more than just a passive mobile cart. It also can provide power to a bunch of different accessories, all of which benefit from autonomous mobility:

The SwitchBot can carry a variety of payloads, including custom payloads. SwitchBot

From left to right, you’re looking at an air circulation fan, a tablet stand, a vacuum and charging dock and an air purifier and security camera (and a stick vacuum for some reason), and lastly just the air purifier and security setup. You can also add and remove different bits, like if you want the fan along with the security camera, just plop the security camera down on the platform base in front of the fan and you’re good to go.

This basic concept is somewhat similar to Amazon’s Proteus robot, in the sense that you can have one smart powered base that moves around a bunch of less smart and unpowered payloads by driving underneath them and then carrying them around. But SwitchBot’s payloads aren’t just passive cargo, and the base can provide them with a useful amount of power.

A power port allows you to develop your own payloads for the robot. SwitchBot

SwitchBot is actively encouraging users “to create, adapt, and personalize the robot for a wide variety of innovative applications,” which may include “3D-printed components [or] third-party devices with multiple power ports for speakers, car fridges, or even UV sterilization lamps,” according to the press release. The maximum payload is only 8 kilograms, though, so don’t get too crazy.

Several SwitchBots can make bath time much more enjoyable. SwitchBot

What we all want to know is when someone will put an arm on this thing, and SwitchBot is of course already working on this:

SwitchBot’s mobile manipulator is still in the lab stage. SwitchBot

The arm is still “in the lab stage,” SwitchBot says, which I’m guessing means that the hardware is functional but that getting it to reliably do useful stuff with the arm is still a work in progress. But that’s okay—getting an arm to reliably do useful stuff is a work in progress for all of robotics, pretty much. And if SwitchBot can manage to produce an affordable mobile manipulation platform for consumers that even sort of works, that’ll be very impressive.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RoboCup German Open: 12–16 March 2025, NUREMBERG, GERMANY
German Robotics Conference: 13–15 March 2025, NUREMBERG, GERMANY
ICUAS 2025: 14–17 May 2025, CHARLOTTE, NC
ICRA 2025: 19–23 May 2025, ATLANTA, GA
IEEE RCAR 2025: 1–6 June 2025, TOYAMA, JAPAN
RSS 2025: 21–25 June 2025, LOS ANGELES
IAS 2025: 30 June–4 July 2025, GENOA, ITALY
ICRES 2025: 3–4 July 2025, PORTO, PORTUGAL
IEEE World Haptics: 8–11 July 2025, SUWON, KOREA
IFAC Symposium on Robotics: 15–18 July 2025, PARIS
RoboCup 2025: 15–21 July 2025, BAHIA, BRAZIL

Enjoy today’s videos!

It’s me. But we can all relate to this child android robot struggling to stay awake.

[ Osaka University ]

For 2025, the RoboCup SPL plans an interesting new technical challenge: Kicking a rolling ball! The velocity and start position of the ball can vary and the goal is to kick the ball straight and far. In this video, we show our results from our first testing session.

[ Team B-Human ]

When you think of a prosthetic hand you probably think of something similar to Luke Skywalker’s robotic hand from Star Wars, or even Furiosa’s multi-fingered claw from Mad Max. The reality is a far cry from these fictional hands: upper limb prostheses are generally very limited in what they can do, and how we can control them to do it. In this project, we investigate non-humanoid prosthetic hand design, exploring a new ideology for the design of upper limb prostheses that encourages alternative approaches to prosthetic hands. In this wider, more open design space, can we surpass humanoid prosthetic hands?

[ Imperial College London ]

Thanks, Digby!

A novel three-dimensional (3D) Minimally Actuated Serial Robot (MASR), actuated by a robotic motor. The robotic motor is composed of a mobility motor (to advance along the links) and an actuation motor [to] move the joints.

[ Zarrouk Lab ]

This year, the Franka Robotics team hit the road, the skies, and the digital space to share ideas, showcase our cutting-edge technology, and connect with the brightest minds in robotics across the globe. Here is the 2024 video recap, capturing the events and collaborations that made this year unforgettable!

[ Franka Robotics ]

Aldebaran has sold an astonishing number of robots this year.

[ Aldebaran ]

The advancement of modern robotics starts at its foundation: the gearboxes. Ailos aims to define how these industries operate with increased precision, efficiency and versatility. By innovating gearbox technology across diverse fields, Ailos is catalyzing the transition towards the next wave of automation, productivity and agility.

[ Ailos Robotics ]

Many existing obstacle avoidance algorithms overlook the crucial balance between safety and agility, especially in environments of varying complexity. In our study, we introduce an obstacle avoidance pipeline based on reinforcement learning. This pipeline enables drones to adapt their flying speed according to the environmental complexity. After minimal fine-tuning, we successfully deployed our network on a real drone for enhanced obstacle avoidance.

[ MAVRL via GitHub ]

Robot-assisted feeding promises to empower people with motor impairments to feed themselves. However, research often focuses on specific system subcomponents and thus evaluates them in controlled settings. This leaves a gap in developing and evaluating an end-to-end system that feeds users entire meals in out-of-lab settings. We present such a system, collaboratively developed with community researchers.

[ Personal Robotics Lab ]

A drone’s eye-view reminder that fireworks explode in 3D.

[ Team BlackSheep ]



The future of human habitation in the sea is taking shape in an abandoned quarry on the border of Wales and England. There, the ocean-exploration organization Deep has embarked on a multiyear quest to enable scientists to live on the seafloor at depths up to 200 meters for weeks, months, and possibly even years.

“Aquarius Reef Base in St. Croix was the last installed habitat back in 1987, and there hasn’t been much ground broken in about 40 years,” says Kirk Krack, human diver performance lead at Deep. “We’re trying to bring ocean science and engineering into the 21st century.”

This article is part of our special report Top Tech 2025.

Deep’s agenda has a major milestone this year—the development and testing of a small, modular habitat called Vanguard. This transportable, pressurized underwater shelter, capable of housing up to three divers for periods ranging up to a week or so, will be a stepping stone to a more permanent modular habitat system—known as Sentinel—that is set to launch in 2027. “By 2030, we hope to see a permanent human presence in the ocean,” says Krack. All of this is now possible thanks to an advanced 3D printing-welding approach that can print these large habitation structures.

How would such a presence benefit marine science? Krack runs the numbers for me: “With current diving at 150 to 200 meters, you can only get 10 minutes of work completed, followed by 6 hours of decompression. With our underwater habitats we’ll be able to do seven years’ worth of work in 30 days with shorter decompression time. More than 90 percent of the ocean’s biodiversity lives within 200 meters’ depth and at the shorelines, and we only know about 20 percent of it.” Understanding these undersea ecosystems and environments is a crucial piece of the climate puzzle, he adds: The oceans absorb nearly a quarter of human-caused carbon dioxide and roughly 90 percent of the excess heat generated by human activity.

Underwater Living Gets the Green Light This Year

Deep is looking to build an underwater life-support infrastructure that features not just modular habitats but also training programs for the scientists who will use them. Long-term habitation underwater involves a specialized type of activity called saturation diving, so named because the diver’s tissues become saturated with gases, such as nitrogen or helium. It has been used for decades in the offshore oil and gas sectors but is uncommon in scientific diving, outside of the relatively small number of researchers fortunate enough to have spent time in Aquarius. Deep wants to make it a standard practice for undersea researchers.

The first rung in that ladder is Vanguard, a rapidly deployable, expedition-style underwater habitat the size of a shipping container that can be transported and supplied by a ship and house three people down to depths of about 100 meters. It is set to be tested in a quarry outside of Chepstow, Wales, in the first quarter of 2025.

The Vanguard habitat, seen here in an illustrator’s rendering, will be small enough to be transportable and yet capable of supporting three people at a maximum depth of 100 meters. Deep

The plan is to be able to deploy Vanguard wherever it’s needed for a week or so. Divers will be able to work for hours on the seabed before retiring to the module for meals and rest.

One of the novel features of Vanguard is its extraordinary flexibility when it comes to power. There are currently three options: When deployed close to shore, it can connect by cable to an onshore distribution center using local renewables. Farther out at sea, it could use supply from floating renewable-energy farms and fuel cells that would feed Vanguard via an umbilical link, or it could be supplied by an underwater energy-storage system that contains multiple batteries that can be charged, retrieved, and redeployed via subsea cables.

The breathing gases will be housed in external tanks on the seabed and contain a mix of oxygen and helium that will depend on the depth. In the event of an emergency, saturated divers won’t be able to swim to the surface without suffering a life-threatening case of decompression illness. So, Vanguard, as well as the future Sentinel, will also have backup power sufficient to provide 96 hours of life support, in an external, adjacent pod on the seafloor.

Data gathered from Vanguard this year will help pave the way for Sentinel, which will be made up of pods of different sizes and capabilities. These pods will even be capable of being set to different internal pressures, so that different sections can perform different functions. For example, the labs could be at the local bathymetric pressure for analyzing samples in their natural environment, but alongside those a 1-atmosphere chamber could be set up where submersibles could dock and visitors could observe the habitat without needing to equalize with the local pressure.

As Deep sees it, a typical configuration would house six people—each with their own bedroom and bathroom. It would also have a suite of scientific equipment including full wet labs to perform genetic analyses, saving days by not having to transport samples to a topside lab for analysis.

“By 2030, we hope to see a permanent human presence in the ocean,” says one of the project’s principals.

A Sentinel configuration is designed to go for a month before needing a resupply. Gases will be topped off via an umbilical link from a surface buoy, and food, water, and other supplies would be brought down during planned crew changes every 28 days.

But people will be able to live in Sentinel for months, if not years. “Once you’re saturated, it doesn’t matter if you’re there for six days or six years, but most people will be there for 28 days due to crew changes,” says Krack.

Where 3D Printing and Welding Meet

It’s a very ambitious vision, and Deep has concluded that it can be achieved only with advanced manufacturing techniques. Deep’s manufacturing arm, Deep Manufacturing Labs (DML), has come up with an innovative approach for building the pressure hulls of the habitat modules. It’s using robots to combine metal additive manufacturing with welding in a process known as wire-arc additive manufacturing. With these robots, metal layers are built up as they would be in 3D printing, but the layers are fused together via welding using a metal-inert-gas torch.

At Deep’s base of operations at a former quarry in Tidenham, England, resources include two Triton 3300/3 MK II submarines. One of them is seen here at Deep’s floating “island” dock in the quarry. Deep

During a tour of the DML, Harry Thompson, advanced manufacturing engineering lead, says, “We sit in a gray area between welding and additive process, so we’re following welding rules, but for pressure vessels we [also] follow a stress-relieving process that is applicable for an additive component. We’re also testing all the parts with nondestructive testing.”

Each of the robot arms has an operating range of 2.8 by 3.2 meters, but DML has boosted this area by means of a concept it calls Hexbot. It’s based on six robotic arms programmed to work in unison to create habitat hulls with a diameter of up to 6.1 meters. The biggest challenge with creating the hulls is managing the heat during the additive process to keep the parts from deforming as they are created. For this, DML is relying on the use of heat-tolerant steels and on very precisely optimized process parameters.

Engineering Challenges for Long-Term Habitation

Besides manufacturing, there are other challenges that are unique to the tricky business of keeping people happy and alive 200 meters underwater. One of the most fascinating of these revolves around helium. Because of its narcotic effect at high pressure, nitrogen shouldn’t be breathed by humans at depths below about 60 meters. So, at 200 meters, the breathing mix in the habitat will be 2 percent oxygen and 98 percent helium. But because of helium’s very high thermal conductivity, “we need to heat helium to 31–32 °C to get a normal 21–22 °C internal temperature environment,” says Rick Goddard, director of engineering at Deep. “This creates a humid atmosphere, so porous materials become a breeding ground for mold.”
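
A rough back-of-the-envelope calculation (my own, not Deep’s) shows why the oxygen fraction shrinks to about 2 percent at that depth: the goal is to hold the partial pressure of oxygen near a safe, surface-like level rather than the several bar that ordinary air would deliver.

```python
# Back-of-the-envelope check (my own numbers, not Deep's) of why the oxygen
# fraction drops to about 2 percent at 200 meters: the aim is to keep the
# partial pressure of oxygen (ppO2) near a safe, surface-like level rather
# than the several bar that ordinary 21 percent air would deliver at depth.

def ambient_pressure_bar(depth_m: float) -> float:
    """Approximate absolute pressure: 1 bar at the surface plus ~1 bar per 10 m of seawater."""
    return 1.0 + depth_m / 10.0

def ppo2_bar(oxygen_fraction: float, depth_m: float) -> float:
    """Partial pressure of oxygen for a given gas fraction and depth."""
    return oxygen_fraction * ambient_pressure_bar(depth_m)

print(f"{ppo2_bar(0.02, 200):.2f} bar")  # ~0.42 bar with the 2 percent heliox mix
print(f"{ppo2_bar(0.21, 200):.2f} bar")  # ~4.41 bar if ordinary air were used
```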

There are a host of other materials-related challenges, too. The materials can’t emit gases, and they must be acoustically insulating, lightweight, and structurally sound at high pressures.

Deep’s proving grounds are a former quarry in Tidenham, England, that has a maximum depth of 80 meters. Deep

There are also many electrical challenges. “Helium breaks certain electrical components with a high degree of certainty,” says Goddard. “We’ve had to pull devices to pieces, change chips, change [printed circuit boards], and even design our own PCBs that don’t off-gas.”

The electrical system will also have to accommodate an energy mix with such varied sources as floating solar farms and fuel cells on a surface buoy. Energy-storage devices present major electrical engineering challenges: Helium seeps into capacitors and can destroy them when it tries to escape during decompression. Batteries, too, develop problems at high pressure, so they will have to be housed outside the habitat in 1-atmosphere pressure vessels or in oil-filled blocks that prevent a differential pressure inside.

Is It Possible to Live in the Ocean for Months or Years?

When you’re trying to be the SpaceX of the ocean, questions are naturally going to fly about the feasibility of such an ambition. How likely is it that Deep can follow through? At least one top authority, John Clarke, is a believer. “I’ve been astounded by the quality of the engineering methods and expertise applied to the problems at hand and I am enthusiastic about how DEEP is applying new technology,” says Clarke, who was lead scientist of the U.S. Navy Experimental Diving Unit. “They are advancing well beyond expectations…. I gladly endorse Deep in their quest to expand humankind’s embrace of the sea.”



The future of human habitation in the sea is taking shape in an abandoned quarry on the border of Wales and England. There, the ocean-exploration organization Deep has embarked on a multiyear quest to enable scientists to live on the seafloor at depths up to 200 meters for weeks, months, and possibly even years.

“Aquarius Reef Base in St. Croix was the last installed habitat back in 1987, and there hasn’t been much ground broken in about 40 years,” says Kirk Krack, human diver performance lead at Deep. “We’re trying to bring ocean science and engineering into the 21st century.”

This article is part of our special report Top Tech 2025.

Deep’s agenda has a major milestone this year—the development and testing of a small, modular habitat called Vanguard. This transportable, pressurized underwater shelter, capable of housing up to three divers for periods ranging up to a week or so, will be a stepping stone to a more permanent modular habitat system—known as Sentinel—that is set to launch in 2027. “By 2030, we hope to see a permanent human presence in the ocean,” says Krack. All of this is now possible thanks to an advanced 3D printing-welding approach that can print these large habitation structures.

How would such a presence benefit marine science? Krack runs the numbers for me: “With current diving at 150 to 200 meters, you can only get 10 minutes of work completed, followed by 6 hours of decompression. With our underwater habitats we’ll be able to do seven years’ worth of work in 30 days with shorter decompression time. More than 90 percent of the ocean’s biodiversity lives within 200 meters’ depth and at the shorelines, and we only know about 20 percent of it.” Understanding these undersea ecosystems and environments is a crucial piece of the climate puzzle, he adds: The oceans absorb nearly a quarter of human-caused carbon dioxide and roughly 90 percent of the excess heat generated by human activity.

Underwater Living Gets the Green Light This Year

Deep is looking to build an underwater life-support infrastructure that features not just modular habitats but also training programs for the scientists who will use them. Long-term habitation underwater involves a specialized type of activity called saturation diving, so named because the diver’s tissues become saturated with gases, such as nitrogen or helium. It has been used for decades in the offshore oil and gas sectors but is uncommon in scientific diving, outside of the relatively small number of researchers fortunate enough to have spent time in Aquarius. Deep wants to make it a standard practice for undersea researchers.

The first rung in that ladder is Vanguard, a rapidly deployable, expedition-style underwater habitat the size of a shipping container that can be transported and supplied by a ship and house three people down to depths of about 100 meters. It is set to be tested in a quarry outside of Chepstow, Wales, in the first quarter of 2025.

The Vanguard habitat, seen here in an illustrator’s rendering, will be small enough to be transportable and yet capable of supporting three people at a maximum depth of 100 meters.Deep

The plan is to be able to deploy Vanguard wherever it’s needed for a week or so. Divers will be able to work for hours on the seabed before retiring to the module for meals and rest.

One of the novel features of Vanguard is its extraordinary flexibility when it comes to power. There are currently three options: When deployed close to shore, it can connect by cable to an onshore distribution center using local renewables. Farther out at sea, it could use supply from floating renewable-energy farms and fuel cells that would feed Vanguard via an umbilical link, or it could be supplied by an underwater energy-storage system that contains multiple batteries that can be charged, retrieved, and redeployed via subsea cables.

The breathing gases will be housed in external tanks on the seabed and contain a mix of oxygen and helium that will depend on the depth. In the event of an emergency, saturated divers won’t be able to swim to the surface without suffering a life-threatening case of decompression illness. So, Vanguard, as well as the future Sentinel, will also have backup power sufficient to provide 96 hours of life support, in an external, adjacent pod on the seafloor.

Data gathered from Vanguard this year will help pave the way for Sentinel, which will be made up of pods of different sizes and capabilities. These pods will even be capable of being set to different internal pressures, so that different sections can perform different functions. For example, the labs could be at the local bathymetric pressure for analyzing samples in their natural environment, but alongside those a 1-atmosphere chamber could be set up where submersibles could dock and visitors could observe the habitat without needing to equalize with the local pressure.

As Deep sees it, a typical configuration would house six people—each with their own bedroom and bathroom. It would also have a suite of scientific equipment including full wet labs to perform genetic analyses, saving days by not having to transport samples to a topside lab for analysis.

“By 2030, we hope to see a permanent human presence in the ocean,” says one of the project’s principals

A Sentinel configuration is designed to go for a month before needing a resupply. Gases will be topped off via an umbilical link from a surface buoy, and food, water, and other supplies would be brought down during planned crew changes every 28 days.

But people will be able to live in Sentinel for months, if not years. “Once you’re saturated, it doesn’t matter if you’re there for six days or six years, but most people will be there for 28 days due to crew changes,” says Krack.

Where 3D Printing and Welding Meet

It’s a very ambitious vision, and Deep has concluded that it can be achieved only with advanced manufacturing techniques. Deep’s manufacturing arm, Deep Manufacturing Labs (DML), has come up with an innovative approach for building the pressure hulls of the habitat modules. It’s using robots to combine metal additive manufacturing with welding in a process known as wire-arc additive manufacturing. With these robots, metal layers are built up as they would be in 3D printing, but the layers are fused together via welding using a metal-inert-gas torch.

At Deep’s base of operations at a former quarry in Tidenham, England, resources include two Triton 3300/3 MK II submarines. One of them is seen here at Deep’s floating “island” dock in the quarry. Deep

During a tour of the DML, Harry Thompson, advanced manufacturing engineering lead, says, “We sit in a gray area between welding and additive process, so we’re following welding rules, but for pressure vessels we [also] follow a stress-relieving process that is applicable for an additive component. We’re also testing all the parts with nondestructive testing.”

Each of the robot arms has an operating envelope of 2.8 by 3.2 meters, but DML has expanded this by means of a concept it calls Hexbot: six robotic arms programmed to work in unison to create habitat hulls with a diameter of up to 6.1 meters. The biggest challenge in creating the hulls is managing heat during the additive process to keep the parts from deforming as they are built up. For this, DML is relying on heat-tolerant steels and on very precisely optimized process parameters.

Engineering Challenges for Long-Term Habitation

Besides manufacturing, there are other challenges that are unique to the tricky business of keeping people happy and alive 200 meters underwater. One of the most fascinating of these revolves around helium. Because of its narcotic effect at high pressure, nitrogen shouldn’t be breathed by humans at depths below about 60 meters. So, at 200 meters, the breathing mix in the habitat will be 2 percent oxygen and 98 percent helium. But because of helium’s very high thermal conductivity, “we need to heat helium to 31–32 °C to get a normal 21–22 °C internal temperature environment,” says Rick Goddard, director of engineering at Deep. “This creates a humid atmosphere, so porous materials become a breeding ground for mold.”
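
To see why the oxygen fraction is so low, it helps to run the numbers. The short Python sketch below is a rough back-of-the-envelope check, not Deep’s own gas-planning figures; it assumes the common rule of thumb that seawater adds roughly one atmosphere of pressure for every 10 meters of depth.

    # Rough sanity check on the heliox mix quoted above (my own approximation,
    # not Deep's figures). Rule of thumb: seawater adds about 1 atm per 10 m.

    def ambient_pressure_atm(depth_m: float) -> float:
        """Approximate absolute pressure at depth: 1 atm of air plus the water column."""
        return 1.0 + depth_m / 10.0

    def oxygen_partial_pressure_atm(depth_m: float, o2_fraction: float) -> float:
        """Partial pressure of oxygen in the breathing mix at a given depth."""
        return ambient_pressure_atm(depth_m) * o2_fraction

    if __name__ == "__main__":
        depth = 200.0        # habitat depth, in meters
        o2_fraction = 0.02   # 2 percent oxygen, 98 percent helium
        print(f"Ambient pressure at {depth:.0f} m: about {ambient_pressure_atm(depth):.0f} atm")
        print(f"Oxygen partial pressure: about {oxygen_partial_pressure_atm(depth, o2_fraction):.2f} atm")
        # Prints roughly 21 atm ambient and an oxygen partial pressure of about
        # 0.42 atm -- around twice the 0.21 atm in surface air.

Run it and you get an ambient pressure of about 21 atmospheres, so a 2 percent oxygen fraction still yields an oxygen partial pressure of roughly 0.42 atmosphere, about double what we breathe at sea level. The deeper the habitat, the smaller the oxygen fraction needed to keep that partial pressure in a safe range, which is why helium makes up nearly all of the mix.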

There are a host of other materials-related challenges, too. The materials can’t emit gases, and they must be acoustically insulating, lightweight, and structurally sound at high pressures.

Deep’s proving grounds are a former quarry in Tidenham, England, that has a maximum depth of 80 meters. Deep

There are also many electrical challenges. “Helium breaks certain electrical components with a high degree of certainty,” says Goddard. “We’ve had to pull devices to pieces, change chips, change [printed circuit boards], and even design our own PCBs that don’t off-gas.”

The electrical system will also have to accommodate an energy mix with such varied sources as floating solar farms and fuel cells on a surface buoy. Energy-storage devices present major electrical engineering challenges: Helium seeps into capacitors and can destroy them when it tries to escape during decompression. Batteries, too, develop problems at high pressure, so they will have to be housed outside the habitat in 1-atmosphere pressure vessels or in oil-filled blocks that prevent a differential pressure inside.

Is It Possible to Live in the Ocean for Months or Years?

When you’re trying to be the SpaceX of the ocean, questions are naturally going to fly about the feasibility of such an ambition. How likely is it that Deep can follow through? At least one top authority, John Clarke, is a believer. “I’ve been astounded by the quality of the engineering methods and expertise applied to the problems at hand and I am enthusiastic about how DEEP is applying new technology,” says Clarke, who was lead scientist of the U.S. Navy Experimental Diving Unit. “They are advancing well beyond expectations…. I gladly endorse Deep in their quest to expand humankind’s embrace of the sea.”



2024 was the best year ever for robotics, which I’m pretty sure is not something that I’ve ever said before. But that’s the great thing about robotics—it’s always new, and it’s always exciting. What may be different about this year is the real sense that not only is AI going to change everything about robots, but that it will somehow make robots useful and practical and commercially viable. Is that true? Nobody knows yet! But it means that 2025 might actually be the best year ever for robotics, if you’ve ever wanted a robot to help you out at home or at work.

So as we look forward to 2025, here are some of our most interesting and impactful stories of the past year. And as always, thanks for reading!

1. Figure Raises $675M for Its Humanoid Robot Development

Figure

This announcement from back in February is pretty much what set the tone for robotics in 2024. Figure’s Series B raise valued the company at a bonkers US $2.6 billion, and all of a sudden, humanoids were where it’s at. And by “it,” I mean everything, from funding to talent to breathless media coverage. The big question of 2024 was whether or not humanoids would be able to deliver on their promises, and that’s shaping up to be the big question of 2025, too.

2. Hello, Electric Atlas

Boston Dynamics

It didn’t take long for legendary robotics company Boston Dynamics to make it clear that they’re not going to be left behind when it comes to commercial humanoids. The company has been leading humanoid research longer than just about anyone, but it’s bounced around from owner to owner over the last 10 years, and we were a little unsure whether Atlas would ever be more than a research platform. But the new all-electric Atlas is designed for work, and we saw it get busy in 2024.

3. Farewell, Hydraulic Atlas

Boston Dynamics

As much as we’re excited for the new Atlas, the old hydraulic Atlas will always have a special place in our hearts. We’ve been through so much together, from the DRC to parkour to dancing. Electric robots are great and all, and I understand why they’re necessary for commercial applications, but all of that hydraulic power meant that hydraulic Atlas was able to move in dynamic ways that we may not see again for a very long time.

4. Nvidia Announces GR00T

Nvidia

So we’ve got all these humanoid robots now with all this impressive hardware, but the really hard part (or one of them, anyway) is getting those robots to actually do something commercially useful in a safe and reliable way. Is training in simulation the answer? I don’t know, but Nvidia sure thinks so, and they’ve made a huge commitment by investing in GR00T, a “general-purpose foundation model for humanoid robots.” And what does that mean, exactly? Nobody’s quite sure yet, but with Nvidia behind it, that’s enough to make the entire industry pay attention.

5. Is It Autonomous?

Evan Ackerman

With all the attention on humanoid robots right now, it’s critical to be able to separate real progress from hype. Unfortunately, there are all kinds of ways of cheating with robots. And there’s really nothing wrong with cheating with robots, as long as you tell people that the cheating is happening, and then (hopefully) cheat less and less as your robot gets better and better. In particular, we’re likely to see more and more teleoperation of humanoid robots (obviously or otherwise) because that’s one of the best ways of collecting training data: by having a human do it. And being able to tell that a human is doing it is an important skill to have.

6. Robotic Metalsmiths

Machina Labs

Some of my favorite robots are robots that are able to leverage their robotic-ness to not just do things that humans do, but also do things that humans cannot do. Robots have the patience and precision to work metal in ways that a very highly skilled human might be able to do once, but the robots (being robots) can do it over and over again. NASA is leveraging this capability to build complex toroidal tanks for spacecraft, but it has the potential to change anything that’s made out of sheet metal.

7. The End of Ingenuity

JPL-Caltech/ASU/NASA

One of the greatest robotics stories of the last several years has been Ingenuity, the little Mars helicopter. We’ve written extensively about how Ingenuity was designed, how it can fly on Mars, and how it just kept on flying, more than 50 times. But it couldn’t fly forever, and as Ingenuity was pushed to fly farther and farther over more challenging terrain, flight 72 was to be its last. After losing its ability to localize over some particularly featureless terrain, the little robot had a very rough landing. It lived to tell the tale, but not to fly again.


Ingenuity’s spectacularly successful mission means, we hope, that there will be more robotic aircraft on Mars. And just last week, NASA shared a new video of Ingenuity’s successor, the Mars Chopper. That’s definitely something we’ll be looking forward to.


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

One year after mass production kicked off, Unitree’s B2-W Industrial Wheel has been upgraded with more exciting capabilities. Please always use robots safely and in a friendly way.

Yours for US $100,000.

[ Unitree ]

Yes, I know we’re sharing some of these holiday videos a little late, but I deserve a little bit of time off, don’t I? No, you’re right, and I feel shame. But please enjoy these extra holiday videos anyway, starting with Santa’s Little Helper from ETHZ RSL.

Okay but seriously where do I get one of those little plush Anymals!

[ RSL ]

Merry Christmas from the Kepler humanoid robot!

[ Kepler Robotics ]

This year, Rizon has joined the festive fun by decorating the Flexiv office with holiday cheer!

[ Flexiv ]

The Eva exoskeleton, developed by IHMC, takes its first steps out of the lab and through a number of new modes in October 2024. For more than a decade, IHMC has designed and developed wearable robotic exoskeletons to rehabilitate those with spinal cord injuries. With lessons learned from these past developments, our focus has shifted to augmenting the performance of able-bodied workers in hazardous environments. We are working to advance these technologies to the real world with the hope of making real differences in people’s lives.

[ IHMC ]

Thanks, Robert!

TIAGo Pro: a revolutionary robot with series-elastic-actuator arms and enhanced non-verbal communication. This enhances its manipulation capabilities and enables state-of-the-art human-robot interaction. It is designed for agile manufacturing and future healthcare applications.

[ PAL Robotics ]

Did you know that cameras today struggle to accurately measure distance? This is because current systems rely on limited data. DARPA’s CIDAR Challenge explores combining spatial, spectral, and temporal imaging data to unlock unprecedented accuracy. Advances made through the CIDAR challenge could revolutionize everything from battlefield awareness, to robotics, to environmental research.

[ DARPA ]

Innate is developing innately intelligent, teachable, general-purpose robots. Our platforms are simple and accessible, lowering the barrier to entry into robotics for everyone.

[ Innate ]

Drone-level autonomy, now underwater! In the last couple of years, we have invested in the concept of a unified autonomy solution that can operate virtually universally across robot configurations. A first major step to that end was to demonstrate that, as far as confined or cluttered environments are concerned, we can have aerial-robot-level autonomy underwater that is a) exclusively driven by vision in terms of perception (e.g., no sonars), and b) uses a “generalist” solution for path planning and safety (essentially identical to those in our research for flying robots!).

[ Norwegian University of Science and Technology post on LinkedIn ]

Thanks, Kostas!

ERA-42 is the world’s first truly end-to-end native robot large model matched to a five-finger dexterous hand, capable of performing over 100 intricate tasks using various tools. These include tightening screws with a screwdriver, hammering nails, righting overturned cups, and pouring water—tasks that highlight its remarkable adaptability and precision.

[ Robot Era ]

Thanks, Ni Tao!

Even if an android’s appearance is so realistic that it could be mistaken for a human in a photograph, watching it move in person can feel a bit unsettling. It can smile, frown, or display various other familiar expressions, but finding a consistent emotional state behind those expressions can be difficult, leaving you unsure of what it is truly feeling and creating a sense of unease. In this study, lead author Hisashi Ishihara and his research group developed a dynamic facial-expression synthesis technology using “waveform movements,” which represent the various gestures that constitute facial movements, such as “breathing,” “blinking,” and “yawning,” as individual waves. These waves are propagated to the related facial areas and overlaid to generate complex facial movements in real time.

[ Osaka University ]

The Suzumori Endo Lab at Science Tokyo has developed a self-excited vibration robot that can adapt to its environment. The robot can move straight ahead and self-steer around a corner without a control system.

[ Paper via IEEE Robotics and Automation magazine in IEEE Xplore ]

PlayBot is an unofficial, experimental accessory for Panic Inc.’s Playdate handheld console that transforms the console into a lovely little desktop robot.

[ Guillaume Loquin ] via [ Engadget ]

This is a big deal.

[ Ekso Bionics ]

Sanctuary AI introduces new tactile sensors for general purpose robots.

[ Sanctuary AI ]

Developed by the Pudu X-Lab, the PUDU D9 is designed with a human-centric philosophy that embodies the principle of “Born to Serve”. Its fully anthropomorphic design closely mirrors human capabilities, allowing it to offer practical assistance across a wide range of applications.

[ Pudu Robotics ]

EngineAI proudly unveils the PM01, our next-gen lightweight, high-dynamic, open-source humanoid robotic platform. With its interactive display, agile motion, and robust support for secondary development, PM01 is designed to be the most versatile tool for developers worldwide. PM01 is now available for purchase! We invite developers, researchers, and businesses to explore the future of robotics with PM01. Let’s push the boundaries of what robotics can achieve across different industries and use cases.

[ EngineAI ]

The third edition of CYBATHLON is now part of history. Held from October 25–27, 2024, at the SWISS Arena in Zürich, the event brought together 67 teams from 24 nations to compete in eight disciplines, showcasing state-of-the-art assistive technologies designed to help complete everyday tasks. While the winning teams celebrated their well-deserved victories, the event’s true spotlight was on the technological breakthroughs and their potential to transform lives. Equally remarkable was CYBATHLON’s emphasis on fostering social inclusion and empowering people with disabilities to overcome challenges through innovation.

[ Cybathlon ]

At the Kanda Myoujin Shrine in Tokyo, Aibos and their owners take part in the Shichi-go-san festival, which celebrates children (and robots!) at 3, 5, and 7 years old.

[ Aibo ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2025: 19–23 May 2025, ATLANTA, GA

Enjoy today’s videos!

At the FZI, it’s not just work for our robots; they join our festivities, too. Our shy robot Spot stumbled into this year’s FZI Winter Market …, a cheerful event for robots and humans alike. Will he find his place? We certainly hope so, because Feuerzangenbowle tastes much better after clinking glasses with your hot-oil-drinking friends.

[ FZI ]

Thanks, Georg!

The Fraunhofer IOSB Autonomous Robotic Systems Research Group wishes you a Merry Christmas filled with joy, peace, and robotic wonders!

[ Fraunhofer IOSB ]

Thanks, Janko!

There’s some thrilling action in this Christmas video from the PUT Mobile Robotics Laboratory, and the trick to put the lights on the tree is particularly clever. Enjoy!

[ PUT MRL ]

Thanks, Dominik!

The Norlab wishes you a Merry Christmas!

[ Northern Robotics Laboratory ]

The Learning Systems and Robotics Lab has made a couple of robot holiday videos based on the research that they’re doing:

[ Crowd Navigation ]


[ Learning with Contacts ]

Thanks, Sepehr!

Robots on a gift mission: Christmas greetings from the DFKI Robotics Innovation Center!

[ DFKI ]

Happy Holidays from Clearpath Robotics! Our workshop has been bustling lately with lots of exciting projects and integrations just in time for the holidays! The TurtleBot 4 elves helped load up the sleigh with plenty of presents to go around. Rudolph the Husky A300 made the trek through the snow so our Ridgeback friend with a manipulator arm and gripper could receive its gift.

[ Clearpath Robotics ]

2024 has been an eventful year for us at PAL Robotics, filled with milestones and memories. As the festive season approaches, we want to take a moment to say a heartfelt THANK YOU for being part of our journey!

[ PAL Robotics ]

Thanks, Rugilė!

In Santa’s shop, so bright and neat,
A robot marched on metal feet.
With tinsel arms and bolts so tight,
It trimmed the tree all through the night.
It hummed a carol, beeped with cheer,
“Processing joy—it’s Christmas here!”
But when it tried to dance with grace,
It tangled lights around its face.
“Error detected!” it spun around,
Then tripped and tumbled to the ground.
The elves all laughed, “You’ve done your part—
A clumsy bot, but with a heart!”

The ArtiMinds team would like to thank all partners and customers for an exciting 2024. We wish you and your families a Merry Christmas, joyful holidays, and a Happy New Year. Stay healthy.

[ ArtiMinds ]

Thanks to FANUC CRX collaborative robots, Santa and his elves can enjoy the holiday season knowing the work is getting done for the big night.

[ FANUC ]

Perhaps not technically a holiday video, until you consider how all that stuff you ordered online is actually getting to you.

[ Agility Robotics ]

Happy Holidays from Quanser, our best wishes for a wonderful holiday season and a happy 2025!

[ Quanser ]

Season’s Greetings from the team at Kawasaki Robotics USA! This season, we’re building blocks of memories filled with endless joy, and assembling our good wishes for a happy, healthy, prosperous new year. May the upcoming year be filled with opportunities and successes. From our team to yours, we hope you have a wonderful holiday season surrounded by loved ones and filled with joy and laughter.

[ Kawasaki Robotics ]

The robotics students at Queen’s University’s Ingenuity Labs Research Institute put together a 4K Holiday Robotics Lab Fireplace video, and unlike most fireplace videos, stuff actually happens in this one.

[ Ingenuity Labs ]

Thanks, Joshua!


