Video Friday: Swiss-Mile Robot vs. Humans
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Swiss-Mile’s robot (which is really any robot that meets the hardware requirement to run their software) is faster than “most humans.” So what does that mean, exactly?
The winner here is Riccardo Rancan, who doesn’t look like he was trying especially hard—he’s the world champion in high-speed urban orienteering, which is a sport that I did not know existed but sounds pretty awesome.
[ Swiss-Mile ]
Thanks, Marko!
Oh good, we’re building giant fruit fly robots now.
But seriously, this is useful and important research because understanding the relationship between a nervous system and a bunch of legs can only be helpful as we ask more and more of legged robotic platforms.
[ Paper ]
Thanks, Clarus!
Watching humanoids get up off the ground will never not be fascinating.
[ Fourier ]
The Kepler Forerunner K2 represents the Gen 5.0 robot model, showcasing a seamless integration of the humanoid robot’s cerebral, cerebellar, and high-load body functions.
[ Kepler ]
Diffusion Forcing combines the strengths of full-sequence diffusion models (like Sora) and next-token models (like LLMs), acting as either, or a mix of the two, at sampling time for different applications without retraining.
[ MIT ]
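To unpack that a little: the core trick in Diffusion Forcing is that every token in a sequence gets its own independently sampled noise level during training. At sampling time, giving all tokens one shared level behaves like full-sequence diffusion, while a level that rises along the sequence behaves like next-token generation. Here is a minimal sketch of the training step in Python; the denoiser network and the toy linear noise schedule are illustrative placeholders, not MIT’s code.

```python
import torch

def diffusion_forcing_step(denoiser, x, num_levels=1000):
    """One Diffusion Forcing-style training step (illustrative sketch).
    The key departure from standard diffusion: each token gets its OWN
    independently sampled noise level, so one trained model can later be
    sampled full-sequence-style (shared level across tokens) or
    next-token-style (noise rising along the sequence)."""
    B, T, D = x.shape                           # batch, sequence length, token dim
    k = torch.randint(0, num_levels, (B, T))    # per-token noise index
    alpha = 1.0 - k.float() / num_levels        # toy linear schedule in (0, 1]
    a = alpha.unsqueeze(-1)                     # (B, T, 1), broadcasts over D
    noise = torch.randn_like(x)
    x_noisy = a.sqrt() * x + (1.0 - a).sqrt() * noise
    pred = denoiser(x_noisy, k)                 # denoiser sees per-token levels
    return torch.nn.functional.mse_loss(pred, noise)
```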
Testing robot arms for space is no joke.
[ GITAI ]
Welcome to the Modular Robotics Lab (ModLab), a subgroup of the GRASP Lab and the Mechanical Engineering and Applied Mechanics Department at the University of Pennsylvania under the supervision of Prof. Mark Yim.
[ ModLab ]
This is much more amusing than it has any right to be.
Let’s go for a walk with Adam at IROS’24!
[ PNDbotics ]
From Reachy 1 in 2023 to our newly launched Reachy 2, our grippers have been designed to enhance precision and dexterity in object manipulation. Some of the models featured in the video are prototypes used for various tests, showing the innovation behind the scenes.
[ Pollen ]
I’m not sure how else you’d efficiently spray the tops of trees? Drones seem like a no-brainer here.
[ SUIND ]
Presented at ICRA40 in Rotterdam, we show the challenges faced by mobile manipulation platforms in the field. We at CSIRO Robotics are working steadily towards a collaborative approach to tackle such challenging technical problems.
[ CSIRO ]
ABB is best known for arms, but it looks like they’re exploring AMRs for warehouse operations now.
[ ABB ]
Howie Choset, Lu Li, and Victoria Webster-Wood of the Manufacturing Futures Institute explain their work to create specialized sensors that allow robots to “feel” the world around them.
[ CMU ]
Columbia Engineering Lecture Series in AI: “How Could Machines Reach Human-Level Intelligence?” by Yann LeCun.

Animals and humans understand the physical world, have common sense, possess a persistent memory, can reason, and can plan complex sequences of subgoals and actions. These essential characteristics of intelligent behavior are still beyond the capabilities of today’s most powerful AI architectures, such as Auto-Regressive LLMs.

I will present a cognitive architecture that may constitute a path towards human-level AI. The centerpiece of the architecture is a predictive world model that allows the system to predict the consequences of its actions and to plan sequences of actions that fulfill a set of objectives. The objectives may include guardrails that guarantee the system’s controllability and safety. The world model employs a Joint Embedding Predictive Architecture (JEPA) trained with self-supervised learning, largely by observation.
[ Columbia ]
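For readers who want the gist of JEPA in code: the defining move is predicting the target’s embedding from the context’s embedding, so the loss lives in representation space rather than pixel space. Below is a minimal sketch under that reading. The EMA target encoder is a collapse-avoidance trick borrowed from published JEPA variants such as I-JEPA; it is an assumption here, not something this abstract specifies.

```python
import copy
import torch
import torch.nn as nn

class TinyJEPA(nn.Module):
    """Minimal JEPA-style model (illustrative sketch): predict the EMBEDDING
    of a target view from a context view, instead of reconstructing pixels."""
    def __init__(self, dim_in=128, dim_emb=64):
        super().__init__()
        self.context_encoder = nn.Sequential(
            nn.Linear(dim_in, dim_emb), nn.ReLU(), nn.Linear(dim_emb, dim_emb))
        # Slowly updated copy of the context encoder; no gradients flow
        # through it, which helps avoid the trivial solution where every
        # input collapses to the same embedding.
        self.target_encoder = copy.deepcopy(self.context_encoder)
        for p in self.target_encoder.parameters():
            p.requires_grad = False
        self.predictor = nn.Linear(dim_emb, dim_emb)

    def loss(self, context_view, target_view):
        z_ctx = self.context_encoder(context_view)
        with torch.no_grad():
            z_tgt = self.target_encoder(target_view)   # prediction target
        return nn.functional.mse_loss(self.predictor(z_ctx), z_tgt)

    @torch.no_grad()
    def ema_update(self, tau=0.99):
        # Target encoder trails the context encoder as a moving average.
        for p_t, p_c in zip(self.target_encoder.parameters(),
                            self.context_encoder.parameters()):
            p_t.mul_(tau).add_((1.0 - tau) * p_c)
```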
This Inventor Is Molding Tomorrow’s Inventors
Marina Umaschi Bers has long been at the forefront of technological innovation for kids. In the 2010s, while teaching at Tufts University, in Massachusetts, she codeveloped the ScratchJr programming language and KIBO robotics kits, both intended for young children in STEM programs. Now head of the DevTech research group at Boston College, she continues to design learning technologies that promote computational thinking and cultivate a culture of engineering in kids.
What was the inspiration behind creating ScratchJr and the KIBO robot kits?
Marina Umaschi Bers: We want little kids—as they learn how to read and write, which are traditional literacies—to learn new literacies, such as how to code. To make that happen, we need to create child-friendly interfaces that are developmentally appropriate for their age, so they learn how to express themselves through computer programming.
How has the process of invention changed since you developed these technologies?
Bers: Now, with the maker culture, it’s a lot cheaper and easier to prototype things. And there’s more understanding that kids can be our partners as researchers and user-testers. They are not passive entities but active in expressing their needs and helping develop inventions that fit their goals.
What should people creating new technologies for kids keep in mind?
Bers: Not all kids are the same. You really need to look at the age of the kids. Try to understand developmentally where these children are in terms of their cognitive, social, emotional development. So when you’re designing, you’re designing not just for a user, but you’re designing for a whole human being.
The other thing is that in order to learn, children need to have fun. But they have fun by really being pushed to explore and create and make new things that are personally meaningful. So you need open-ended environments that allow children to explore and express themselves.
The KIBO kits teach kids robotics coding in a playful and screen-free way. KinderLab Robotics
How can coding and learning about robots bring out the inner inventors in kids?
Bers: I use the words “coding playground.” In a playground, children are inventing games all the time. They are inventing situations, they’re doing pretend play, they’re making things. So if we’re thinking of that as a metaphor when children are coding, it’s a platform for them to create, to make characters, to create stories, to make anything they want. In this idea of the coding playground, creativity is welcome—not just “follow what the teacher says” but let children invent their own projects.
What do you hope for in terms of the next generation of technologies for kids?
Bers: I hope we would see a lot more technologies that are outside. Right now, one of our projects is called Smart Playground [a project that will incorporate motors, sensors, and other devices into playgrounds to bolster computational thinking through play]. Children are able to use their bodies and run around and interact with others. It’s kind of getting away from the one-on-one relationship with the screen. Instead, technology is really going to augment the possibilities of people to interact with other people, and use their whole bodies, much of their brains, and their hands. These technologies will allow children to explore a little bit more of what it means to be human and what’s unique about us.
Why Simone Giertz, the Queen of Useless Robots, Got Serious
Simone Giertz came to fame in the 2010s by becoming the self-proclaimed “queen of shitty robots.” On YouTube she demonstrated a hilarious series of self-built mechanized devices that worked perfectly for ridiculous applications, such as a headboard-mounted alarm clock with a rubber hand to slap the user awake.
But Giertz has parlayed her Internet renown into Yetch, a design company that makes commercial consumer products. (The company name comes from how Giertz’s Swedish name is properly pronounced.) Her first release, a daily habit-tracking calendar, was picked up by prestigious outlets such as the Museum of Modern Art design store in New York City. She has continued to make commercial products since, as well as one-off strange inventions for her online audience.
Where did the motivation for your useless robots come from?
Simone Giertz: I just thought that robots that failed were really funny. It was also a way for me to get out of creating from a place of performance anxiety and perfection. Because if you set out to do something that fails, that gives you a lot of creative freedom.
You built up a big online following. A lot of people would be happy with that level of success. But you moved into inventing commercial products. Why?
Giertz: I like torturing myself, I guess! I’d been creating things for YouTube and for social media for a long time. I wanted to try something new and also find longevity in my career. I’m not super motivated to constantly try to get people to give me attention. That doesn’t feel like a very good value to strive for. So I was like, “Okay, what do I want to do for the rest of my career?” And developing products is something that I’ve always been really, really interested in. And yeah, it is tough, but I’m so happy to be doing it. I’m enjoying it thoroughly, as much as there’s a lot of face-palm moments.
Giertz’s Every Day Goal Calendar was picked up by the Museum of Modern Art’s design store. Yetch
What role does failure play in your invention process?
Giertz: I think it’s inevitable. Before, obviously, I wanted something that failed in the most unexpected or fun way possible. And now when I’m developing products, it’s still a part of it. You make so many different versions of something and each one fails because of something. But then, hopefully, what happens is that you get smaller and smaller failures. Product development feels like you’re going in circles, but you’re actually going in a spiral because the circles are taking you somewhere.
What advice do you have for aspiring inventors?
Giertz: Make things that you want. A lot of people make things that they think that other people want, but the main target audience, at least for myself, is me. I trust that if I find something interesting, there are probably other people who do too. And then just find good people to work with and collaborate with. There is no such thing as the lonely genius, I think. I’ve worked with a lot of different people and some people made me really nervous and anxious. And some people, it just went easy and we had a great time. You’re just like, “Oh, what if we do this? What if we do this?” Find those people.
Remote Sub Sustains Science Kilometers Underwater
The water column is hazy as an unusual remotely operated vehicle glides over the seafloor in search of a delicate tilt meter deployed three years ago off the west side of Vancouver Island. The sensor measures shaking and shifting in continental plates that will eventually unleash another of the region’s 9.0-scale earthquakes (the last was in 1700), and dwindling charge in the instruments’ data loggers threatens the continuity of the data.
The 4-metric-ton, C$8-million (US $5.8-million) remotely operated vehicle (ROV) is 50 meters from its target when one of the seismic science platforms appears on its sonar imaging system, the platform’s hard edges crystallizing from the grainy background like a surgical implant jumping out of an ultrasound image. After easing the ROV to the platform, operators 2,575 meters up at the Pacific’s surface instruct its electromechanical arms and pincer hands to deftly unplug a data logger, then plug in a replacement with a fresh battery.
This mission, executed in early October, marked an exciting moment for Josh Tetarenko, director of ROV operations at North Vancouver, BC-based Canpac Marine Services. Tetarenko is the lead designer behind the new science submersible and recently dubbed it “Jenny” in homage to Forrest Gump, because the fictional character named all of his boats Jenny. Swapping out the data loggers west of Vancouver Island’s Clayoquot Sound was part of a week-long shakedown to test Jenny’s unique combination of dexterity, visualization chops, power, and pressure resistance.
Jenny is only the third science ROV designed for subsea work to a depth of 6,000 meters.
By all accounts Jenny sailed through. Tetarenko says the worst they saw was a leaky O-ring and the need to add some spring to a few bumpers. “Usually you see more things come up the first time you dive a vehicle to those depths,” says Tetarenko.
Jenny’s successful maiden cruise is just as important for Victoria, B.C.-based Ocean Networks Canada (ONC), which operates the NEPTUNE undersea observatory. Short for North-East Pacific Time-series Undersea Networked Experiments, the array boasts thousands of sensors and instruments, including deep-sea video cameras, seismometers, and robotic rovers sprawled across this corner of the Pacific. Most of these are connected to shore via an 812-kilometer power and communications cable. Jenny was custom-designed to perform the annual maintenance and equipment swaps that have kept live data streaming from that cabled observatory nearly continuously for the past 15 years, despite trawler strikes, a fault on its backbone cable, and insults from corrosion, crushing pressures, and fouling.
NEPTUNE remains one of the world’s largest installations for oceanographic science despite a proliferation of such cabled observatories since it went live in 2009. ONC’s open data portal has over 37,000 registered users tapping over 1.5 petabytes of ocean data—information that’s growing in importance with the intensification of climate change and the collapse of marine ecosystems.
Over the course of Jenny’s maiden cruise her operators swapped devices in and out at half a dozen ONC sites, including at several of Neptune’s five nodes and at one of Neptune’s smaller sister observatories closer to Vancouver.
Inside Jenny
ROV ‘Jenny’ aboard the Valour, Canpac’s 50-meter offshore workhorse, ahead of October’s Neptune observatory maintenance cruise. Ocean Networks Canada
What makes Jenny so special?
- Jenny is only the third science ROV designed for subsea work to a depth of 6,000 meters.
- Motion sensors actively adjust her 7,000-meter-long umbilical cable to counteract topside wave action that would otherwise yank the ROV around at depth and, in rough seas, could damage or snap the cable (a toy control sketch follows this list).
- Dual high-dexterity manipulator arms are controlled by topside operators via a pair of replica mini-manipulators that mirror the movements.
- Each arm can pick up objects weighing about 275 kilograms, and the ROV itself can transport equipment weighing up to 3,000 kilograms.
- 11 high-resolution cameras deliver 4K video, supported by 300,000 lumens of lighting that can be tuned to deliver the soft red light needed to observe bioluminescence.
- Dual multi-beam sonar systems maximize visibility in turbid water.
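The article doesn’t spell out the compensation algorithm behind that second bullet, but the standard approach on work-class ROV handling systems is active heave compensation: a motion reference unit measures vessel heave, and the winch pays cable in and out to cancel it. Here is a toy sketch of that idea; the gains, the sine-swell “measurement,” and the function name are all illustrative assumptions, not Canpac’s implementation.

```python
import math

def active_heave_compensation_demo(duration_s=60.0, dt=0.05,
                                   wave_amp_m=1.5, wave_period_s=8.0, kp=2.0):
    """Toy active heave compensation loop (illustrative, not Canpac's system).
    A motion reference unit measures vessel heave; the winch pays cable out
    as the ship rises and reels it in as the ship falls, so the ROV end of
    the umbilical stays (nearly) still."""
    payout_m = 0.0        # extra cable currently paid out, meters
    worst_error_m = 0.0
    omega = 2 * math.pi / wave_period_s
    for i in range(int(duration_s / dt)):
        t = i * dt
        heave_m = wave_amp_m * math.sin(omega * t)               # measured heave
        heave_rate = wave_amp_m * omega * math.cos(omega * t)    # measured rate
        # Feedforward on the measured heave rate plus a proportional
        # correction on the remaining mismatch.
        error_m = heave_m - payout_m
        winch_rate = heave_rate + kp * error_m
        payout_m += winch_rate * dt
        worst_error_m = max(worst_error_m, abs(error_m))
    return worst_error_m  # residual motion seen by the ROV, in meters

print(active_heave_compensation_demo())  # small residual vs. a 1.5 m swell
```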
Meghan Paulson, ONC’s executive director for observatory operations, says the sonar imaging system will be particularly invaluable during dives to shallower sites where sediments stirred up by waves and weather can cut visibility from meters to centimeters. “It really reduces the risk of running into things accidentally,” says Paulson.
To experience the visibility conditions for yourself, check out recordings of the live video broadcast from the NEPTUNE Maintenance Cruise. Tetarenko says that next year they hope to broadcast not only the main camera feed but also one of the sonar images.
3D video could be next, according to Canpac ROV pilot and Jenny co-designer James Barnett. He says they would need to boost the computing power installed topside to process that “firehose of data,” but insists that real-time 3D is “definitely not impossible.” Tetarenko says the science ROV community is collaborating on software to help make that workable: “3D imaging is kind of the very latest thing that’s being tested on lots of ROV systems right now, but nobody’s really there yet.”
More Than Science
Expansion of the cabled observatory concept is the more certain technological legacy for ONC and Neptune. In fact, the technology has evolved beyond just oceanography applications.
ONC tapped Alcatel Submarine Networks (ASN) to design and build the Neptune backbone, and the French firm delivered a system that has reliably supplied multigigabit Ethernet plus 10 kilovolts of direct-current electricity to the deep sea. Today ASN deploys a second-generation subsea power and communications networking solution, developed with Norwegian oil and gas major Equinor.
ASN’s ‘Direct Current / Fiber Optic’ or DC/FO system provides the 100-km backbone for the ARCA subsea neutrino observatory near Sicily, in addition to providing control systems for a growing number of offshore oil and gas installations. The latter include projects led by Equinor and BP where DC/FO networks drive the subsea injection of captured carbon dioxide and monitor its storage below the seabed. Future oil and gas projects will increasingly rely on the cables’ power supply to replace the hydraulic lines that have traditionally been used to operate machinery on the seafloor, according to Ronan Michel, ASN’s product line manager for oil and gas solutions.
Michel says DC/FO incorporates important lessons learned from the Neptune installation. And the latter’s existence was a crucial prerequisite. “The DC/FO solution would probably not exist if Neptune Canada would not have been developed,” says Michel. “It probably gave confidence to Equinor that ASN was capable to develop subsea power & coms infrastructure.”
Video Friday: Mobile Robot Upgrades
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
One of the most venerable (and recognizable) mobile robots ever made, the Husky, has just gotten a major upgrade.
Shipping early next year.
MAB Robotics is developing legged robots for the inspection and maintenance of industrial infrastructure. One of the initial areas for deploying this technology is underground infrastructure, such as water and sewer canals. In these environments, resistance to factors like high humidity and working underwater is essential. To address these challenges, the MAB team has built a walking robot capable of operating fully submerged, based on exceptional self-developed robotics actuators. This innovation overcomes the limitations of current technologies, offering MAB’s first clients a unique service for trenchless inspection and maintenance tasks.
[ MAB Robotics ]
Thanks, Jakub!
The G1 robot can perform a standing long jump of up to 1.4 meters, possibly the longest jump ever achieved by a humanoid robot of its size in the world, standing only 1.32 meters tall.
[ Unitree Robotics ]
Apparently, you can print out a functional four-fingered hand on an inkjet.
[ UC Berkeley ]
We present SDS (“See it. Do it. Sorted.”), a novel pipeline for intuitive quadrupedal skill learning from a single demonstration video, leveraging the visual capabilities of GPT-4o. We validate our method on the Unitree Go1 robot, demonstrating its ability to execute variable skills such as trotting, bounding, pacing, and hopping, achieving high imitation fidelity and locomotion stability.
[ Robot Perception Lab, University College London ]
You had me at “3D desk octopus.”
[ UIST 2024 ACM Symposium on User Interface Software and Technology ]
Top-notch swag from Dusty Robotics.
[ Dusty Robotics ]
I’m not sure how serious this shoes-versus-no-shoes test is, but it’s an interesting result nonetheless.
[ Robot Era ]
Thanks, Ni Tao!
Introducing TRON 1, the first multimodal biped robot! With its innovative “Three-in-One” modular design, TRON 1 can easily switch among Point-Foot, Sole, and Wheeled foot ends.
[ LimX Dynamics ]
Recent works in the robot-learning community have successfully introduced generalist models capable of controlling various robot embodiments across a wide range of tasks, such as navigation and locomotion. However, achieving agile control, which pushes the limits of robotic performance, still relies on specialist models that require extensive parameter tuning. To leverage generalist-model adaptability and flexibility while achieving specialist-level agility, we propose AnyCar, a transformer-based generalist dynamics model designed for agile control of various wheeled robots.
[ AnyCar ]
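For context, a “generalist dynamics model” here means one network that predicts how many different wheeled platforms respond to commands, given a history of states and actions. A bare-bones sketch of that pattern follows; the dimensions, layer counts, and names are placeholders, not the AnyCar architecture.

```python
import torch
import torch.nn as nn

class HistoryDynamicsModel(nn.Module):
    """Bare-bones transformer dynamics model (illustrative, not AnyCar's code):
    given a history of state-action pairs from a wheeled robot, predict the
    next state. Training one such model on data from many embodiments is
    what would make it a 'generalist'."""
    def __init__(self, state_dim=6, action_dim=2, d_model=64, history_len=16):
        super().__init__()
        self.embed = nn.Linear(state_dim + action_dim, d_model)
        # Learned positional embedding; assumes inputs have history_len steps.
        self.pos = nn.Parameter(torch.zeros(1, history_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, state_dim)

    def forward(self, states, actions):
        # states: (B, T, state_dim), actions: (B, T, action_dim), T = history_len
        h = self.embed(torch.cat([states, actions], dim=-1)) + self.pos
        h = self.encoder(h)
        # Predict a delta from the last observed state.
        return states[:, -1] + self.head(h[:, -1])
```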
Discover the future of aerial manipulation with our untethered soft robotic platform with onboard perception stack! Presented at the 2024 Conference on Robot Learning, in Munich, this platform introduces autonomous aerial manipulation that works in both indoor and outdoor environments—without relying on costly off-board tracking systems.
[ Paper ] via [ ETH Zurich Soft Robotics Laboratory ]
Deploying perception modules for human-robot handovers is challenging because they require a high degree of reactivity, generalizability, and robustness to work reliably for diverse cases. Here, we show hardware handover experiments using our efficient and object-agnostic real-time tracking framework, specifically designed for human-to-robot handover tasks with legged manipulators.
[ Paper ] via [ ETH Zurich Robotic Systems Lab ]
Azi and Ameca are killing time, but Azi struggles with being the new kid around. Engineered Arts desktop robots feature 32 actuators, 27 for facial control alone, and 5 for the neck. They include AI conversational ability, including GPT-4o support, which makes them great robotic companions, even to each other. The robots are following a script for this video, using one of their many voices.
[ Engineered Arts ]
Plato automates carrying and transporting, giving your staff more time to focus on what really matters, improving their quality of life. With a straightforward setup that requires no markers or additional hardware, Plato is incredibly intuitive to use—no programming skills needed.
[ Aldebaran ]
This UPenn GRASP Lab seminar is from Antonio Loquercio, on “Simulation: What made us intelligent will make our robots intelligent.”
Simulation-to-reality transfer is an emerging approach that enables robots to develop skills in simulated environments before applying them in the real world. This method has catalyzed numerous advancements in robotic learning, from locomotion to agile flight. In this talk, I will explore simulation-to-reality transfer through the lens of evolutionary biology, drawing intriguing parallels with the function of the mammalian neocortex. By reframing this technique in the context of biological evolution, we can uncover novel research questions and explore how simulation-to-reality transfer can evolve from an empirically driven process to a scientific discipline.
Boston Dynamics and Toyota Research Team Up on Robots
Today, Boston Dynamics and the Toyota Research Institute (TRI) announced a new partnership “to accelerate the development of general-purpose humanoid robots utilizing TRI’s Large Behavior Models and Boston Dynamics’ Atlas robot.” Committing to working towards a general-purpose robot may make this partnership sound like every other commercial humanoid company right now, but that’s not at all what’s going on here: BD and TRI are talking about fundamental robotics research, focusing on hard problems, and (most importantly) sharing the results.
The broader context here is that Boston Dynamics has an exceptionally capable humanoid platform capable of advanced and occasionally painful-looking whole-body motion behaviors along with some relatively basic and brute force-y manipulation. Meanwhile, TRI has been working for quite a while on developing AI-based learning techniques to tackle a variety of complicated manipulation challenges. TRI is working toward what they’re calling large behavior models (LBMs), which you can think of as analogous to large language models (LLMs), except for robots doing useful stuff in the physical world. The appeal of this partnership is pretty clear: Boston Dynamics gets new useful capabilities for Atlas, while TRI gets Atlas to explore new useful capabilities on.
Here’s a bit more from the press release:
The project is designed to leverage the strengths and expertise of each partner equally. The physical capabilities of the new electric Atlas robot, coupled with the ability to programmatically command and teleoperate a broad range of whole-body bimanual manipulation behaviors, will allow research teams to deploy the robot across a range of tasks and collect data on its performance. This data will, in turn, be used to support the training of advanced LBMs, utilizing rigorous hardware and simulation evaluation to demonstrate that large, pre-trained models can enable the rapid acquisition of new robust, dexterous, whole-body skills.

The joint team will also conduct research to answer fundamental training questions for humanoid robots, the ability of research models to leverage whole-body sensing, and understanding human-robot interaction and safety/assurance cases to support these new capabilities.
For more details, we spoke with Scott Kuindersma (Senior Director of Robotics Research at Boston Dynamics) and Russ Tedrake (VP of Robotics Research at TRI).
How did this partnership happen?
Russ Tedrake: We have a ton of respect for the Boston Dynamics team and what they’ve done, not only in terms of the hardware, but also the controller on Atlas. They’ve been growing their machine learning effort as we’ve been working more and more on the machine learning side. On TRI’s side, we’re seeing the limits of what you can do in tabletop manipulation, and we want to explore beyond that.
Scott Kuindersma: The combination of skills and tools that TRI brings to the table with the existing platform capabilities we have at Boston Dynamics, in addition to the machine learning teams we’ve been building up for the last couple of years, puts us in a really great position to hit the ground running together and do some pretty amazing stuff with Atlas.
What will your approach be to communicating your work, especially in the context of all the craziness around humanoids right now?
Tedrake: There’s a ton of pressure right now to do something new and incredible every six months or so. In some ways, it’s healthy for the field to have that much energy and enthusiasm and ambition. But I also think that there are people in the field that are coming around to appreciate the slightly longer and deeper view of understanding what works and what doesn’t, so we do have to balance that.
The other thing that I’d say is that there’s so much hype out there. I am incredibly excited about the promise of all this new capability; I just want to make sure that as we’re pushing the science forward, we’re being also honest and transparent about how well it’s working.
Kuindersma: It’s not lost on either of our organizations that this is maybe one of the most exciting points in the history of robotics, but there’s still a tremendous amount of work to do.
What are some of the challenges that your partnership will be uniquely capable of solving?
Kuindersma: One of the things that we’re both really excited about is the scope of behaviors that are possible with humanoids—a humanoid robot is much more than a pair of grippers on a mobile base. I think the opportunity to explore the full behavioral capability space of humanoids is probably something that we’re uniquely positioned to do right now because of the historical work that we’ve done at Boston Dynamics. Atlas is a very physically capable robot—the most capable humanoid we’ve ever built. And the platform software that we have allows for things like data collection for whole body manipulation to be about as easy as it is anywhere in the world.
Tedrake: In my mind, we really have opened up a brand new science—there’s a new set of basic questions that need answering. Robotics has come into this era of big science where it takes a big team and a big budget and strong collaborators to basically build the massive data sets and train the models to be in a position to ask these fundamental questions.
Fundamental questions like what?
Tedrake: Nobody has the beginnings of an idea of what the right training mixture is for humanoids. Like, we want to do pre-training with language, that’s way better, but how early do we introduce vision? How early do we introduce actions? Nobody knows. What’s the right curriculum of tasks? Do we want some easy tasks where we get greater than zero performance right out of the box? Probably. Do we also want some really complicated tasks? Probably. We want to be just in the home? Just in the factory? What’s the right mixture? Do we want backflips? I don’t know. We have to figure it out.
There are more questions too, like whether we have enough data on the Internet to train robots, and how we could mix and transfer capabilities from Internet data sets into robotics. Is robot data fundamentally different than other data? Should we expect the same scaling laws? Should we expect the same long-term capabilities?
The other big one that you’ll hear the experts talk about is evaluation, which is a major bottleneck. If you look at some of these papers that show incredible results, the statistical strength of their results section is very weak and consequently we’re making a lot of claims about things that we don’t really have a lot of basis for. It will take a lot of engineering work to carefully build up empirical strength in our results. I think evaluation doesn’t get enough attention.
What has changed in robotics research in the last year or so that you think has enabled the kind of progress that you’re hoping to achieve?
Kuindersma: From my perspective, there are two high-level things that have changed how I’ve thought about work in this space. One is the convergence of the field around repeatable processes for training manipulation skills through demonstrations. The pioneering work of diffusion policy (which TRI was a big part of) is a really powerful thing—it takes the process of generating manipulation skills that previously were basically unfathomable, and turns it into something where you just collect a bunch of data, you train it on an architecture that’s more or less stable at this point, and you get a result.
The second thing is everything that’s happened in robotics-adjacent areas of AI showing that data scale and diversity are really the keys to generalizable behavior. We expect that to also be true for robotics. And so taking these two things together, it makes the path really clear, but I still think there are a ton of open research challenges and questions that we need to answer.
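The “repeatable process” Kuindersma describes (collect demonstrations, train a noise-prediction network over action chunks, then sample actions by denoising) reduces to a surprisingly small training loop. Below is a skeletal sketch of that recipe; the noise_pred_net and the demos iterator are abstract placeholders, and the schedule is a generic DDPM-style one, not TRI’s actual code.

```python
import torch

def train_diffusion_policy(noise_pred_net, demos, steps=10_000, T=100, lr=1e-4):
    """Skeletal diffusion-policy training loop (illustrative sketch).
    The policy is a network that predicts the noise added to an action chunk,
    conditioned on the observation; acting later means running the reverse
    denoising process. `demos` is assumed to be an iterator yielding
    (observation, action_chunk) tensor batches."""
    opt = torch.optim.Adam(noise_pred_net.parameters(), lr=lr)
    betas = torch.linspace(1e-4, 0.02, T)
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # standard DDPM schedule
    for step in range(steps):
        obs, actions = next(demos)                   # demonstration batch
        t = torch.randint(0, T, (actions.shape[0],)) # random diffusion step
        a_bar = alphas_bar[t].view(-1, *[1] * (actions.dim() - 1))
        noise = torch.randn_like(actions)
        noisy_actions = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise
        # Train the network to recover the injected noise, given obs and t.
        loss = torch.nn.functional.mse_loss(
            noise_pred_net(noisy_actions, t, obs), noise)
        opt.zero_grad(); loss.backward(); opt.step()
```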
Do you think that simulation is an effective way of scaling data for robotics?
Tedrake: I think generally people underestimate simulation. The work we’ve been doing has made me very optimistic about the capabilities of simulation as long as you use it wisely. Focusing on a specific robot doing a specific task is asking the wrong question; you need to get the distribution of tasks and performance in simulation to be predictive of the distribution of tasks and performance in the real world. There are some things that are still hard to simulate well, but even when it comes to frictional contact and stuff like that, I think we’re getting pretty good at this point.
Is there a commercial future for this partnership that you’re able to talk about?
Kuindersma: For Boston Dynamics, clearly we think there’s long-term commercial value in this work, and that’s one of the main reasons why we want to invest in it. But the purpose of this collaboration is really about fundamental research—making sure that we do the work, advance the science, and do it in a rigorous enough way so that we actually understand and trust the results and we can communicate that out to the world. So yes, we see tremendous value in this commercially. Yes, we are commercializing Atlas, but this project is really about fundamental research.
What happens next?
Tedrake: There are questions at the intersection of things that BD has done and things that TRI has done that we need to do together to start, and that’ll get things going. And then we have big ambitions—getting a generalist capability that we’re calling LBM (large behavior models) running on Atlas is the goal. In the first year we’re trying to focus on these fundamental questions, push boundaries, and write and publish papers.
I want people to be excited about watching for our results, and I want people to trust our results when they see them. For me, that’s the most important message for the robotics community: Through this partnership we’re trying to take a longer view that balances our extreme optimism with being critical in our approach.
Boston Dynamics and Toyota Research Team Up on Robots
Today, Boston Dynamics and the Toyota Research Institute (TRI) announced a new partnership “to accelerate the development of general-purpose humanoid robots utilizing TRI’s Large Behavior Models and Boston Dynamics’ Atlas robot.” Committing to working towards a general purpose robot may make this partnership sound like a every other commercial humanoid company right now, but that’s not at all that’s going on here: BD and TRI are talking about fundamental robotics research, focusing on hard problems, and (most importantly) sharing the results.
The broader context here is that Boston Dynamics has an exceptionally capable humanoid platform capable of advanced and occasionally painful-looking whole-body motion behaviors along with some relatively basic and brute force-y manipulation. Meanwhile, TRI has been working for quite a while on developing AI-based learning techniques to tackle a variety of complicated manipulation challenges. TRI is working toward what they’re calling large behavior models (LBMs), which you can think of as analogous to large language models (LLMs), except for robots doing useful stuff in the physical world. The appeal of this partnership is pretty clear: Boston Dynamics gets new useful capabilities for Atlas, while TRI gets Atlas to explore new useful capabilities on.
Here’s a bit more from the press release:
The project is designed to leverage the strengths and expertise of each partner equally. The physical capabilities of the new electric Atlas robot, coupled with the ability to programmatically command and teleoperate a broad range of whole-body bimanual manipulation behaviors, will allow research teams to deploy the robot across a range of tasks and collect data on its performance. This data will, in turn, be used to support the training of advanced LBMs, utilizing rigorous hardware and simulation evaluation to demonstrate that large, pre-trained models can enable the rapid acquisition of new robust, dexterous, whole-body skills.
The joint team will also conduct research to answer fundamental training questions for humanoid robots, the ability of research models to leverage whole-body sensing, and understanding human-robot interaction and safety/assurance cases to support these new capabilities.
For more details, we spoke with Scott Kuindersma (Senior Director of Robotics Research at Boston Dynamics) and Russ Tedrake (VP of Robotics Research at TRI).
How did this partnership happen?
Russ Tedrake: We have a ton of respect for the Boston Dynamics team and what they’ve done, not only in terms of the hardware, but also the controller on Atlas. They’ve been growing their machine learning effort as we’ve been working more and more on the machine learning side. On TRI’s side, we’re seeing the limits of what you can do in tabletop manipulation, and we want to explore beyond that.
Scott Kuindersma: The combination of skills and tools that TRI brings to the table with the existing platform capabilities we have at Boston Dynamics, in addition to the machine learning teams we’ve been building up for the last couple of years, puts us in a really great position to hit the ground running together and do some pretty amazing stuff with Atlas.
What will your approach be to communicating your work, especially in the context of all the craziness around humanoids right now?
Tedrake: There’s a ton of pressure right now to do something new and incredible every six months or so. In some ways, it’s healthy for the field to have that much energy and enthusiasm and ambition. But I also think that there are people in the field who are coming around to appreciate the slightly longer and deeper view of understanding what works and what doesn’t, so we do have to balance that.
The other thing that I’d say is that there’s so much hype out there. I am incredibly excited about the promise of all this new capability; I just want to make sure that as we’re pushing the science forward, we’re also being honest and transparent about how well it’s working.
Kuindersma: It’s not lost on either of our organizations that this is maybe one of the most exciting points in the history of robotics, but there’s still a tremendous amount of work to do.
What are some of the challenges that your partnership will be uniquely capable of solving?
Kuindersma: One of the things that we’re both really excited about is the scope of behaviors that are possible with humanoids—a humanoid robot is much more than a pair of grippers on a mobile base. I think the opportunity to explore the full behavioral capability space of humanoids is probably something that we’re uniquely positioned to do right now because of the historical work that we’ve done at Boston Dynamics. Atlas is a very physically capable robot—the most capable humanoid we’ve ever built. And the platform software that we have allows for things like data collection for whole body manipulation to be about as easy as it is anywhere in the world.
Tedrake: In my mind, we really have opened up a brand new science—there’s a new set of basic questions that need answering. Robotics has come into this era of big science where it takes a big team and a big budget and strong collaborators to basically build the massive data sets and train the models to be in a position to ask these fundamental questions.
Fundamental questions like what?
Tedrake: Nobody has the beginnings of an idea of what the right training mixture is for humanoids. Like, we want to do pre-training with language, that’s way better, but how early do we introduce vision? How early do we introduce actions? Nobody knows. What’s the right curriculum of tasks? Do we want some easy tasks where we get greater than zero performance right out of the box? Probably. Do we also want some really complicated tasks? Probably. We want to be just in the home? Just in the factory? What’s the right mixture? Do we want backflips? I don’t know. We have to figure it out.
There are more questions too, like whether we have enough data on the Internet to train robots, and how we could mix and transfer capabilities from Internet data sets into robotics. Is robot data fundamentally different than other data? Should we expect the same scaling laws? Should we expect the same long-term capabilities?
The other big one that you’ll hear the experts talk about is evaluation, which is a major bottleneck. If you look at some of these papers that show incredible results, the statistical strength of their results section is very weak, and consequently we’re making a lot of claims that we don’t really have a basis for. It will take a lot of engineering work to carefully build up empirical strength in our results. I think evaluation doesn’t get enough attention.
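To make Tedrake’s point concrete, here is a minimal sketch (my own illustration, not any evaluation protocol used by TRI or Boston Dynamics) of how weak a small results section really is, using the standard Wilson score interval for a binomial success rate:

```python
# Illustrative only: how uncertain a reported robot-task success rate is.
# Uses the standard Wilson score interval for a binomial proportion.
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Approximate 95% confidence interval for successes/trials."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - half, center + half

print(wilson_interval(9, 10))    # 9/10 "90%" success: roughly (0.60, 0.98)
print(wilson_interval(180, 200)) # 180/200: roughly (0.85, 0.93)
```

A 9-out-of-10 demo is consistent with anything from a 60 percent to a 98 percent true success rate; only much larger trial counts produce intervals tight enough to actually compare methods.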
What has changed in robotics research in the last year or so that you think has enabled the kind of progress that you’re hoping to achieve?
Kuindersma: From my perspective, there are two high-level things that have changed how I’ve thought about work in this space. One is the convergence of the field around repeatable processes for training manipulation skills through demonstrations. The pioneering work of diffusion policy (which TRI was a big part of) is a really powerful thing—it takes the process of generating manipulation skills that previously were basically unfathomable, and turned it into something where you just collect a bunch of data, you train it on an architecture that’s more or less stable at this point, and you get a result.
The second thing is everything that’s happened in robotics-adjacent areas of AI showing that data scale and diversity are really the keys to generalizable behavior. We expect that to also be true for robotics. And so taking these two things together, it makes the path really clear, but I still think there are a ton of open research challenges and questions that we need to answer.
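For readers who haven’t followed this work: the diffusion-policy recipe Kuindersma describes boils down to training a network to denoise demonstrated actions, conditioned on observations. Here is a deliberately simplified sketch of that training step (a toy MLP version under my own assumptions, not TRI’s actual architecture or code):

```python
# Toy sketch of diffusion-policy training: given (observation, action)
# demonstration pairs, learn to predict the noise added to the action.
import torch
import torch.nn as nn

T = 100  # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

class DenoisingPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, noisy_action, t):
        # Condition on the observation, the noised action, and the step index.
        t_feat = t.float().unsqueeze(-1) / T
        return self.net(torch.cat([obs, noisy_action, t_feat], dim=-1))

def training_step(policy, obs, action, optimizer):
    t = torch.randint(0, T, (obs.shape[0],))
    noise = torch.randn_like(action)
    a_bar = alphas_cumprod[t].unsqueeze(-1)
    noisy = a_bar.sqrt() * action + (1 - a_bar).sqrt() * noise
    loss = ((policy(obs, noisy, t) - noise) ** 2).mean()  # predict the noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At deployment, you start from Gaussian noise and iteratively denoise it into an action conditioned on the current observation; real systems condition on learned image features and predict short action sequences rather than single actions, but the recipe is the same.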
Do you think that simulation is an effective way of scaling data for robotics?
Tedrake: I think generally people underestimate simulation. The work we’ve been doing has made me very optimistic about the capabilities of simulation as long as you use it wisely. Focusing on a specific robot doing a specific task is asking the wrong question; you need to get the distribution of tasks and performance in simulation to be predictive of the distribution of tasks and performance in the real world. There are some things that are still hard to simulate well, but even when it comes to frictional contact and stuff like that, I think we’re getting pretty good at this point.
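One way to read that answer: the thing to validate is not a single task, but whether per-task performance in simulation predicts per-task performance on hardware. A hedged sketch with invented numbers (purely illustrative, not data from TRI):

```python
# Illustrative only: do simulated per-task success rates predict real ones?
# All numbers are invented for the example.
import numpy as np

tasks = ["pick mug", "open drawer", "fold towel", "wipe table", "stack cups"]
sim_success = np.array([0.92, 0.75, 0.40, 0.88, 0.61])   # measured in sim
real_success = np.array([0.89, 0.70, 0.35, 0.80, 0.55])  # measured on robot

r = np.corrcoef(sim_success, real_success)[0, 1]
print(f"sim/real correlation across {len(tasks)} tasks: {r:.2f}")
```

If the correlation stays high across a wide task distribution, the simulator is useful as a predictor even when any individual task is somewhat off.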
Is there a commercial future for this partnership that you’re able to talk about?
Kuindersma: For Boston Dynamics, clearly we think there’s long-term commercial value in this work, and that’s one of the main reasons why we want to invest in it. But the purpose of this collaboration is really about fundamental research—making sure that we do the work, advance the science, and do it in a rigorous enough way so that we actually understand and trust the results and we can communicate that out to the world. So yes, we see tremendous value in this commercially. Yes, we are commercializing Atlas, but this project is really about fundamental research.
What happens next?
Tedrake: There are questions at the intersection of things that BD has done and things that TRI has done that we need to do together to start, and that’ll get things going. And then we have big ambitions—getting a generalist capability that we’re calling LBMs (large behavior models) running on Atlas is the goal. In the first year we’re trying to focus on these fundamental questions, push boundaries, and write and publish papers.
I want people to be excited about watching for our results, and I want people to trust our results when they see them. For me, that’s the most important message for the robotics community: Through this partnership we’re trying to take a longer view that balances our extreme optimism with being critical in our approach.
Video Friday: Reachy 2
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
At ICRA 2024, we sat down with Pollen Robotics to talk about Reachy 2 O_o
[ Pollen Robotics ]
A robot pangolin designed to plant trees is the winner of the 2023 Natural Robotics Contest, which rewards robot designs inspired by nature. As the winning entry, the pangolin—dubbed “Plantolin”—has been brought to life by engineers at the University of Surrey in the United Kingdom. Out of 184 entries, the winning design came from Dorothy, a high school student from California.
Dr. Rob Siddall, a roboticist at the University of Surrey who built Plantolin, said, “In the wild, large animals will cut paths through the overgrowth and move seeds. This doesn’t happen nearly as much in urban areas like the South East of England—so there’s definitely room for a robot to help fill that gap. Dorothy’s brilliant design reminds us how we can solve some of our biggest challenges by looking to nature for inspiration.”[ Plantolin ]
Our novel targeted throwing end-effector is designed to seamlessly integrate with drones and mobile manipulators. It utilizes elastic energy for efficient picking, placing, and throwing of objects, offering a versatile solution for industrial and warehouse applications. By combining a physics-based model with residual learning, it achieves increased accuracy in targeted throwing, even with previously unseen objects.[ Throwing Manipulation, multimedia extension for IEEE Robotics and Automation Letters ]
Thanks, Nagamanikandan!
Control of off-road vehicles is challenging due to the complex dynamic interactions with the terrain. Accurate modeling of these interactions is important to optimize driving performance, but the relevant physical phenomena are too complex to model from first principles. Therefore, we present an offline meta-learning algorithm to construct a rapidly-tunable model of residual dynamics and disturbances. We evaluate our method outdoors on different slopes with varying slippage and actuator degradation disturbances, and compare against an adaptive controller that does not use the VFM terrain features.[ Paper ]
Thanks, Sorina!
Corvus Robotics, a provider of autonomous inventory management systems, announced an updated version of its Corvus One system that brings, for the first time, the ability to fly its drone-powered system in a lights-out distribution center without any added infrastructure like reflectors, stickers, or beacons.With obstacle detection at its core, the light-weight drone safely flies at walking speed without disrupting workflow or blocking aisles and can preventatively ascend to avoid collisions with people, forklifts, or robots, if necessary. Its advanced barcode scanning can read any barcode symbology in any orientation placed anywhere on the front of cartons or pallets.[ Corvus Robotics ]
Thanks, Jackie!
The first public walking demo of a new humanoid from Under Control Robotics.
The ability to accurately and rapidly identify key physiological signatures of injury – such as hemorrhage and airway injuries – proved key to success in the DARPA Triage Challenge Event 1. DART took the top spot in the Systems competition, while Coordinated Robotics topped the leaderboard in the Virtual competition and pulled off the win in the Data competition. All qualified teams are eligible for prizes in the Final Event. These self-funded teams won between $60,000 and $120,000 each for their first-place finishes.[ DARPA ]
The body structure of an anatomically correct tendon-driven musculoskeletal humanoid is complex. We focused on reciprocal innervation in the human nervous system, and then implemented antagonist inhibition control (AIC) based on the reflex. To verify its effectiveness, we applied AIC to the upper limb of the tendon-driven musculoskeletal humanoid, Kengoro, and succeeded in dangling for 14 minutes and doing pull-ups.
That is also how I do pull-ups.
[ Jouhou System Kougaku Laboratory, University of Tokyo ]
Thanks, Kento!
On June 5, 2024, Digit completed its first day of work for GXO Logistics, Inc. as part of regular operations. This is the result of a multi-year agreement between GXO and Agility Robotics to begin deploying Digit in GXO’s logistics operations. This agreement, which follows a proof-of-concept pilot in late 2023, is both the industry’s first formal commercial deployment of humanoid robots and the first Robots-as-a-Service (RaaS) deployment of humanoid robots.[ Agility Robotics ]
Although there is a growing demand for cooking behaviours as one of the expected tasks for robots, a series of cooking behaviours based on new recipe descriptions by robots in the real world has not yet been realised. In this study, we propose a robot system that integrates real-world executable robot cooking behaviour planning using the Large Language Model (LLM) and classical planning of PDDL descriptions, and food ingredient state recognition learning from a small number of data using the Vision-Language model (VLM).[ JSK Robotics Laboratory, University of Tokyo GitHub ]
Thanks, Naoaki!
This paper introduces a novel approach to interactive robots by leveraging the form-factor of cards to create thin robots equipped with vibrational capabilities for locomotion and haptic feedback. The system is composed of flat-shaped robots with on-device sensing and wireless control, which offer lightweight portability and scalability. Applications include augmented card playing, educational tools, and assistive technology, which showcase CARDinality’s versatility in tangible interaction.[ AxLab Actuated Experience Lab, University of Chicago ]
Azi reacts in full AI to the scripted skit it did with Ameca.
Azi uses 32 actuators, with 27 to control its silicone face, and 5 for the neck. It uses GPT-4o with a customisable personality.[ Engineered Arts ]
We are testing a system that includes robots, structural building blocks, and smart algorithms to build large-scale structures for future deep space exploration. In this video, autonomous robots worked as a team to transport material in a mock rail system and simulate a build of a tower at our Roverscape.
In the summer of 2024, HEBI’s intern Aditya Nair worked to add new use-case demos and improve the quality and consistency of the existing demos for our robotic arms! In this video you can see teach and repeat, augmented reality, gravity compensation, and an impedance-control gimbal for our robotic arms.[ HEBI Robotics ]
This video showcases cutting-edge innovations and robotic demonstrations from the Reconfigurable Robotics Lab (RRL) at EPFL. As we are closing the semester, this event brings together the exciting progress and breakthroughs made by our researchers and students over the past months. In this video, you’ll experience a collection of exciting demonstrations, featuring the latest in reconfigurable, soft, and modular robotics, aimed at tackling real-world challenges.[ EPFL Reconfigurable Robotics Lab ]
Humanoid robot companies are promising that humanoids will fast become our friends, colleagues, employees, and the backbone of our workforce. But how close are we to this reality? What are the key costs associated with operating a humanoid? Can companies deploy them profitably? Will humanoids take our jobs, and if so, what should we be doing to prepare?[ Human Robot Interaction Podcast ]
According to Web of Science, there have been 1,147,069 publications from 2003 to 2023 that fell under their category of “Computer Science, Artificial Intelligence.” During the same time period, 217,507 publications fell under their “Robotics” category, about 1/5th of the volume. On top of that, Canada’s published Science, Technology, and Innovation Priorities has AI at the top of the “Technology Advanced Canada” list, but robotics is not even listed. AI has also engaged the public’s imagination more so than robotics with “AI” dominating Google Search trends compared to “robotics.” This has us questioning: “Is AI Skyrocketing while Robotics Inches Forward?”[ Ingenuity Labs RAIS2024 Robotics Debate ]
How a Robot Is Grabbing Fuel From a Fukushima Reactor
Thirteen years after a massive earthquake and tsunami struck the Fukushima Dai-ichi nuclear power plant in northern Japan, causing a loss of power, meltdowns, and a major release of radioactive material, operator Tokyo Electric Power Co. (TEPCO) finally seems to be close to extracting the first bit of melted fuel from the complex—thanks to a special telescopic robotic device.
Despite Japan’s prowess in industrial robotics, TEPCO had no robots to deploy in the immediate aftermath of the disaster. Since then, however, robots have been used to measure radiation levels, clear building debris, and survey the exterior and interior of the plant overlooking the Pacific Ocean.
It will take decades to decommission Fukushima Dai-ichi, and one of the most dangerous, complex tasks is the removal and storage of about 880 tons of highly radioactive molten fuel in three reactor buildings that were operating when the tsunami hit. TEPCO believes mixtures of uranium, zirconium and other metals accumulated around the bottom of the primary containment vessels (PCVs) of the reactors—but the exact composition of the material is unknown. The material is “fuel debris,” which TEPCO defines as overheated fuel that has melted with fuel rods and in-vessel structures, then cooled and re-solidified. The extraction was supposed to begin in 2021 but ran into development delays and obstacles in the extraction route; the coronavirus pandemic also slowed work.
While TEPCO wants a molten fuel sample to analyze for exact composition, getting just a teaspoon of the stuff has proven so tricky that the job is years behind schedule. That may change soon as crews have deployed the telescoping device to target the 237 tons of fuel debris in Unit 2, which suffered less damage than the other reactor buildings and no hydrogen explosion, making it an easier and safer test bed.
“We plan to retrieve a small amount of fuel debris from Unit 2, analyze it to evaluate its properties and the process of its formation, and then move on to large-scale retrieval,” says Tatsuya Matoba, a spokesperson for TEPCO. “We believe that extracting as much information as possible from the retrieved fuel debris will likely contribute greatly to future decommissioning work.”
How TEPCO Plans to Retrieve a Fuel Sample
Getting to the fuel is easier said than done. Shaped like an inverted light bulb, the damaged PCV is a 33-meter-tall steel structure that houses the reactor pressure vessel where nuclear fission took place. A 2-meter-long isolation valve designed to block the release of radioactive material sits at the bottom of the PCV, and that’s where the robot will go in. The fuel debris itself is partly underwater.
Approved for use by Japan’s Nuclear Regulation Authority on 31 July, a robot arm is trying to retrieve 3 grams of the fuel debris without further contaminating the outside environment. So what exactly is this robot, and how does it work?
Mitsubishi Heavy Industries, the International Research Institute for Nuclear Decommissioning, and UK-based Veolia Nuclear Solutions developed the robot arm to enter small openings in the PCV, where it can survey the interior and grab the fuel. Mostly made of stainless steel and aluminum, the arm measures 22 meters long, weighs 4.6 tons, and moves with 18 degrees of freedom. It’s a boom-style arm, not unlike the robotic arms on the International Space Station, that rests in a sealed enclosure box when not extended.
The arm consists of four main elements: a carriage that pushes the assembly through the openings, arm links that can fold up like a ream of dot matrix printer paper, an arm that has three telescopic stages, and a “wand” (an extendable pipe-shaped component) with cameras and a gripper on its tip. Both the arm and the wand can tilt downward toward the target area.
After the assembly is pushed through the PCV’s isolation valve, it angles downward over a 7.2-meter-long rail heading toward the base of the reactor. It continues through existing openings in the pedestal, a concrete structure supporting the reactor, and the platform, which is a flat surface under the reactor.
Then, the tip is lowered on a cable, like the grabber in a claw machine, toward the debris field at the bottom of the pedestal. The gripper tool at the end of the component has two delicate pincers (only 5 square millimeters) that can pinch a small pebble of debris. The debris is transferred to a container and, if all goes well, is brought back up through the openings and placed in a glovebox: a sealed, negative-pressure container in the reactor building where initial testing can be performed. It will then be moved to a Japan Atomic Energy Agency facility in nearby Ibaraki Prefecture for detailed analysis.
The gripper was able to reach the debris field last month and grasp a piece of rubble—it’s unknown if it was actually melted fuel—but two of the four cameras on the device stopped working a few days later, and the device was eventually reeled back into the enclosure box. Crews confirmed there were no problems with the signal wiring from the control panel in the reactor building, and proceeded to perform oscilloscope testing. TEPCO speculates that radiation passing through the cameras’ semiconductor elements caused electrical charge to build up, and that the charge will drain if the cameras are left on in a relatively low-dose environment. It was the latest setback in a very long project.
“Retrieving fuel debris from Fukushima Daiichi Nuclear Power Station is an extremely difficult task, and a very important part of decommissioning,” says Matoba. “With the goal of completing the decommissioning in 30 to 40 years, we believe it is important to proceed strategically and systematically with each step of the work at hand.”
SwitchBot S10 Review: "This Is the Future of Home Robots"
I’ve been reviewing robot vacuums for more than a decade, and robot mops for just as long. It’s been astonishing how the technology has evolved, from the original iRobot Roomba bouncing off of walls and furniture to robots that use lidar and vision to map your entire house and intelligently keep it clean.
As part of this evolution, cleaning robots have become more and more hands-off, and most of them are now able to empty themselves into occasionally enormous docks with integrated vacuums and debris bags. This means that your robot can vacuum your house, empty itself, recharge, and repeat this process until the dock’s dirt bag fills up.
But this all breaks down when it comes to robots that both vacuum and mop. Mopping, which is a capability that you definitely want if you have hard floors, requires a significant amount of clean water and generates an equally significant amount of dirty water. One approach is to make docks that are even more enormous—large enough to host tanks for clean and dirty water that you have to change out on a weekly basis.
SwitchBot, a company that got its start with a stick-on robotic switch that can turn dumb things with switches into smart things, has been doing some clever work in the robotic vacuum space as well, and we’ve been taking a look at the SwitchBot S10, which hooks up to your home plumbing to autonomously manage all of its water needs. And I have to say, it works so well that it feels inevitable: this is the future of home robots.
A Massive Mopping Vacuum
The giant dock can collect debris from the robot for months, and also includes a hot air dryer for the roller mop.Evan Ackerman/IEEE Spectrum
The SwitchBot S10 is a hybrid robotic vacuum and mop that uses a Neato-style lidar system for localization and mapping. It’s also got a camera on the front to help it with obstacle avoidance. The mopping function uses a cloth-covered spinning roller that adds clean water and sucks out dirty water on every rotation. The roller lifts automatically when the robot senses that it’s about to move onto carpet. The S10 comes with a charging dock with an integrated vacuum and dust collection system, and there’s also a heated mop cleaner underneath, which is a nice touch.
I’m not going to spend a lot of time analyzing the S10’s cleaning performance. From what I can tell, it does a totally decent job vacuuming, and the mopping is particularly good thanks to the roller mop that exerts downward pressure on the floor while spinning. Just about any floor cleaning robot is going to do a respectable job with the actual floor cleaning—it’s all the other stuff, like software and interface and ease of use, that have become more important differentiators.
Home Plumbing Integration
The water dock, seen here hooked up to my toilet and sink, exchanges dirty water out of the robot and includes an option to add cleaning fluid.Evan Ackerman/IEEE Spectrum
The S10’s primary differentiator is that it integrates with your home plumbing. It does this through a secondary dock—there’s the big charging dock, which you can put anywhere, and then the much smaller water dock, which is small enough to slide underneath an average toe-kick in a kitchen.
The dock includes a pumping system that accesses clean water through a pressurized water line, and then squirts dirty water out into a drain. The best place to find this combination of fixtures is near a sink with a p-trap, and if this is already beyond the limits of your plumbing knowledge, well, that’s the real challenge with the S10. The S10 is very much not plug-and-play; to install the water dock, you should be comfortable with basic tool use and, more importantly, have some faith in the integrity of your existing plumbing.
My house was built in the early 1960s, which means that a lot of my plumbing consists of old copper with varying degrees of corrosion and mineral infestation, along with slightly younger but somewhat brittle PVC. Installing the clean water line for the dock involves temporarily shutting off the cold water line feeding a sink or a toilet—that is, turning off a valve that may not have been turned for a decade or more. This is risky, and the potential consequences of any uncontrolled water leak are severe, so know where your main water shutoff is before futzing with the dock installation.
To SwitchBot’s credit, the actual water dock installation process was very easy, thanks to a suite of connectors and adapters that come included. I installed my dock in between a toilet and a pedestal sink, with access to the toilet’s water valve for clean water and the sink’s p-trap for dirty water. The water dock is battery powered, and cleverly charges from the robot itself, so it doesn’t need a power outlet. Even so, this one spot was pretty much the only place in my entire house where the water dock could easily go: my other bathrooms have cabinet sinks, which would have meant drilling holes for the water lines, and neither of them had floor space where the dock could live without being kicked all the time. It’s not like the water dock is all that big, but it really needs to be out of the way, and it can be hard to find a compatible space.
Mediocre Mapping
With the dock set up, the next step is mapping. The mapping process with the S10 was a bit finicky. I spent a bunch of time prepping my house—that is, moving as much furniture as possible off of the floor to give the robot the best chance at making a solid map. I know this isn’t something that most people probably do for their robots, but knowing robots like I do, I figure that getting a really good map is worth the hassle in the long run.
The first mapping run completed in about 20 minutes, but the robot got “stuck” on the way back to its dock thanks to a combination of a bit of black carpet and black coffee table legs. I rescued it, but it promptly forgot its map, and I had to start again. The second time, the robot failed to map my kitchen, dining room, laundry room, and one bathroom by not going through a wide open doorway off of the living room. This was confusing, because I could see the unexplored area on the map, and I’m not sure why the robot decided to call it a day rather than investigating that pretty obvious frontier region.
SwitchBot is not terrible at mapping, but it’s definitely sub-par relative to the experiences that I’ve had with older generations of other robots. The S10 also intermittently freaked out on the black patterned carpet that I have: moving very cautiously, spinning in circles, and occasionally stopping completely while complaining about malfunctioning cliff sensors, presumably because my carpet was absorbing all of the infrared from its cliff sensors while it was trying to map.
Black carpet, terror of robots everywhere.Evan Ackerman/IEEE Spectrum
Part of my frustration here is that I feel like I should be able to tell the robot “it’s a black carpet in that spot, you’re fine,” rather than taking such drastic measures as taping over all of the cliff sensors with tin foil, which I’ve had to do on occasion. And let me tell you how overjoyed I was to discover that the S10’s map editor has that exact option. You can also segment rooms by hand, and even position furniture to give the robot a clue on what kind of obstacles to expect. What’s missing is some way of asking the robot to explore a particular area over again, which would have made the initial process a lot easier.
Would a smarter robot be able to figure out all of this stuff on its own? Sure. But robots are dumb, and being able to manually add carpets and furniture and whatnot is an incredibly useful feature. I just wish I could do that during the mapping run somehow, instead of having to spend a couple of hours getting that first map to work. Oh well.
How the SwitchBot S10 Cleans
When you ask the S10 to vacuum and mop, it leaves its charging dock and goes to the water dock. Once it docks there, it will extract any dirty water, clean its roller mop, extract the newly dirtied wash water, wash its filter, and then finally refill itself with clean water before heading off to start mopping. It may do this several times over the course of a cleaning run, depending on how much water you ask it to use, but it’s quite good at managing all of this by itself. If you would like your floor to be extra clean, you can have the robot make two passes over the same area, which it does in a crosshatch pattern. And the app helpfully clues you in to everything that the robot is doing, including real-time position.
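If it helps to see that service cycle laid out, here is a purely illustrative sketch of the sequence as I observed it; the step names are mine, not SwitchBot’s firmware or API:

```python
# Illustrative only: the S10's water-dock service cycle as an ordered list.
from enum import Enum, auto

class DockStep(Enum):
    DRAIN_DIRTY_WATER = auto()   # pull dirty water out of the robot's tank
    WASH_ROLLER_MOP = auto()     # rinse the roller mop
    DRAIN_WASH_WATER = auto()    # extract the newly dirtied wash water
    WASH_FILTER = auto()         # rinse the filter
    REFILL_CLEAN_WATER = auto()  # top up clean water, then resume mopping

for step in DockStep:
    print(f"water dock: {step.name.lower().replace('_', ' ')}")
```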
The app does an excellent job of showing where the robot has cleaned. You can also add furniture and floor types to help the robot clean better.Evan Ackerman/IEEE Spectrum
I’m pleasantly surprised by my experience with the S10 and the water dock. It was relatively easy to install and works exactly as it should. This is getting very close to the dream for robot vacuums, right? I will never have to worry about clean water tanks or dirty water tanks. The robot can mop every day if I want it to, and I don’t ever have to think about it, short of emptying the charging dock’s dustbin every few months and occasionally doing some basic robot maintenance.
SwitchBot’s Future
Being able to access water on demand for mopping is pretty great, but the S10’s water dock is about more than that. SwitchBot already has plans for a humidifier and dehumidifier, which can be filled and emptied with the S10 acting as a water shuttle. And the dehumidifier can even pull water out of the air, and then the S10 can use that water to mop, which is pretty cool. I can think of two other applications for a water shuttle that are immediately obvious: pets and plants.
SwitchBot is already planning for more ways of using the S10’s water transporting capability.SwitchBot
What about a water bowl for your pets that you can put anywhere in your house, and it’s always full of fresh water, thanks to a robot that not only tops the water off, but changes it completely? Or a little plant-sized dock that lives on the floor with a tube up to the pot of your leafy friend for some botanical thirst quenching? Heck, I have an entire fleet of robotic gardens that would love to be tended by a mobile water delivery system.
SwitchBot is not the only company to offer plumbing integration for home robots. Narwal and Roborock also have options for plumbing add-on kits to their existing docks, although they seem to be designed more for European or Asian homes, where home plumbing tends to be designed a bit differently. And besides the added complication of systems like these, you’ll pay a premium for them: the SwitchBot S10 can cost as much as $1,200, although it’s frequently on sale for less. As with all new features for floor-care robots, though, you can expect the price to drop precipitously over the next several years as new features become standard, and I hope plumbing integration gets there soon, because I’m sold.
SwitchBot S10 Review: "This Is the Future of Home Robots"
I’ve been reviewing robot vacuums for more than a decade, and robot mops for just as long. It’s been astonishing how the technology has evolved, from the original iRobot Roomba bouncing off of walls and furniture to robots that use lidar and vision to map your entire house and intelligently keep it clean.
As part of this evolution, cleaning robots have become more and more hands-off, and most of them are now able to empty themselves into occasionally enormous docks with integrated vacuums and debris bags. This means that your robot can vacuum your house, empty itself, recharge, and repeat this process until the dock’s dirt bag fills up.
But this all breaks down when it comes to robots that both vacuum and mop. Mopping, which is a capability that you definitely want if you have hard floors, requires a significant amount of clean water and generates an equally significant amount of dirty water. One approach is to make docks that are even more enormous—large enough to host tanks for clean and dirty water that you have to change out on a weekly basis.
SwitchBot, a company that got its start with a stick-on robotic switch that can make dumb things with switches into smart things, has been doing some clever things in the robotic vacuum space as well, and we’ve been taking a look at the SwitchBot S10, which hooks up to your home plumbing to autonomously manage all of its water needs. And I have to say, it works so well that it feels inevitable: this is the future of home robots.
A Massive Mopping VacuumThe giant dock can collect debris from the robot for months, and also includes a hot air dryer for the roller mop.Evan Ackerman/IEEE Spectrum
The SwitchBot S10 is a hybrid robotic vacuum and mop that uses a Neato-style lidar system for localization and mapping. It’s also got a camera on the front to help it with obstacle avoidance. The mopping function uses a cloth-covered spinning roller that adds clean water and sucks out dirty water on every rotation. The roller lifts automatically when the robot senses that it’s about to move onto carpet. The S10 comes with a charging dock with an integrated vacuum and dust collection system, and there’s also a heated mop cleaner underneath, which is a nice touch.
I’m not going to spend a lot of time analyzing the S10’s cleaning performance. From what I can tell, it does a totally decent job vacuuming, and the mopping is particularly good thanks to the roller mop that exerts downward pressure on the floor while spinning. Just about any floor cleaning robot is going to do a respectable job with the actual floor cleaning—it’s all the other stuff, like software and interface and ease of use, that have become more important differentiators.
Home Plumbing IntegrationThe water dock, seen here hooked up to my toilet and sink, exchanges dirty water out of the robot and includes an option to add cleaning fluid.Evan Ackerman/IEEE Spectrum
The S10’s primary differentiator is that it integrates with your home plumbing. It does this through a secondary dock—there’s the big charging dock, which you can put anywhere, and then the much smaller water dock, which is small enough to slide underneath an average toe-kick in a kitchen.
The dock includes a pumping system that accesses clean water through a pressurized water line, and then squirts dirty water out into a drain. The best place to find this combination of fixtures is near a sink with a p-trap, and if this is already beyond the limits of your plumbing knowledge, well, that’s the real challenge with the S10. The S10 is very much not plug-and-play; to install the water dock, you should be comfortable with basic tool use and, more importantly, have some faith in the integrity of your existing plumbing.
My house was built in the early 1960s, which means that a lot of my plumbing consists of old copper with varying degrees of corrosion and mineral infestation, along with slightly younger but somewhat brittle PVC. Installing the clean water line for the dock involves temporarily shutting off the cold water line feeding a sink or a toilet—that is, turning off a valve that may not have been turned for a decade or more. This is risky, and the potential consequences of any uncontrolled water leak are severe, so know where your main water shutoff is before futzing with the dock installation.
To SwitchBot’s credit, the actual water dock installation process was very easy, thanks to a suite of connectors and adapters that come included. I installed my dock in between a toilet and a pedestal sink, with access to the toilet’s water valve for clean water and the sink’s p-trap for dirty water. The water dock is battery powered, and cleverly charges from the robot itself, so it doesn’t need a power outlet. Even so, this one spot was pretty much the only place in my entire house where the water dock could easily go: my other bathrooms have cabinet sinks, which would have meant drilling holes for the water lines, and neither of them had floor space where the dock could live without being kicked all the time. It’s not like the water dock is all that big, but it really needs to be out of the way, and it can be hard to find a compatible space.
Mediocre MappingWith the dock set up, the next step is mapping. The mapping process with the S10 was a bit finicky. I spent a bunch of time prepping my house—that is, moving as much furniture as possible off of the floor to give the robot the best chance at making a solid map. I know this isn’t something that most people probably do for their robots, but knowing robots like I do, I figure that getting a really good map is worth the hassle in the long run.
The first mapping run completed in about 20 minutes, but the robot got “stuck” on the way back to its dock thanks to a combination of a bit of black carpet and black coffee table legs. I rescued it, but it promptly forgot its map, and I had to start again. The second time, the robot failed to map my kitchen, dining room, laundry room, and one bathroom by not going through a wide open doorway off of the living room. This was confusing, because I could see the unexplored area on the map, and I’m not sure why the robot decided to call it a day rather than investigating that pretty obvious frontier region.
SwitchBot is not terrible at mapping, but it’s definitely sub-par relative to the experiences that I’ve had with older generations of other robots. The S10 also intermittently freaked out on the black patterned carpet that I have: moving very cautiously, spinning in circles, and occasionally stopping completely while complaining about malfunctioning cliff sensors, presumably because my carpet was absorbing all of the infrared from its cliff sensors while it was trying to map.
Black carpet, terror of robots everywhere.Evan Ackerman/IEEE Spectrum
Part of my frustration here is that I feel like I should be able to tell the robot “it’s a black carpet in that spot, you’re fine,” rather than taking such drastic measures as taping over all of the cliff sensors with tin foil, which I’ve had to do on occasion. And let me tell you how overjoyed I was to discover that the S10’s map editor has that exact option. You can also segment rooms by hand, and even position furniture to give the robot a clue on what kind of obstacles to expect. What’s missing is some way of asking the robot to explore a particular area over again, which would have made the initial process a lot easier.
Would a smarter robot be able to figure out all of this stuff on its own? Sure. But robots are dumb, and being able to manually add carpets and furniture and whatnot is an incredibly useful feature, I just wish I could do that during the mapping run somehow instead of having to spend a couple of hours getting that first map to work. Oh well.
How the SwitchBot S10 CleansWhen you ask the S10 to vacuum and mop, it leaves its charging dock and goes to the water dock. Once it docks there, it will extract any dirty water, clean its roller mop, extract the dirty water, wash its filter, and then finally refill itself with clean water before heading off to start mopping. It may do this several times over the course of a cleaning run, depending on how much water you ask it to use, but it’s quite good at managing all of this by itself. If you would like your floor to be extra clean, you can have the robot make two passes over the same area, which it does in a crosshatch pattern. And the app helpfully clues you in to everything that the robot is doing, including real-time position.
The app does and excellent job of showing where the robot has cleaned. You can also add furniture and floor types to help the robot clean better.Evan Ackerman/IEEE Spectrum
I’m pleasantly surprised by my experience with the S10 and the water dock. It was relatively easy to install and works exactly as it should. This is getting very close to the dream for robot vacuums, right? I will never have to worry about clean water tanks or dirty water tanks. The robot can mop every day if I want it to, and I don’t ever have to think about it, short of emptying the charging dock’s dustbin every few months and occasionally doing some basic robot maintenance.
SwitchBot’s Future
Being able to access water on demand for mopping is pretty great, but the S10’s water dock is about more than that. SwitchBot already has plans for a humidifier and dehumidifier, which can be filled and emptied with the S10 acting as a water shuttle. And the dehumidifier can even pull water out of the air for the S10 to mop with, which is pretty cool. I can think of two other immediately obvious applications for a water shuttle: pets and plants.
SwitchBot is already planning for more ways of using the S10’s water transporting capability.SwitchBot
What about a water bowl for your pets that you can put anywhere in your house, and it’s always full of fresh water, thanks to a robot that not only tops the water off, but changes it completely? Or a little plant-sized dock that lives on the floor with a tube up to the pot of your leafy friend for some botanical thirst quenching? Heck, I have an entire fleet of robotic gardens that would love to be tended by a mobile water delivery system.
SwitchBot is not the only company to offer plumbing integration for home robots. Narwal and Roborock also offer plumbing add-on kits for their existing docks, although those seem to be designed more for European or Asian homes, where residential plumbing tends to be laid out a bit differently. And beyond the added complication of systems like these, you’ll pay a premium: the SwitchBot S10 can cost as much as $1,200, although it’s frequently on sale for less. As with all new features for floor-care robots, though, you can expect the price to drop precipitously over the next several years as new features become standard, and I hope plumbing integration gets there soon, because I’m sold.
Video Friday: Quadruped Ladder Climbing
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Not even ladders can keep you safe from quadruped robots anymore.
[ ETH Zürich Robot Systems Lab ]
Introducing Azi (right), the new desktop robot from Engineered Arts Ltd. Azi and Ameca are having a little chat, demonstrating their wide range of expressive capabilities. Engineered Arts desktop robots feature 32 actuators, 27 for facial control alone, and 5 for the neck. They include AI conversational ability including GPT-4o support which makes them great robotic companions.[ Engineered Arts ]
Quadruped robots that individual researchers can build by themselves are crucial for expanding the scope of research due to their high scalability and customizability. In this study, we develop a metal quadruped robot, MEVIUS, that can be constructed and assembled using only materials ordered through e-commerce. We have considered the minimum set of components required for a quadruped robot, employing metal machining, sheet metal welding, and off-the-shelf components only.[ MEVIUS from JSK Robotics Laboratory ]
Thanks, Kento!
Avian perching maneuvers are one of the most frequent and agile flight scenarios, where highly optimized flight trajectories, produced by rapid wing and tail morphing that generate high angular rates and accelerations, reduce kinetic energy at impact. Here, we use optimal control methods on an avian-inspired drone with morphing wing and tail to test a recent hypothesis derived from perching maneuver experiments of Harris’ hawks that birds minimize the distance flown at high angles of attack to dissipate kinetic energy before impact.[ EPFL Laboratory of Intelligent Systems ]
The earliest signs of bearing failures are inaudible to you, but not to Spot. Introducing acoustic vibration sensing: automate ultrasonic inspections of rotating equipment to keep your factory humming. The only thing I want to know is whether Spot is programmed to actually do that cute little tilt when using its acoustic sensors.
[ Boston Dynamics ]
Hear from Jonathan Hurst, our co-founder and Chief Robot Officer, why legs are ideally suited for Digit’s work.[ Agility Robotics ]
I don’t think “IP67” really does this justice.
[ ANYbotics ]
This paper presents a teleoperation system with floating robotic arms that traverse parallel cables to perform long-distance manipulation. The system benefits from the cable-based infrastructure, which is easy to set up and cost-effective, with an expandable workspace range.[ EPFL ]
It seems to be just renderings for now, but here’s the next version of Fourier’s humanoid.
[ Fourier ]
Happy Oktoberfest from Dino Robotics!
[ Dino Robotics ]
This paper introduces a learning-based low-level controller for quadcopters, which adaptively controls quadcopters with significant variations in mass, size, and actuator capabilities. Our approach leverages a combination of imitation learning and reinforcement learning, creating a fast-adapting and general control framework for quadcopters that eliminates the need for precise model estimation or manual tuning.[ HiPeR Lab ]
Parkour poses a significant challenge for legged robots, requiring navigation through complex environments with agility and precision based on limited sensory inputs. In this work, we introduce a novel method for training end-to-end visual policies, from depth pixels to robot control commands, to achieve agile and safe quadruped locomotion.[ SoloParkour ]