Feed aggregator

Disabled people are often involved in robotics research as potential users of technologies which address specific needs. However, their more generalised lived expertise is not usually included when planning the overall design trajectory of robots for health and social care purposes. This risks losing valuable insight into the lived experience of disabled people, and impinges on their right to be involved in the shaping of their future care. This project draws upon the expertise of an interdisciplinary team to explore methodologies for involving people with disabilities in the early design of care robots in a way that enables incorporation of their broader values, experiences and expectations. We developed a comparative set of focus group workshops using Community Philosophy, LEGO® Serious Play® and Design Thinking to explore how people with a range of different physical impairments used these techniques to envision a “useful robot”. The outputs were then workshopped with a group of roboticists and designers to explore how they interacted with the thematic map produced. Through this process, we aimed to understand how people living with disability think robots might improve their lives and consider new ways of bringing the fullness of lived experience into earlier stages of robot design. Secondary aims were to assess whether and how co-creative methodologies might produce actionable information for designers (or why not), and to deepen the exchange of social scientific and technical knowledge about feasible trajectories for robotics in health-social care. Our analysis indicated that using these methods in a sequential process of workshops with disabled people and incorporating engineers and other stakeholders at the Design Thinking stage could potentially produce technologically actionable results to inform follow-on proposals.

In the study of collective animal behavior, researchers usually rely on gathering empirical data from animals in the wild. While the data gathered can be highly accurate, researchers have limited control over both the test environment and the agents under study. Further aggravating the data gathering problem is the fact that empirical studies of animal groups typically involve a large number of conspecifics. In these groups, collective dynamics may occur over long periods of time interspersed with excessively rapid events such as collective evasive maneuvers following a predator’s attack. All these factors stress the steep challenges faced by biologists seeking to uncover the fundamental mechanisms and functions of social organization in a given taxon. Here, we argue that beyond commonly used simulations, experiments with multi-robot systems offer a powerful toolkit to deepen our understanding of various forms of swarming and other social animal organizations. Indeed, the advances in multi-robot systems and swarm robotics over the past decade pave the way for the development of a new hybrid form of scientific investigation of social organization in biology. We believe that by fostering such interdisciplinary research, a feedback loop can be created where agent behaviors designed and tested in robotico can assist in identifying hypotheses worth being validated through the observation of animal collectives in nature. In turn, these observations can be used as a novel source of inspiration for even more innovative behaviors in engineered systems, thereby perpetuating the feedback loop.

This work describes the design of real-time dance-based interaction with a humanoid robot, where the robot seeks to promote physical activity in children by taking on multiple roles as a dance partner. It acts as a leader by initiating dances but can also act as a follower by mimicking a child’s dance movements. Dances in the leader role are produced by a sequence-to-sequence (S2S) Long Short-Term Memory (LSTM) network trained on children’s music videos taken from YouTube. In the follower mode, a music orchestration platform generates background music as the robot mimics the child’s poses. In doing so, we also incorporated the largely unexplored paradigm of learning-by-teaching by including multiple robot roles that allow the child to both learn from and teach the robot. Our work is among the first to implement a largely autonomous, real-time, full-body dance interaction with a bipedal humanoid robot that also explores the impact of the robot roles on child engagement. Importantly, we also incorporated into our design formal constructs from autism therapy, such as the least-to-most prompting hierarchy, reinforcement of positive behaviors, and a time delay for making behavioral observations. We implemented a multimodal child engagement model that encompasses both affective engagement (displayed through eye gaze focus and facial expressions) and task engagement (determined by the level of physical activity) to determine child engagement states. We then conducted a virtual exploratory user study to evaluate the impact of mixed robot roles on user engagement and found no statistically significant difference in the children’s engagement between single-role and multiple-role interactions. While the children were observed to respond positively to both robot behaviors, they preferred the music-driven leader role over the movement-driven follower role, a result that can partly be attributed to the virtual nature of the study. Our findings support the utility of such a platform for practicing physical activity but indicate that further research is necessary to fully explore the impact of each robot role.
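As a rough illustration of the leader-mode generator described above, here is a minimal sketch, assuming PyTorch and a flattened 2D-keypoint pose representation (17 joints and all sizes chosen arbitrarily), of a sequence-to-sequence LSTM that maps an observed pose window to a generated continuation. It is an illustrative sketch of the general architecture, not the authors’ network.

```python
# Illustrative sketch (not the paper's model): a seq2seq LSTM pose generator.
import torch
import torch.nn as nn

class PoseSeq2Seq(nn.Module):
    def __init__(self, n_joints=17, pose_dim=2, hidden=128):
        super().__init__()
        d = n_joints * pose_dim                       # flattened pose vector size
        self.encoder = nn.LSTM(d, hidden, batch_first=True)
        self.decoder = nn.LSTM(d, hidden, batch_first=True)
        self.out = nn.Linear(hidden, d)

    def forward(self, src, tgt_len=30):
        # src: (batch, src_len, n_joints * pose_dim) poses extracted from video
        _, state = self.encoder(src)
        step = src[:, -1:, :]                         # seed decoder with last observed pose
        preds = []
        for _ in range(tgt_len):
            dec_out, state = self.decoder(step, state)
            step = self.out(dec_out)                  # next predicted pose frame
            preds.append(step)
        return torch.cat(preds, dim=1)                # (batch, tgt_len, pose vector)

model = PoseSeq2Seq()
demo = torch.randn(1, 60, 17 * 2)                     # one 60-frame observed pose sequence
print(model(demo, tgt_len=30).shape)                  # torch.Size([1, 30, 34])
```

In the study’s setting, training pairs would presumably come from pose sequences extracted from the YouTube music videos, with generated frames then retargeted to the humanoid’s joint space.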



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4 November–5 November 2022, LOS ANGELES
CoRL 2022: 14 December–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

The secret to making a robot is to pick one thing and do it really, really well. And then make it smaller and cheaper and cuter!

Not sure how much Baby Clappy is going to cost quite yet, but listen for it next year.

[ Baby Clappy ]

Digit is capable of navigating a wide variety of challenging terrain. Robust dynamic stability paired with advanced perception capabilities enables Digit to maneuver through a logistics warehouse environment or even a stretch of trail in the woods. Today Digit took a hike in our own back yard, along the famous Pacific Crest Trail.

[ Agility Robotics ]

Match of Tech United versus the ladies from Vitória SC during the European RoboCup 2022 in Guimarães, Portugal. Note that the ladies intentionally tied against our robots, so we could end the game in penalties.

[ Tech United ]

Franka Production 3 is a force-sensitive robot platform made in Germany, an industrial system that ignites productivity for everyone who needs industrial robotics automation.

[ Franka ]

David demonstrates advanced manipulation skills with the 7-DoF arm and fully articulated 5-finger hand using a pipette. To localize the object, we combine multi-object tracking with proprioceptive measurements. Together with path planning, this allows for controlled in-hand manipulation.

[ DLR RMC ]

DEEP Robotics has signed a strategic agreement with the Huzhou Institute of Zhejiang University to cooperate on further research exploring various possibilities in drones and quadruped robots.

[ Deep Robotics ]

Have you ever wondered if that over-the-counter pill you took an hour ago is helping to relieve your headache? With NSF's support, a team of Stanford University mechanical engineers has found a way to target drug delivery…to better attack that headache. Meet the millirobots. These finger-sized, wireless, origami-inspired, amphibious robots could become medicine's future lifesavers.

[ Zhao Lab ]

Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks. The strategy called Bayesian Learning IN the Dark—BLIND, for short—is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

[ Rice ]



In the puzzle of climate change, Earth’s oceans are an immense and crucial piece. The oceans act as an enormous reservoir of both heat and carbon dioxide, the most abundant greenhouse gas. But gathering accurate and sufficient data about the oceans to feed climate and weather models has been a huge technical challenge.

Over the years, though, a basic picture of ocean heating patterns has emerged. The sun’s infrared, visible-light, and ultraviolet radiation warms the oceans, with the heat absorbed particularly in Earth’s lower latitudes and in the eastern areas of the vast ocean basins. Thanks to wind-driven currents and large-scale patterns of circulation, the heat is generally driven westward and toward the poles, being lost as it escapes to the atmosphere and space.

This heat loss comes mainly from a combination of evaporation and reradiation into space. This oceanic heat movement helps make Earth habitable by smoothing out local and seasonal temperature extremes. But the transport of heat in the oceans and its eventual loss upward are affected by many factors, such as the ability of the currents and wind to mix and churn, driving heat down into the ocean. The upshot is that no model of climate change can be accurate unless it accounts for these complicating processes in a detailed way. And that’s a fiendish challenge, not least because Earth’s five great oceans occupy 140 million square miles, or 71 percent of the planet’s surface.

“We can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.”
—Susan Wijffels

Providing such detail is the purpose of the Argo program, run by an international consortium involving 30 nations. The group operates a global fleet of some 4,000 undersea robotic craft scattered throughout the world’s oceans. The vessels are called “floats,” though they spend nearly all of their time underwater, diving thousands of meters while making measurements of temperature and salinity. Drifting with ocean currents, the floats surface every 10 days or so to transmit their information to data centers in Brest, France, and Monterey, Calif. The data is then made available to researchers and weather forecasters all over the world.

The Argo system, which produces more than 100,000 salinity and temperature profiles per year, is a huge improvement over traditional methods, which depended on measurements made from ships or with buoys. The remarkable technology of these floats and the systems technology that was created to operate them as a network was recognized this past May with the IEEE Corporate Innovation Award, at the 2022 Vision, Innovation, and Challenges Summit. Now, as Argo unveils an ambitious proposal to increase the number of floats to 4,700 and increase their capabilities, IEEE Spectrum spoke with Susan Wijffels, senior scientist at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and cochair of the Argo steering committee.


Why do we need a vast network like Argo to help us understand how Earth’s climate is changing?

Susan Wijffels: Well, the reason is that the ocean is a key player in Earth’s climate system. So, we know that, for instance, our average climate is really, really dependent on the ocean. But actually, how the climate varies and changes, beyond about a two-to-three-week time scale, is highly controlled by the ocean. And so, in a way, you can think that the future of climate—the future of Earth—is going to be determined partly by what we do, but also by how the ocean responds.

Susan Wijffels

Aren’t satellites already making these kind of measurements?

Wijffels: The satellite observing system, a wonderful constellation of satellites run by many nations, is very important. But they only measure the very, very top of the ocean. They penetrate a couple of meters at the most. Most are only really seeing what’s happening in the upper few millimeters of the ocean. And yet, the ocean itself is very deep, 5, 6 kilometers deep, around the world. And it’s what’s happening in the deep ocean that is critical, because things are changing in the ocean. It’s getting warmer, but not uniformly warm. There’s a rich structure to that warming, and that all matters for what’s going to happen in the future.

How was this sort of oceanographic data collected historically, before Argo?

Wijffels: Before Argo, the main way we had of getting subsurface information, particularly things like salinity, was to measure it from ships, which you can imagine is quite expensive. These are research vessels that are very expensive to operate, and you need to have teams of scientists aboard. They’re running very sensitive instrumentation. And they would simply prepare a package and lower it down the side into the ocean. And to do a 2,000-meter profile, it would maybe take a couple of hours. To go to the seafloor, it can take 6 hours or so.

The ships really are wonderful. We need them to measure all kinds of things. But to get the global coverage we’re talking about, it’s just prohibitive. In fact, there are not enough research vessels in the world to do this. And so, that’s why we needed to try and exploit robotics to solve this problem.


Pick a typical Argo float and tell us something about it, a day in the life of an Argo float or a week in the life. How deep is this float typically, and how often does it transmit data?

Wijffels: They spend 90 percent of their time at 1,000 meters below the surface of the ocean—an environment where it’s dark and it’s cold. A float will drift there for about nine and a half days. Then it will make itself a little bit smaller in volume, which increases its density relative to the seawater around it. That allows it to then sink down to 2,000 meters. Once there, it will halt its downward trajectory, and switch on its sensor package. Once it has collected the intended complement of data, it expands, lowering its density. As the then lighter-than-water automaton floats back up toward the surface, it takes a series of measurements in a single column. And then, once they reach the sea surface, they transmit that profile back to us via a satellite system. And we also get a location for that profile through the global positioning system satellite network. Most Argo floats at sea right now are measuring temperature and salinity at a pretty high accuracy level.

How big is a typical data transmission, and where does it go?

Wijffels: The data is not very big at all. It’s highly compressed. It’s only about 20 or 30 kilobytes, and it goes through the Iridium network now for most of the float array. That data then comes ashore from the satellite system to your national data centers. It gets encoded and checked, and then it gets sent out immediately. It gets logged onto the Internet at a global data assembly center, but it also gets sent immediately to all the operational forecasting centers in the world. So the data is shared freely, within 24 hours, with everyone that wants to get hold of it.

This visualization shows some 3,800 of Argo’s floats scattered across the globe. Argo Program

You have 4,000 of these floats now spread throughout the world. Is that enough to do what your scientists need to do?

Wijffels: Currently, the 4,000 we have is a legacy of our first design of Argo, which was conceived in 1998. And at that time, our floats couldn’t operate in the sea-ice zones and couldn’t operate very well in enclosed seas. And so, originally, we designed the global array to be 3,000 floats; that was to kind of track what I think of as the slow background changes. These are changes happening across 1,000 kilometers in around three months—sort of the slow manifold of what’s happening to subsurface ocean temperature and salinity.

So, that’s what that design is for. But now, we have successfully piloted floats in the polar oceans and the seasonal sea-ice zones. So we know we can operate them there. And we also know now that there are some special areas like the equatorial oceans where we might need higher densities [of floats]. And so, we have a new design. And for that new design, we need to get about 4,700 operating floats into the water.

But we’re just starting now to really go to governments and ask them to provide the funds to expand the fleet. And part of the new design calls for floats to go deeper. Most of our floats in operation right now go only as deep as about 2,000 meters. But we now can build floats that can withstand the oceans’ rigors down to depths of 6,000 meters. And so, we want to build and sustain an array of about 1,200 deep-profiling floats, with an additional 1,000 of the newly built units capable of tracking the oceans’ geochemistry. But this is new. These are big, new missions for the Argo infrastructure that we’re just starting to try and build up. We’ve done a lot of the piloting work; we’ve done a lot of the preparation. But now, we need to find sustained funding to implement that.

A new generation of deep-diving Argo floats can reach a depth of 6,000 meters. A spherical glass housing protects the electronics inside from the enormous pressure at that depth. MRV Systems/Argo Program

What is the cost of a typical float?

Wijffels: A typical core float, which just measures temperature and salinity and operates to 2,000 meters, costs between US $20,000 and $30,000, depending on the country. But they each last five to seven years. And so, the cost per profile that we get, which is what really matters for us, is very low—particularly compared with other methods [of acquiring the same data].
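[Editor’s note: Rough arithmetic from the figures above: a float that surfaces every 10 days returns about 36 profiles per year, or roughly 180 to 250 profiles over a five-to-seven-year lifetime, putting the hardware cost somewhere on the order of $80 to $165 per profile.]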


What kind of insights can we get from tracking heat and salinity and how they’re changing across Earth’s oceans?

Wijffels: There are so many things I could talk about, so many amazing discoveries that have come from the Argo data stream. There’s more than a paper a day that comes out using Argo. And that’s probably a conservative view. But I mean, one of the most important things we need to measure is how the ocean is warming. So, as the Earth system warms, most of that extra heat is actually being trapped in the ocean. Now, it’s a good thing that that heat is taken up and sequestered by the ocean, because it makes the rate of surface temperature change slower. But as it takes up that heat, the ocean expands. So, that’s actually driving sea-level rise. The ocean is pumping heat into the polar regions, which is causing both sea-ice and ice-sheet melt. And we know it’s starting to change regional weather patterns as well. With all that in mind, tracking where that heat is, and how the ocean circulation is moving it around, is really, really important for understanding both what's happening now to our climate system and what's going to happen to it in the future.

What has Argo’s data told us about how ocean temperatures have changed over the past 20 years? Are there certain oceans getting warmer? Are there certain parts of oceans getting warmer and others getting colder?

Wijffels: The signal in the deep ocean is very small. It’s a fraction, a hundredth of a degree, really. But we have very high precision instruments on Argo. The warming signal came out very quickly in the Argo data sets when averaged across the global ocean. If you measure in a specific place, say a time series at a site, there's a lot of noise there because the ocean circulation is turbulent, and it can move heat around from place to place. So, any given year, the ocean can be warm, and then it can be cool…that’s just a kind of a lateral shifting of the signal.

“We have discovered through Argo new current systems that we knew nothing about....There’s just been a revolution in our ability to make discoveries and understand how the ocean works.”
—Susan Wijffels

But when you measure globally and monitor the global average over time, the warming signal becomes very, very apparent. And so, as we’ve seen from past data—and Argo reinforces this—the oceans are warming faster at the surface than at their depths. And that’s because the ocean takes a while to draw the heat down. We see the Southern Hemisphere warming faster than the Northern Hemisphere. And there’s a lot of work that’s going on around that. The discrepancy is partly due to things like aerosol pollution in the Northern Hemisphere’s atmosphere, which actually has a cooling effect on our climate.

But some of it has to do with how the winds are changing. Which brings me to another really amazing thing about Argo: We’ve had a lot of discussion in our community about hiatuses or slowdowns of global warming. And that’s because of the surface temperature, which is the metric that a lot of people use. The oceans have a big effect on the global average surface temperature estimates because the oceans comprise the majority of Earth’s surface area. And we see that the surface temperature can peak when there’s a big El Niño–Southern Oscillation event. That’s because, in the Pacific, a whole bunch of heat from the subsurface [about 200 or 300 meters below the surface] suddenly becomes exposed to the surface. [Editor’s note: The El Niño–Southern Oscillation is a recurring, large-scale variation in sea-surface temperatures and wind patterns over the tropical eastern Pacific Ocean.]

What we see is this kind of chaotic natural phenomena, such as the El Niño–Southern Oscillation. It just transfers heat vertically in the ocean. And if you measure vertically through the El Niño or the tropical Pacific, that all cancels out. And so, the actual change in the amount of heat in the ocean doesn’t see those hiatuses that appear in surface measurements. It’s just a staircase. And we can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.

Argo was obviously designed and established for research into climate change, but so many large scientific instruments turn out to be useful for scientific questions other than the ones they were designed for. Is that the case with Argo?

Wijffels: Absolutely. Climate change is just one of the questions Argo was designed to address. It’s really being used now to study nearly all aspects of the ocean, from ocean mixing to just mapping out what the deep circulation, the currents in the deep ocean, look like. We now have very detailed maps of the surface of the ocean from the satellites we talked about, but understanding what the currents are in the deep ocean is actually very, very difficult. This is particularly true of the slow currents, not the turbulence, which is everywhere in the ocean like it is in the atmosphere. But now, we can do that using Argo because Argo gives us a map of the sort of pressure field. And from the pressure field, we can infer the currents. We have discovered through Argo new current systems that we knew nothing about. People are using this knowledge to study the ocean eddy field and how it moves heat around the ocean.

People have also made lots of discoveries about salinity; how salinity affects ocean currents and how it is reflecting what’s happening in our atmosphere. There’s just been a revolution in our ability to make discoveries and understand how the ocean works.

During a typical 10-day cycle, an Argo float spends most of its time drifting at a depth of 1,000 meters, then descends to 2,000 meters and takes readings while ascending to the surface, where it transmits its data via a satellite network. Argo Program

As you pointed out earlier, the signal from the deep ocean is very subtle, and it’s a very small signal. So, naturally, that would prompt an engineer to ask, “How accurate are these measurements, and how do you know that they’re that accurate?”

Wijffels: So, at the inception of the program, we put a lot of resources into a really good data-management and quality-assurance system. That’s the Argo Data Management system, which broke new ground for oceanography. And so, part of that innovation is that we have, in every nation that deploys floats, expert teams that look at the data. When the data is about a year old, they look at that data, and they assess it in the context of nearby ship data, which is usually the gold standard in terms of accuracy. And so, when a float is deployed, we know the sensors are routinely calibrated. And so, if we compare a freshly calibrated float’s profile with an old one that might be six or seven years old, we can make important comparisons. What’s more, some of the satellites that Argo is designed to work with also give us ability to check whether the float sensors are working properly.

And through the history of Argo, we have had issues. But we’ve tackled them head on. We have had issues that originated in the factories producing the sensors. Sometimes, we’ve halted deployments for years while we waited for a particular problem to be fixed. Furthermore, we try and be as vigilant as we can and use whatever information we have around every float record to ensure that it makes sense. We want to make sure that there’s not a big bias, and that our measurements are accurate.


You mentioned earlier there’s a new generation of floats capable of diving to an astounding 6,000 meters. I imagine that as new technology becomes available, your scientists and engineers are looking at this and incorporating it. Tell us how advances in technology are improving your program.

Wijffels: [There are] three big, new things that we want to do with Argo and that we’ve proven we can do now through regional pilots. The first one, as you mentioned, is to go deep. And so that meant reengineering the float itself so that it could withstand and operate under really high pressure. And there are two strategies to that. One is to stay with an aluminum hull but make it thicker. Floats with that design can go to about 4,000 meters. The other strategy was to move to a glass housing. So the float goes from a metal cylinder to a glass sphere. And glass spheres have been used in ocean science for a long time because they’re extremely pressure resistant. So, glass floats can go to those really deep depths, right to the seafloor of most of the global ocean.

The game changer is a set of sensors that are sensitive and accurate enough to measure the tiny climate-change signals that we’re looking for in the deep ocean. And so that requires an extra level of care in building those sensors and a higher level of calibration. And so we’re working with sensor manufacturers to develop and prove calibration methods with tighter tolerances and ways of building these sensors with greater reliability. And as we prove that out, we go to sea on research vessels, we take the same sensors that were in our shipboard systems, and compare them with the ones that we’re deploying on the profiling floats. So, we have to go through a whole development cycle to prove that these work before we certify them for global implementation.

You mentioned batteries. Are batteries what is ultimately the limit on lifetime? I mean, I imagine you can’t recharge a battery that’s 2,000 meters down.

Wijffels: You’re absolutely right. Batteries are one of the key limitations for floats right now as regards their lifetime, and what they’re capable of. If there were a leap in battery technology, we could do a lot more with the floats. We could maybe collect data profiles faster. We could add many more extra sensors.

So, battery power and energy management is a big, important aspect of what we do. And in fact, the way that we task the floats has been a problem, particularly with lithium batteries, because the floats spend about 90 percent of their time sitting in the cold and not doing very much. During their drift phase, we sometimes turn them on to take some measurements. But still, they don’t do very much. They don’t use their buoyancy engines. This is the engine that changes the volume of the float.

And what we’ve learned is that these batteries can passivate. And so, we might think we’ve loaded a certain number of watts onto the float, but we never achieved the rated power level because of this passivation problem. But we’ve found different kinds of batteries that really sidestep that passivation problem. So, yes, batteries have been one thing that we’ve had to figure out so that energy is not a limiting factor in float operation.



Tactile sensing for robotics is achieved through a variety of mechanisms, including magnetic, optical-tactile, and conductive-fluid sensing. Currently, fluid-based sensors strike the right balance between anthropomorphic sizes and shapes and accuracy of tactile response measurement. However, this design is plagued by a low signal-to-noise ratio (SNR), because the fluid-based sensing mechanism “damps” the measurements in ways that are hard to model. To this end, we present a spatio-temporal gradient representation of the data obtained from fluid-based tactile sensors, inspired by neuromorphic principles of event-based sensing. We present a novel algorithm (GradTac) that converts discrete data points from spatial tactile sensors into spatio-temporal surfaces and tracks tactile contours across these surfaces. Processing the tactile data in the proposed spatio-temporal domain is robust, makes it less susceptible to the inherent noise of fluid-based sensors, and allows more accurate tracking of regions of touch than using the raw data. We successfully evaluate and demonstrate the efficacy of GradTac in many real-world experiments performed using the Shadow Dexterous Hand equipped with BioTac SP sensors. Specifically, we use it for tracking tactile input across the sensor’s surface, measuring relative forces, detecting linear and rotational slip, and tracking edges. We also release an accompanying task-agnostic dataset for the BioTac SP, which we hope will provide a resource for comparing and quantifying novel approaches and motivate further research.
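To make the representation concrete, here is a minimal sketch, assuming a small rectangular taxel grid and NumPy, of computing spatio-temporal gradients over raw tactile readings and following the contact peak across frames. It illustrates the general idea only and is not the GradTac implementation; the grid size and threshold are arbitrary.

```python
# Illustrative sketch (not GradTac): spatio-temporal gradients over taxel data.
import numpy as np

def spatiotemporal_gradients(readings):
    """readings: (T, H, W) array of per-taxel pressure values over T time steps."""
    dt = np.gradient(readings, axis=0)               # temporal gradient per taxel
    dy, dx = np.gradient(readings, axis=(1, 2))      # spatial gradients per frame
    return dt, dy, dx

def track_contact(readings, thresh=0.5):
    """Follow the peak of the spatial-gradient magnitude from frame to frame."""
    _, dy, dx = spatiotemporal_gradients(readings)
    mag = np.hypot(dx, dy)
    return [np.unravel_index(frame.argmax(), frame.shape)
            for frame in mag if frame.max() > thresh]  # (row, col) peak per frame

T, H, W = 100, 4, 6                                   # stand-in for a BioTac-style array
fake = np.random.rand(T, H, W)
print(track_contact(fake)[:5])
```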

For robots navigating using only a camera, illumination changes in indoor environments can cause re-localization failures during autonomous navigation. In this paper, we present a multi-session visual SLAM approach to create a map made of multiple variations of the same locations in different illumination conditions. The multi-session map can then be used at any hour of the day for improved re-localization capability. The approach presented is independent of the visual features used, and this is demonstrated by comparing re-localization performance between multi-session maps created using the RTAB-Map library with SURF, SIFT, BRIEF, BRISK, KAZE, DAISY, and SuperPoint visual features. The approach is tested on six mapping and six localization sessions recorded at 30 min intervals during sunset using a Google Tango phone in a real apartment.
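As a minimal sketch of the kind of comparison described above, the code below scores a query frame against keyframes saved from different mapping sessions using two of the listed feature types and picks the session that re-localizes best. It assumes OpenCV and uses random placeholder images; it is a simplified stand-in for RTAB-Map’s loop-closure machinery, not the library itself.

```python
# Illustrative sketch: descriptor matching against multi-session keyframes.
import cv2
import numpy as np

def relocalization_score(query, keyframe, detector, norm):
    """Count ratio-test feature matches between a query image and one keyframe."""
    _, qd = detector.detectAndCompute(query, None)
    _, kd = detector.detectAndCompute(keyframe, None)
    if qd is None or kd is None:
        return 0
    good = 0
    for pair in cv2.BFMatcher(norm).knnMatch(qd, kd, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good += 1
    return good

# Placeholder images; in practice these would be keyframes from each mapping
# session and a live camera frame captured at localization time.
query = np.random.randint(0, 255, (480, 640), np.uint8)
sessions = {"sunset_t00": np.random.randint(0, 255, (480, 640), np.uint8),
            "sunset_t30": np.random.randint(0, 255, (480, 640), np.uint8)}

for name, det, norm in [("SIFT", cv2.SIFT_create(), cv2.NORM_L2),
                        ("BRISK", cv2.BRISK_create(), cv2.NORM_HAMMING)]:
    scores = {s: relocalization_score(query, kf, det, norm) for s, kf in sessions.items()}
    print(name, max(scores, key=scores.get), scores)
```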

Inchworm-style locomotion is one of the simplest gaits for mobile robots, enabling easy actuation, effective movement, and strong adaptation in nature. However, an agile inchworm-like robot that realizes versatile locomotion usually requires effective manipulation of friction forces, with a complicated actuation structure and control algorithm. In this study, we embody a friction force controller based on the deformation of the robot body to realize bidirectional locomotion. Two kinds of differential friction forces are integrated into a beam-like soft robot body, and along with cyclical actuation of the body, two locomotion gaits with opposite directions can be generated and controlled by the deformation process of the robot body, that is, the dynamic gaits. Based on these dynamic gaits, two locomotion control schemes, amplitude-based control and frequency-based control, are proposed, analyzed, and validated with both theoretical simulations and prototype experiments. The soft inchworm crawler achieves versatile locomotion with a simple system configuration and minimalist actuation input. This work is an example of using soft structure vibrations for challenging robotic tasks.
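The toy simulation below, with made-up friction coefficients and geometry, illustrates the underlying idea: direction-dependent friction at the two ends of a body whose length oscillates turns the oscillation into net displacement, and both the oscillation amplitude and frequency scale the resulting speed. It is a conceptual sketch, not the paper’s model; swapping the two friction coefficients reverses the direction of travel.

```python
# Toy model: anisotropic end friction + sinusoidal length change -> net crawling.
import numpy as np

def crawl(amplitude, freq, mu_easy=0.2, mu_hard=1.0, steps=2000, dt=0.01):
    rest = 0.1                                        # rest length of the body [m]
    head, tail = rest, 0.0
    for i in range(steps):
        target_len = rest + amplitude * np.sin(2 * np.pi * freq * i * dt)
        error = target_len - (head - tail)
        if error > 0:    # extension: head slides forward easily, tail resists back-slip
            head += error * mu_hard / (mu_easy + mu_hard)
            tail -= error * mu_easy / (mu_easy + mu_hard)
        else:            # contraction: tail slides forward easily, head resists back-slip
            head += error * mu_easy / (mu_easy + mu_hard)
            tail -= error * mu_hard / (mu_easy + mu_hard)
    return (head + tail) / 2                          # net displacement of body center

for amp in (0.01, 0.02, 0.04):
    print(f"amplitude {amp:.2f} m -> displacement {crawl(amp, freq=1.0):+.3f} m")
for f in (0.5, 1.0, 2.0):
    print(f"frequency {f:.1f} Hz -> displacement {crawl(0.02, freq=f):+.3f} m")
```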

We developed a novel framework for deep reinforcement learning (DRL) algorithms in task-constrained path generation problems for robotic manipulators, leveraging human-demonstrated trajectories. The main contribution of this article is the design of a reward function that can be used with generic reinforcement learning algorithms by utilizing Koopman operator theory to build a human intent model from the demonstrated trajectories. To ensure that the developed reward function produces the correct reward, the demonstrated trajectories are further used to create a trust domain within which the Koopman operator–based human intent prediction is considered; otherwise, the proposed algorithm asks for human feedback to assign rewards. The designed reward function is incorporated into the deep Q-network (DQN) framework, which results in a modified DQN algorithm. The effectiveness of the proposed learning algorithm is demonstrated using a simulated robotic arm that learns paths for constrained end-effector motion while considering the safety of humans in the robot’s surroundings.
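As a sketch of the reward structure described above, the code below fits a DMD-style linear predictor to demonstrated states (identity observables rather than a full Koopman lifting) and rewards transitions that agree with its prediction inside a simple Euclidean trust domain, deferring to human feedback outside it. The class name, trust test, and neutral fallback reward are assumptions for illustration, not the paper’s implementation.

```python
# Illustrative sketch: DMD-style intent model wrapped in a trust-domain reward.
import numpy as np

class KoopmanIntentReward:
    def __init__(self, demos, trust_radius=0.5):
        # demos: list of (T, d) arrays of demonstrated states
        X = np.hstack([d[:-1].T for d in demos])      # states x_t
        Y = np.hstack([d[1:].T for d in demos])       # successor states x_{t+1}
        self.K = Y @ np.linalg.pinv(X)                # least-squares one-step predictor
        self.demo_states = np.vstack(demos)
        self.trust_radius = trust_radius

    def __call__(self, state, next_state):
        dists = np.linalg.norm(self.demo_states - state, axis=1)
        if dists.min() > self.trust_radius:           # outside the trust domain
            return self.ask_human(state, next_state)
        predicted = self.K @ state                    # intent model's expected successor
        return -np.linalg.norm(next_state - predicted)

    def ask_human(self, state, next_state):
        # Placeholder for querying the demonstrator; returns a neutral reward here.
        return 0.0

demos = [np.cumsum(np.random.randn(50, 3) * 0.05, axis=0) for _ in range(5)]
reward_fn = KoopmanIntentReward(demos)
print(reward_fn(demos[0][10], demos[0][11]))          # reward for one demonstrated step
```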

Wearable robots are envisioned to amplify the independence of people with movement impairments by providing daily physical assistance. For portable, comfortable, and safe devices, soft pneumatic-based robots are emerging as a potential solution. However, due to inherent complexities, including compliance and nonlinear mechanical behavior, feedback control for facilitating human–robot interaction remains a challenge. Herein, we present the design, fabrication, and control architecture of a soft wearable robot that assists in supination and pronation of the forearm. The soft wearable robot integrates an antagonistic pair of pneumatic-based helical actuators to provide active pronation and supination torques. Our main contribution is a bio-inspired equilibrium-point control scheme that integrates proprioceptive feedback and exteroceptive input (e.g., the user’s muscle activation signals) directly with the on/off valve behavior of the soft pneumatic actuators. The proposed human–robot controller is directly inspired by the equilibrium-point hypothesis of motor control, which suggests that voluntary movements arise through shifts in the equilibrium state of the antagonistic muscle pair spanning a joint. We hypothesized that the proposed method would reduce the required effort during dynamic manipulation without affecting the error. To evaluate our proposed method, we recruited seven pediatric participants with movement disorders to perform two dynamic interaction tasks with a haptic manipulandum. Each task required the participant to track a sinusoidal trajectory while the haptic manipulandum behaved as either a spring-dominated or an inertia-dominated system. Our results reveal that the soft wearable robot, when active, reduced user effort on average by 14%. This work demonstrates the practical implementation of an equilibrium-point volitional controller for wearable robots and provides a foundational path toward versatile, low-cost, soft wearable robots.
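
As a simplified stand-in for the described control scheme, the sketch below maps pronator/supinator muscle activations and the measured forearm angle to on/off valve states by shifting a commanded equilibrium angle toward the more strongly activated muscle; the gains, deadband, and signal names are illustrative assumptions.

```python
# Hedged sketch: an equilibrium-point style on/off valve rule for an
# antagonistic pair of pneumatic actuators (illustrative, not the paper's
# controller).
def equilibrium_point_valves(emg_pronator, emg_supinator, measured_angle,
                             gain_deg=45.0, deadband_deg=2.0):
    """Map muscle activations (0..1) and proprioception to two valve states."""
    # The commanded equilibrium shifts toward the more strongly activated muscle.
    equilibrium = gain_deg * (emg_pronator - emg_supinator)
    error = equilibrium - measured_angle
    pronator_valve_open = error > deadband_deg     # inflate pronation actuator
    supinator_valve_open = error < -deadband_deg   # inflate supination actuator
    return pronator_valve_open, supinator_valve_open, equilibrium

# Example: user activates the pronator at 60% while the forearm sits at 5 degrees.
print(equilibrium_point_valves(0.6, 0.1, measured_angle=5.0))
```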

This paper presents the design, control, and experimental evaluation of a novel, fully automated robotic-assisted system for the positioning and insertion of a commercial full-core biopsy instrument under ultrasound image guidance. The robotic system consists of a novel 4-degree-of-freedom (DOF) add-on robot for positioning and inserting the biopsy instrument, attached to a UR5-based teleoperation system with 6 DOF. The robotic system incorporates the advantages of both freehand and probe-guided biopsy techniques. The proposed robotic system can be used as a slave robot in a teleoperation configuration or, in the future, as an autonomous or semi-autonomous robot. While the UR5 manipulator was controlled using a teleoperation scheme with a force controller, a reinforcement learning-based controller using the Deep Deterministic Policy Gradient (DDPG) algorithm was developed for the add-on robotic system. A dexterous workspace analysis of the add-on robotic system demonstrated that the system has a suitable workspace within the ultrasound image. Two sets of comprehensive experiments, comprising four experiments in total, were performed to evaluate the robotic system’s performance in terms of biopsy instrument positioning and needle insertion within the ultrasound plane. The experimental results demonstrated the robotic system’s ability to perform in-plane needle insertion. The overall mean error across all four experiments in tracking the needle angle was 0.446°, and the resolution of the needle insertion was 0.002 mm.

Partners in everyday collaborative tasks have to build a shared understanding of their environment by aligning their perceptions and establishing a common ground. This is one of the aims of shared perception: revealing characteristics of one’s individual perception to others with whom we share the same environment. In this regard, social cognitive processes, such as joint attention and perspective-taking, form a shared perception. From a Human-Robot Interaction (HRI) perspective, robots would benefit from the ability to establish shared perception with humans and a common understanding of the environment with their partners. In this work, we wanted to assess whether a robot that considers the differences in perception between itself and its partner could be more effective in its helping role, and to what extent this improves task completion and the interaction experience. For this purpose, we designed a mathematical model for collaborative shared perception that aims to maximise the collaborators’ knowledge of the environment when there are asymmetries in perception. Moreover, we instantiated and tested our model in a real HRI scenario. The experiment consisted of a cooperative game in which participants had to build towers of Lego bricks while the robot took the role of a suggester. In particular, we conducted experiments using two different robot behaviours. In one condition, based on shared perception, the robot gave suggestions by considering the partner’s point of view and using its inference about their common ground to select the most informative hint. In the other condition, the robot simply indicated the brick that would have yielded the highest score from its own, individual perspective. The adoption of shared perception in the selection of suggestions led to better performance in all instances of the game where the visual information was not a priori common to both agents. However, the subjective evaluation of the robot’s behaviour did not change between conditions.
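
A much-simplified sketch of the hint-selection idea, not the paper’s mathematical model, is shown below: candidate bricks are weighted by their game value and by how hidden they are from the partner’s point of view, so the robot’s suggestion adds the most to the common ground; the brick attributes and visibility weighting are illustrative assumptions.

```python
# Hedged sketch: choose the most informative hint under asymmetric perception.
from dataclasses import dataclass

@dataclass
class Brick:
    name: str
    value: float                 # score the brick would yield
    visible_to_partner: bool     # asymmetry of perception

def select_hint(bricks, shared_perception=True):
    """Return the brick to suggest under the chosen robot behaviour."""
    if not shared_perception:
        # Egocentric condition: just point at the highest-value brick.
        return max(bricks, key=lambda b: b.value)
    # Shared-perception condition: prefer valuable bricks the partner cannot
    # see, since those hints add the most to the common ground.
    def informativeness(b):
        return b.value * (1.0 if not b.visible_to_partner else 0.1)
    return max(bricks, key=informativeness)

bricks = [Brick("red", 8, True), Brick("blue", 6, False), Brick("green", 9, True)]
print(select_hint(bricks).name, select_hint(bricks, shared_perception=False).name)
```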

This paper presents a new approach for evaluating and controlling expressive humanoid robotic faces using open-source computer vision and machine learning methods. Existing research in Human-Robot Interaction lacks flexible, simple tools that scale for evaluating and controlling various robotic faces; thus, our goal is to demonstrate the use of readily available AI-based solutions to support the process. We use a newly developed humanoid robot prototype intended for medical training applications as a case example. The approach automatically captures, through a webcam, the robot’s facial action units, the components traditionally used to describe facial muscle movements in humans, while the face moves randomly. Instead of manipulating the actuators individually or training the robot to express specific emotions, we propose using action units as the means for controlling the robotic face, which enables a multitude of ways to generate dynamic motion, expressions, and behavior. The range of action units achieved by the robot is then analyzed to discover its expressive capabilities and limitations and to develop a control model by correlating action units with actuation parameters. Because the approach does not depend on specific facial attributes or actuation capabilities, it can be used for different designs and can continuously inform the development process. In healthcare training applications, our goal is to establish the prerequisite expressive capabilities of humanoid robots bounded by industrial and medical design constraints. Furthermore, to mediate human interpretation and thus enable decision-making based on observed cognitive, emotional, and expressive cues, our approach aims to find the minimum viable expressive capabilities of the robot without having to optimize for realism. The results from our case example demonstrate the flexibility and efficiency of the presented AI-based solutions in supporting the development of humanoid facial robots.
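
One simple way to realize such a control model, sketched below under strong assumptions (a linear actuator-to-AU relationship, toy data), is to fit a least-squares map from logged actuator commands to measured action-unit intensities during random motion and then invert it to command a target expression; this is an illustration, not the authors’ pipeline.

```python
# Hedged sketch: fit and invert a linear map between actuator commands and
# measured facial action-unit (AU) intensities.
import numpy as np

def fit_au_model(actuator_cmds, measured_aus):
    """Least-squares map M with aus ≈ cmds @ M (rows are samples)."""
    M, *_ = np.linalg.lstsq(actuator_cmds, measured_aus, rcond=None)
    return M

def command_for_expression(target_aus, M, limits=(0.0, 1.0)):
    """Actuator command whose predicted AUs approximate the target."""
    cmd = target_aus @ np.linalg.pinv(M)
    return np.clip(cmd, *limits)

# Toy example: 200 random-motion samples, 6 actuators, 10 tracked AUs.
rng = np.random.default_rng(1)
cmds = rng.uniform(0, 1, (200, 6))
true_map = rng.normal(size=(6, 10))                    # assumed ground truth
aus = cmds @ true_map + rng.normal(scale=0.01, size=(200, 10))
M = fit_au_model(cmds, aus)
print(command_for_expression(aus[0], M).round(2))
```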



Microrobotics engineers often turn to nature to inspire their builds. A group of researchers at Northwestern University has picked the peekytoe crab to build a remote-controlled microbot that is tiny enough to walk comfortably on the edge of a coin.

According to John A. Rogers, the lead investigator of the study, their work complements that of other scientists who are working on millimeter-scale robots, for example, worm-like structures that can move through liquid media with flagella. But to the best of his knowledge, their crab microbots are the smallest terrestrial robots—just half a millimeter wide—to walk on solid surfaces in open air.

The tiny robot moves with a scuttling motion thanks to shape memory alloys (SMA). This class of materials undergoes a phase transition at a certain temperature, triggering a shape change. “So you create material in an initial geometry, deform it, and then when you heat it up, it’ll go back to that initial geometry,” Rogers says. “We exploit the shape changes [as] the basis of kind of a mechanical actuator or kind of a muscle.”

To move the robot, lasers heat its “legs” in sequence; the shape memory alloy in each leg bends in response to the heat—and then returns to its original orientation upon cooling.


The robot comprises three key materials: an electronics-grade polymer for the body and parts of the limbs; the SMA, which forms the “active” component; and a thin layer of glass that acts as an exoskeleton to give the structure rigidity. Rogers adds that they are not constrained by these particular materials, however, and his team is looking at ways to integrate semiconducting materials and other kinds of conductors.

For movement, the researchers use the focus spot of a laser beam on the robot body. “Whenever the laser beam illuminates the shape memory alloy components of the robot, you induce [its] phase change and corresponding motion,” Rogers says, “and when the laser beam moves off, you get a fast cooling and the limb returns to the deformed geometry.” Thus, scanning the laser spot across the body of the robot can sequentially activate various joints, and thereby establish a gait and direction for motion.


Though this method has its advantages, Rogers would like to explore more options. “With the laser, you need some kind of optical access… [but depending] on where you want the robot to operate, that approach is going to be feasible or not,” says Rogers.

This is not the first time Rogers has had a hand in creating submillimeter-sized robots. His lab has developed tiny structures resembling worms and beetles, and even a winged microchip that moves through the air passively, using the same principles as the wind dispersal of seeds.

In 2015, Rogers and his colleagues also published a paper about using the concepts of kirigami, the Japanese art of paper cutting, as seen in pop-up books, for example, to design their robots. They use high-fidelity multilayer stacks of patterned materials supported by a silicon wafer, but while those are great for integrated circuits, they’re “no good for robots,” says Rogers, as they are flat. To move them into the third dimension, studying the principles of kirigami was a starting point.

As Rogers emphasizes, their research is purely exploratory at the moment, an attempt to introduce some additional ideas into microrobotic engineering. “We can move these robots around, make them walk in different directions, but they don’t execute a specific task,” he says. For example, even though the crab-bots have claws, these are just for visual purposes; they don’t move or grasp objects. “Creating capabilities for task execution would be a next step in research in this area,” he says. For now, though, making multi-material 3D structures and using SMAs for two-way actuation are his team’s two key contributions to the broader community.

For further exploration, he and his colleagues are thinking about how to add the ability to grasp or manipulate objects at this scale, as well as adding microcircuits, digital sensors, and wireless communication to the bots. Communication between the robots could allow them to operate as a swarm, for instance. Another area to work on is adding some kind of local power supply, powered by photovoltaics, for example, with a microcontroller to provide local heating in a timed sequence to control movement.

In terms of potential applications, Rogers envisions the tiny robots being useful for working in confined spaces, primarily for minimally invasive surgeries, and later as vehicles for building other tiny machines. But he also advocates caution: “I wouldn’t want to oversell what we’ve done. It’s pretty easy to slide into fantastical visions of these robots getting in the body and doing something powerful in terms of medical treatment. [But] that’s where we’d like to go, and it’s what’s motivating a lot of our work.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

The European Robocup Finals 2022, featuring Tech United vs. VDL Robotsports.

[ Tech United ]

Within this European Union project, we aim to autonomously monitor habitats. Regular monitoring of individual plant species allows for more sophisticated decision-making. The video was recorded in Perugia, Italy.

[ RSL ]

ICRA 2023 is in London!

[ ICRA 2023 ]

What can we learn from nature? What skills from the animal world can be used for industrial applications? Festo has been dealing with these questions in the Bionic Learning Network for years. In association with universities, institutes and development companies, we are developing research platforms whose basic technical principles are based on nature. A recurring theme here is the unique movements and functions of the elephant’s trunk.

[ Festo ]

We are proud to announce the relaunch of Misty, providing you with a more intuitive and easy-to-use robot platform! So what is new, we hear you ask? To begin with, we have updated Misty’s conversational skills, with improved NLU capabilities and support for more languages. Python is now our primary programming language going forward, complemented by enhanced Blockly drag-and-drop functionality. We think you will really enjoy our brand-new Misty Studio, which is more user-friendly and offers improved features.

[ Misty ]

We developed a self-contained end-effector for layouting on construction sites with aerial robots! The end-effector achieves high accuracy through the use of multiple contact points, compliance, and actuation.

[ Paper ]

The compliance and conformability of soft robots provide inherent advantages when working around delicate objects or in unstructured environments. However, rapid locomotion in soft robotics is challenging due to the slow propagation of motion in compliant structures, particularly underwater. Taking inspiration from cephalopods, here we present an underwater robot with a compliant body that can achieve repeatable jet propulsion by changing its internal volume and cross-sectional area to take advantage of jet propulsion as well as the added mass effect.

[ UCSD ]

I like this idea of making incidental art with robots.

[ RPL UCL ]

If you want to be at the cutting-edge of your research field and publish impactful research papers, you need the most cutting-edge hardware. Our technology is unique (we own the relevant IP), unrivaled and a must-have tool for those in robotics research.

[ Shadow ]

Hardware platforms for socially interactive robotics can be limited by cost or lack of functionality. This article presents the overall system—design, hardware, and software—for Quori, a novel, affordable, socially interactive humanoid robot platform for facilitating non-contact human-robot interaction (HRI) research.

[ Paper ]

Wyss Associate Faculty members Conor Walsh and Rob Wood discuss their visions for the future of bio-inspired soft robotics.

[ Wyss Institute ]

Towel folding: still not easy for robots.

[ Ishikawa Lab ]

We present hybrid adhesive end-effectors for bimanual handling of deformable objects. The end-effectors are designed with features meant to accommodate surface irregularities in macroscale form, mesoscale waviness, and microscale roughness, achieving good shear adhesion on surfaces with little gripping force. The new gripping system combines passive mechanical compliance with a hybrid electrostatic-adhesive pad so that humanoid robots can grasp a wide range of materials including paperboard and textured plastics.

[ Paper ]

MIT CSAIL grad students speak about what they think is the most important unsolved problem in computer science today.

[ MIT CSAIL ]

At the National Centre of Competence in Research (NCCR) Robotics, researchers are developing a new generation of robots that can work side by side with humans, fighting disabilities, facing emergencies, and transforming education.

[ NCCR ]

The OS-150 Robotics Laboratory is Lawrence Livermore National Laboratory’s facility for testing autonomous drones, vehicles, and robots of the future. The Lab, informally known as the “drone pen,” allows operators to pilot drones safely and build trust with their robotic teammates.

[ LLNL ]

I am not entirely certain whether a Roomba is capable of detecting and navigating pixelated poop IRL, but I’d like to think so.

[ iRobot ]

How Wing designed its hybrid drone for last-mile delivery.

[ Wing ]

Over the past ten years, AI has experienced breakthrough after breakthrough in fields as diverse as computer vision, speech recognition, and protein folding prediction. Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. Geoff joins Pieter Abbeel in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain.

[ Robot Brains ]
