IEEE Spectrum Robotics



Garbage is a global problem that each of us contributes to. Since the 1970s, we've all been told we can help fix that problem by assiduously recycling bottles and cans, boxes and newspapers.

So far, though, we haven’t been up to the task. Only 16 percent of the 2.1 billion tonnes of solid waste that the world produces every year gets recycled. The U.S. Environmental Protection Agency estimates that the United States recycled only about 32 percent of its garbage in 2018, putting the country in the middle of the pack worldwide. Germany, on the high end, captures about 65 percent, while Chile and Turkey barely do anything, recycling a mere 1 percent of their trash, according to a 2015 report by the Organisation for Economic Co-operation and Development (OECD).

Here in the United States, of the 32 percent of the trash that we try to recycle, about 80 to 95 percent actually gets recycled, as Jason Calaiaro of AMP Robotics points out in “AI Takes a Dumpster Dive.” The technology that Calaiaro’s company is developing could move us closer to 100 percent. But it would have no effect on the two-thirds of the waste stream that never makes it to recyclers.

Certainly, the marginal gains realized by AI and robotics will help the bottom lines of recycling companies, making it profitable for them to recover more useful materials from waste. But to make a bigger difference, we need to address the problem at the beginning of the process: Manufacturers and packaging companies must shift to more sustainable designs that use less material or materials that are easier to recycle.

According to the Joint Research Centre of the European Commission, more than “80 percent of all product-related environmental impacts are determined during the design phase of a product.” One company that applies AI at the start of the design process is Digimind GmbH, based in Berlin. As CEO Katharina Eissing told Packaging Europe last year, Digimind’s AI-aided platform lets package designers quickly assess the outcome of changes they make to designs. In one case, Digimind reduced the weight of a company’s 1.5-liter plastic bottles by 13.7 percent, a seemingly small improvement that becomes more impressive when you consider that the company produces 1 billion of these bottles every year.

That’s still just a drop in the polyethylene terephthalate bucket: The world produced an estimated 583 billion PET bottles last year, according to Statista. To truly address our global garbage problem, our consumption patterns must change: canteens instead of single-use plastic bottles, compostable paper boxes instead of plastic clamshell containers, reusable shopping bags instead of “disposable” plastic ones. And engineers involved in product design need to develop packaging free of PET, polystyrene, and polycarbonate, which break down into tiny particles called microplastics that researchers are now finding in human blood and feces.

As much as we may hope that AI can solve our problems for us, that’s wishful thinking. Human ingenuity got us into this mess and humans will have to regulate, legislate, and otherwise incentivize the private sector to get us out of it.



It’s Tuesday night. In front of your house sits a large blue bin, full of newspaper, cardboard, bottles, cans, foil take-out trays, and empty yogurt containers. You may feel virtuous, thinking you’re doing your part to reduce waste. But after you rinse out that yogurt container and toss it into the bin, you probably don’t think much about it ever again.

The truth about recycling in many parts of the United States and much of Europe is sobering. Tomorrow morning, the contents of the recycling bin will be dumped into a truck and taken to the recycling facility to be sorted. Most of the material will head off for processing and eventual use in new products. But a lot of it will end up in a landfill.

So how much of the material that goes into the typical bin avoids a trip to landfill? For countries that do curbside recycling, the number—called the recovery rate—appears to average around 70 to 90 percent, though widespread data isn’t available. That doesn’t seem bad. But in some municipalities, it can go as low as 40 percent.

What’s worse, only a small quantity of all recyclables makes it into the bins—just 32 percent in the United States and 10 to 15 percent globally. That’s a lot of material made from finite resources that needlessly goes to waste.

We have to do better than that. Right now, the recycling industry is facing a financial crisis, thanks to falling prices for sorted recyclables as well as a policy enacted by China in 2018 that restricts the import of many materials destined for recycling and shuts out most recyclables originating in the United States.

There is a way to do better. Using computer vision, machine learning, and robots to identify and sort recyclable material, we can improve the accuracy of automatic sorting machines, reduce the need for human intervention, and boost overall recovery rates.

My company, AMP Robotics, based in Louisville, Colo., is developing hardware and software that relies on image analysis to sort recyclables with far higher accuracy and recovery rates than are typical for conventional systems. Other companies are similarly working to apply AI and robotics to recycling, including Bulk Handling Systems, Machinex, and Tomra. To date, the technology has been installed in hundreds of sorting facilities around the world. Expanding its use will prevent waste and help the environment by keeping recyclables out of landfills and making them easier to reprocess and reuse.

AMP Robotics

Before I explain how AI will improve recycling, let’s look at how recycled materials were sorted in the past and how they’re being sorted in most parts of the world today.

When recycling began in the 1960s, the task of sorting fell to the consumer—newspapers in one bundle, cardboard in another, and glass and cans in their own separate bins. That turned out to be too much of a hassle for many people and limited the amount of recyclable materials gathered.

In the 1970s, many cities took away the multiple bins and replaced them with a single container, with sorting happening downstream. This “single stream” recycling boosted participation, and it is now the dominant form of recycling in developed countries.

Moving the task of sorting further downstream led to the building of sorting facilities. To do the actual sorting, recycling entrepreneurs adapted equipment from the mining and agriculture industries, filling in with human labor as necessary. These sorting systems had no computer intelligence, relying instead on the physical properties of materials to separate them. Glass, for example, can be broken into tiny pieces and then sifted and collected. Cardboard is rigid and light—it can glide over a series of mechanical camlike disks, while other, denser materials fall in between the disks. Ferrous metals can be magnetically separated from other materials; nonferrous metals like aluminum can be separated with an eddy-current separator, whose rapidly changing magnetic field induces currents in the metal that push it away from the rest of the stream.

By the 1990s, hyperspectral imaging, developed by NASA and first launched in a satellite in 1972, was becoming commercially viable and began to show up in the recycling world. Unlike human eyes, which mostly see in combinations of red, green, and blue, hyperspectral sensors divide images into many more spectral bands. The technology’s ability to distinguish between different types of plastics changed the game for recyclers, bringing not only optical sensing but computer intelligence into the process. Programmable optical sorters were also developed to separate paper products, distinguishing, say, newspaper from junk mail.
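
To illustrate the basic principle, here is a minimal sketch of how a measured spectrum might be matched against reference signatures to label a material. The band count, reflectance values, and material names are invented for illustration and are not taken from any commercial sorter.

```python
import numpy as np

# Hypothetical reference spectra (reflectance per band) for a few materials.
# Real sorters use many more bands and empirically measured spectral libraries.
REFERENCE_SPECTRA = {
    "PET":   np.array([0.61, 0.55, 0.48, 0.30, 0.22]),
    "HDPE":  np.array([0.58, 0.57, 0.52, 0.45, 0.40]),
    "paper": np.array([0.70, 0.68, 0.66, 0.63, 0.60]),
}

def classify_pixel(spectrum: np.ndarray) -> str:
    """Return the material whose reference spectrum is closest in Euclidean distance."""
    best_material, best_dist = None, float("inf")
    for material, reference in REFERENCE_SPECTRA.items():
        dist = np.linalg.norm(spectrum - reference)
        if dist < best_dist:
            best_material, best_dist = material, dist
    return best_material

# Example: a pixel whose spectrum resembles the PET reference.
print(classify_pixel(np.array([0.60, 0.54, 0.50, 0.31, 0.23])))  # -> "PET"
```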

So today, much of the sorting is automated. These systems generally sort to 80 to 95 percent purity—that is, 5 to 20 percent of the output shouldn’t be there. For the output to be profitable, however, the purity must be higher than 95 percent; below this threshold, the value drops, and often it’s worth nothing. So humans manually clean up each of the streams, picking out stray objects before the material is compressed and baled for shipping.
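
As a back-of-the-envelope illustration of why that threshold matters, the snippet below computes the purity of a hypothetical bale and checks it against a 95 percent cutoff; all of the numbers are invented.

```python
def purity(target_items: int, total_items: int) -> float:
    """Fraction of the sorted output that is actually the target material."""
    return target_items / total_items

# Invented example: a bale of 10,000 objects, 9,300 of which are the target material.
bale_purity = purity(9_300, 10_000)
print(f"Purity: {bale_purity:.1%}")                               # 93.0%
print("Sellable" if bale_purity >= 0.95 else "Needs manual cleanup")
```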

Despite all the automated and manual sorting, about 10 to 30 percent of the material that enters the facility ultimately ends up in a landfill. In most cases, more than half of that material is recyclable and worth money but was simply missed.

We’ve pushed the current systems as far as they can go. Only AI can do better.

Getting AI into the recycling business means combining pick-and-place robots with accurate real-time object detection. Pick-and-place robots combined with computer vision systems are used in manufacturing to grab particular objects, but they generally are just looking repeatedly for a single item, or for a few items of known shapes and under controlled lighting conditions. Recycling, though, involves infinite variability in the kinds, shapes, and orientations of the objects traveling down the conveyor belt, requiring nearly instantaneous identification along with the quick dispatch of a new trajectory to the robot arm.
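
In outline, that identify-then-dispatch loop might look something like the hedged sketch below. The detection structure, belt speed, and robot interface are hypothetical placeholders rather than a description of any real system.

```python
import time
from dataclasses import dataclass

@dataclass
class Detection:
    material: str      # e.g. "HDPE"
    x_mm: float        # position across the belt
    y_mm: float        # position along the belt at detection time
    timestamp: float   # when the detection was made

BELT_SPEED_MM_S = 500.0  # assumed belt speed

def predict_position(det: Detection, now: float) -> tuple[float, float]:
    """Advance a detected object along the belt to its estimated current position."""
    return det.x_mm, det.y_mm + BELT_SPEED_MM_S * (now - det.timestamp)

def sort_loop(detector, robot, target_material: str):
    """Continuously pick target objects as they pass through the robot's workspace."""
    while True:
        for det in detector.latest_detections():        # hypothetical detector API
            if det.material != target_material:
                continue
            x, y = predict_position(det, time.time())
            if robot.in_workspace(x, y):                 # hypothetical robot API
                robot.pick_and_place(x, y, bin_id=target_material)
```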

AI-based systems guide robotic arms to grab materials from a stream of mixed recyclables and place them in the correct bins. Here, a tandem robot system operates at a Waste Connections recycling facility [top], and a single robot arm [bottom] recovers a piece of corrugated cardboard. The United States does a pretty good job when it comes to cardboard: In 2021, 91.4 percent of discarded cardboard was recycled, according to the American Forest and Paper Association. AMP Robotics

My company first began using AI in 2016 to extract empty cartons from other recyclables at a facility in Colorado; today, we have systems installed in more than 25 U.S. states and six countries. We weren’t the first company to try AI sorting, but it hadn’t previously been used commercially. And we have steadily expanded the types of recyclables our systems can recognize and sort.

AI makes it theoretically possible to recover all of the recyclables from a mixed-material stream at accuracy approaching 100 percent, entirely based on image analysis. If an AI-based sorting system can see an object, it can accurately sort it.

Consider a particularly challenging material for today’s recycling sorters: high-density polyethylene (HDPE), a plastic commonly used for detergent bottles and milk jugs. (In the United States, Europe, and China, HDPE products are labeled as No. 2 recyclables.) In a system that relies on hyperspectral imaging, batches of HDPE tend to be mixed with other plastics and may have paper or plastic labels, making it difficult for the hyperspectral imagers to detect the underlying object’s chemical composition.

An AI-driven computer-vision system, by contrast, can determine that a bottle is HDPE and not something else by recognizing its packaging. Such a system can also use attributes like color, opacity, and form factor to increase detection accuracy, and even sort by color or specific product, reducing the amount of reprocessing needed. Though the system doesn’t attempt to understand the meaning of words on labels, the words are part of an item’s visual attributes.
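
One way to picture this fusion of cues is a classifier that combines a vision model’s product-level label with simple appearance attributes such as color and opacity. The sketch below is purely illustrative; the labels, thresholds, and fallback rule are assumptions, not AMP’s method.

```python
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    predicted_label: str   # output of a trained vision model, e.g. "detergent_bottle"
    confidence: float      # model confidence in that label
    dominant_color: str    # simple color estimate from the image crop
    opacity: float         # 0.0 = clear, 1.0 = fully opaque

# Invented mapping from product-level labels to resin categories.
LABEL_TO_RESIN = {"detergent_bottle": "HDPE", "milk_jug": "HDPE", "water_bottle": "PET"}

def resin_class(obj: ObjectFeatures) -> str:
    """Assign a resin category, falling back on appearance cues when the model is unsure."""
    if obj.confidence >= 0.8:
        return LABEL_TO_RESIN.get(obj.predicted_label, "unknown")
    # Low-confidence fallback: clear, colorless containers are more often PET.
    if obj.opacity < 0.3 and obj.dominant_color == "clear":
        return "PET"
    return "unknown"
```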

We at AMP Robotics have built systems that can do this kind of sorting. In the future, AI systems could also sort by combinations of material and by original use, enabling food-grade materials to be separated from containers that held household cleaners, and paper contaminated with food waste to be separated from clean paper.

Training a neural network to detect objects in the recycling stream is not easy. It is at least several orders of magnitude more challenging than recognizing faces in a photograph, because there can be a nearly infinite variety of ways that recyclable materials can be deformed, and the system has to recognize the permutations.

Inside the Sorting Center

Today’s recycling facilities use mechanical sorting, optical hyperspectral sorting, and human workers. Here’s what typically happens after the recycling truck leaves your house with the contents of your blue bin.

Trucks unload on a concrete pad, called the tip floor. A front-end loader scoops up material in bulk and dumps it onto a conveyor belt, typically at a rate of 30 to 60 tonnes per hour.

The first stage is the presort. Human workers remove large or problematic items that shouldn’t have made it onto collection trucks in the first place—bicycles, big pieces of plastic film, propane canisters, car transmissions.


Sorting machines that rely on optical hyperspectral imaging or human workers separate fiber (office paper, cardboard, magazines—referred to as 2D products, as they are mostly flat) from the remaining plastics and metals. In the case of the optical sorters, cameras stare down at the material rolling down the conveyor belt, detect an object made of the target substance, and then send a message to activate a bank of electronically controllable solenoids to divert the object into a collection bin.


The nonfiber materials pass through a mechanical system with densely packed camlike wheels. Large items glide past while small items, like that recyclable fork you thoughtfully deposited in your blue bin, slip through, headed straight for landfill—they are just too small to be sorted. Machines also smash glass, which falls to the bottom and is screened out.


The rest of the stream then passes under overhead magnets, which collect items made of ferrous metals, and an eddy-current-inducing machine, which jolts nonferrous metals to another collection area.


At this point, mostly plastics remain. More hyperspectral sorters, in series, can pull off plastics one type at a time—the HDPE of detergent bottles, for example, then the PET of water bottles.

Finally, whatever is left—between 10 and 30 percent of what came in on the trucks—goes to landfill.
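
The optical-sorting step described above ultimately comes down to a timing calculation: detect an object, wait for the belt to carry it to the air jets, and fire the valve beneath it. Here is a rough sketch under assumed numbers; the belt speed, camera-to-jet distance, and valve and scheduler interfaces are all hypothetical.

```python
BELT_SPEED_M_S = 3.0          # assumed belt speed
CAMERA_TO_JETS_M = 1.2        # assumed distance from the camera line to the air jets

def firing_delay(belt_speed: float = BELT_SPEED_M_S,
                 distance: float = CAMERA_TO_JETS_M) -> float:
    """Seconds between detecting an object and firing the jets beneath it."""
    return distance / belt_speed

def divert(detection_time: float, lateral_position_m: float, scheduler, valves):
    """Schedule the solenoid valve nearest the object to fire at the right moment."""
    valve_index = valves.nearest(lateral_position_m)              # hypothetical API
    scheduler.at(detection_time + firing_delay(),                 # hypothetical API
                 lambda: valves.fire(valve_index, duration_s=0.02))
```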


In the future, AI-driven robotic sorting systems and AI inspection systems could replace human workers at most points in this process. In the diagram, red icons indicate where AI-driven robotic systems could replace human workers and a blue icon indicates where an AI auditing system could make a final check on the success of the sorting effort.

It’s hard enough to train a neural network to identify all the different types of bottles of laundry detergent on the market today, but it’s an entirely different challenge when you consider the physical deformations that these objects can undergo by the time they reach a recycling facility. They can be folded, torn, or smashed. Mixed into a stream of other objects, a bottle might have only a corner visible. Fluids or food waste might obscure the material.

We train our systems by giving them images of materials belonging to each category, sourced from recycling facilities around the world. My company now has the world’s largest data set of recyclable material images for use in machine learning.

Using this data, our models learn to identify recyclables in the same way their human counterparts do, by spotting patterns and features that distinguish different materials. We continuously collect random samples from all the facilities that use our systems, and then annotate them, add them to our database, and retrain our neural networks. We also test our networks to find models that perform best on target material and do targeted additional training on materials that our systems have trouble identifying correctly.
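
In outline, that collect-annotate-retrain cycle resembles the sketch below. The helper functions and the 95 percent per-class cutoff are placeholders for illustration, not a description of AMP’s actual pipeline.

```python
def improvement_cycle(facilities, dataset, train, evaluate, annotate):
    """One turn of the collect-annotate-retrain loop described above.

    All five arguments are caller-supplied placeholders: `facilities` yield raw
    images, `annotate` returns labeled examples, `train` fits a model on the
    dataset, and `evaluate` returns per-class accuracy on a held-out split.
    """
    # 1. Pull random image samples from every facility running the system.
    samples = [img for facility in facilities for img in facility.random_samples(100)]

    # 2. Have human annotators label them and fold them into the dataset.
    dataset.extend(annotate(samples))

    # 3. Retrain several candidates and keep the one with the best mean accuracy.
    candidates = [train(dataset, seed=seed) for seed in range(3)]
    scores = {model: evaluate(model, dataset.validation_split()) for model in candidates}
    best = max(candidates, key=lambda m: sum(scores[m].values()) / len(scores[m]))

    # 4. Oversample the materials the best model still struggles with.
    weak_classes = [cls for cls, acc in scores[best].items() if acc < 0.95]
    dataset.oversample(weak_classes)
    return best
```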

In general, neural networks are susceptible to learning the wrong thing. Pictures of cows are associated with milk packaging, which is commonly produced as a fiber carton or HDPE container. But milk products can also be packaged in other plastics; for example, single-serving milk bottles may look like the HDPE of gallon jugs but are usually made from an opaque form of the PET (polyethylene terephthalate) used for water bottles. Cows don’t always mean fiber or HDPE, in other words.

There is also the challenge of staying up to date with the continual changes in consumer packaging. Any mechanism that relies on visual observation to learn associations between packaging and material types will need to consume a steady stream of data to ensure that objects are classified accurately.

But we can get these systems to work. Right now, our systems do really well on certain categories—more than 98 percent accuracy on aluminum cans—and are getting better at distinguishing nuances like color, opacity, and initial use (spotting those food-grade plastics).

Now that AI-based systems are ready to take on your recyclables, how might things change? Certainly, they will boost the use of robotics, which is only minimally used in the recycling industry today. Given the perpetual worker shortage in this dull and dirty business, automation is a path worth taking.

AI can also help us understand how well today’s existing sorting processes are doing and how we can improve them. Today, we have a very crude understanding of the operational efficiency of sorting facilities—we weigh trucks on the way in and weigh the output on the way out. No facility can tell you the purity of the products with any certainty; they only audit quality periodically by breaking open random bales. But if you placed an AI-powered vision system over the inputs and outputs of relevant parts of the sorting process, you’d gain a holistic view of what material is flowing where. This level of scrutiny is just beginning in hundreds of facilities around the world, and it should lead to greater efficiency in recycling operations. Being able to digitize the real-time flow of recyclables with precision and consistency also provides opportunities to better understand which recyclable materials are and are not currently being recycled and then to identify gaps that will allow facilities to improve their recycling systems overall.
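
A simplified version of that accounting, assuming vision systems report per-material object counts at the infeed and at each output stream, might look like this (the counts are invented):

```python
from collections import Counter

def recovery_by_material(infeed: Counter, outputs: dict[str, Counter]) -> dict[str, float]:
    """Fraction of each incoming material that ended up in its intended output stream.

    `infeed` counts objects by material entering the plant; `outputs` maps each
    output stream (e.g. "PET") to the counts of materials observed leaving it.
    """
    recovered = Counter()
    for stream_material, counts in outputs.items():
        recovered[stream_material] += counts.get(stream_material, 0)
    return {material: recovered[material] / n for material, n in infeed.items() if n}

# Invented example: 1,000 PET objects entered; 820 showed up in the PET stream.
infeed = Counter({"PET": 1_000, "HDPE": 400})
outputs = {"PET": Counter({"PET": 820, "HDPE": 15}),
           "HDPE": Counter({"HDPE": 350, "PET": 40})}
print(recovery_by_material(infeed, outputs))   # {'PET': 0.82, 'HDPE': 0.875}
```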

Sorting Robot Picking Mixed Plastics AMP Robotics

But to really unleash the power of AI on the recycling process, we need to rethink the entire sorting process. Today, recycling operations typically whittle down the mixed stream of materials to the target material by removing nontarget material—they do a “negative sort,” in other words. Instead, using AI vision systems with robotic pickers, we can perform a “positive sort.” Instead of removing nontarget material, we identify each object in a stream and select the target material.
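
The distinction can be stated compactly in code. In this hedged sketch, the detector is a stand-in for whatever identification step the machine uses, and a return value of None represents an object it could not classify; everything else about the interface is assumed.

```python
def negative_sort(stream, target, detector):
    """Eject what the detector flags as non-target; the product is whatever remains."""
    product = [obj for obj in stream if detector(obj) == target]
    # Objects the detector cannot classify are never ejected, so they stay in the product.
    product += [obj for obj in stream if detector(obj) is None]
    return product

def positive_sort(stream, target, detector):
    """Pick only objects positively identified as the target."""
    return [obj for obj in stream if detector(obj) == target]
```

The point of the contrast: in a negative sort, anything the detector fails to recognize contaminates the product, whereas in a positive sort an unrecognized object merely goes unrecovered.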

To be sure, our recovery rate and purity are only as good as our algorithms. Those numbers continue to improve as our systems gain more experience in the world and our training data set continues to grow. We expect to eventually hit purity and recovery rates of 100 percent.

The implications of moving from more mechanical systems to AI are profound. Rather than coarsely sorting to 80 percent purity and then manually cleaning up the stream to 95 percent purity, a facility can reach the target purity on the first pass. And instead of having a unique sorting mechanism handling each type of material, a sorting machine can change targets just by a switch in algorithm.

The use of AI also means that we can recover materials long ignored for economic reasons. Until now, it was only economically viable for facilities to pursue the most abundant, high-value items in the waste stream. But with machine-learning systems that do positive sorting on a wider variety of materials, we can start to capture a greater diversity of material at little or no overhead to the business. That’s good for the planet.

We are beginning to see a few AI-based secondary recycling facilities go into operation, with AMP’s technology first coming online in Denver in late 2020. These systems are currently used where material has already passed through a traditional sort, seeking high-value materials missed or low-value materials that can be sorted in novel ways and therefore find new markets.

Thanks to AI, the industry is beginning to chip away at the mountain of recyclables that end up in landfills each year—a mountain containing billions of tons of recyclables representing billions of dollars lost and nonrenewable resources wasted.

This article appears in the July 2022 print issue as “AI Takes a Dumpster Dive.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

The Real Robotics Lab at University of Leeds presents two Chef quadruped robots remotely controlled by a single operator to make a tasty burger as a team. The operator uses a gamepad to control their walking and a wearable motion capture system for manipulation control of the robotic arms mounted on the legged robots.

We’re told that these particular quadrupeds are vegans, and that the vegan burgers they make are “very delicious.”

[ Real Robotics ]

Thanks, Chengxu!

Elasto-plastic materials like Play-Doh can be difficult for robots to manipulate. RoboCraft is a system that enables a robot to learn how to shape these materials in just ten minutes.

[ MIT ]

Thanks, Rachel!

State-of-the-art frame interpolation methods generate intermediate frames by inferring object motions in the image from consecutive key-frames. In the absence of additional information, first-order approximations, i.e. optical flow, must be used, but this choice restricts the types of motions that can be modeled, leading to errors in highly dynamic scenarios. Event cameras are novel sensors that address this limitation by providing auxiliary visual information in the blind-time between frames.

[ ETH Zurich ]

Loopy is a robotic swarm of one Degree-of-Freedom (DOF) agents (i.e., a closed-loop made of 36 Dynamixel servos). Each agent (servo) makes its own local decisions based on interactions with its two neighbors. In this video, Loopy is trying to go from an arbitrary initial shape to a goal shape (Flying WV).

[ WVU ]

A collaboration between Georgia Tech Robotic Musicianship Group and Avshalom Pollak Dance Theatre. The robotic arms respond to the dancers’ movement and to the music. Our goal is for both humans and robots to be surprised and inspired by each other. If successful, both humans and robots will be dancing differently than they did before they met.

[ Georgia Tech ]

Thanks, Gil!

Lingkang Zhang wrote in to share a bipedal robot he’s working on. It’s 70 centimeters tall, runs ROS, can balance and walk, and costs just US $200!

[ YouTube ]

Thanks, Lingkang!

The public-private partnership between NASA and Redwire will demonstrate the ability of a small spacecraft—OSAM-2 (On-orbit Servicing, Assembly, and Manufacturing 2)—to manufacture and assemble spacecraft components in low-Earth orbit.

[ NASA ]

Inspired by fireflies, researchers create insect-scale robots that can emit light when they fly, which enables motion tracking and communication.

The ability to emit light also brings these microscale robots, which weigh barely more than a paper clip, one step closer to flying on their own outside the lab. These robots are so lightweight that they can’t carry sensors, so researchers must track them using bulky infrared cameras that don’t work well outdoors. Now, they’ve shown that they can track the robots precisely using the light they emit and just three smartphone cameras.

[ MIT ]

Unboxing and getting started with a TurtleBot 4 robotics learning platform with Maddy Thomson, Robotics Demo Designer from Clearpath Robotics.

[ Clearpath ]

We present a new gripper and exploration approach that uses a finger with very low reflected inertia for probing and then grasping objects. The finger employs a transparent transmission, resulting in a light touch when contact occurs. Experiments show that the finger can safely move faster into contacts than industrial parallel jaw grippers or even most force-controlled grippers with backdrivable transmissions. This property allows rapid proprioceptive probing of objects.

[ Stanford BDML ]

This is very, very water resistant. I’m impressed.

[ Unitree ]

I have no idea why Pepper is necessary here, but I do love that this ice cream shop is named Quokka.

[ Quokka ]

Researchers at ETH Zurich have developed a wearable textile exomuscle that serves as an extra layer of muscles. They aim to use it to increase the upper body strength and endurance of people with restricted mobility.

[ ETH Zurich ]

VISTA is a data-driven, photorealistic simulator for autonomous driving. It can simulate not just live video but LiDAR data and event cameras, and also incorporate other simulated vehicles to model complex driving situations.

[ MIT CSAIL ]

In the second phase of the ANT project, the hexapod CREX and the quadruped Aliengo are traversing rough terrain to show their terrain adaptation capabilities.

[ DFKI ]

Here are some satisfying food-service robot videos from FOOMA, a trade show in Japan.


Robit CUTR lettuce-core removal #FOOMAJAPAN2022

Denso Wave: handling irregular and flexible objects #FOOMAJAPAN2022

RT Fondly: bento-box assembly #FOOMAJAPAN2022

[ Kazumichi Moriyama ]



One year ago, we wrote about some “high-tech” warehouse robots from Amazon that appeared to be anything but. It was confusing, honestly, to see not just hardware that looked dated but concepts about how robots should work in warehouses that seemed dated as well. Obviously we’d expected a company like Amazon to be at the forefront of developing robotic technology to make their fulfillment centers safer and more efficient. So it’s a bit of a relief that Amazon has just announced several new robotics projects that rely on sophisticated autonomy to do useful, valuable warehouse tasks.

The highlight of the announcement is Proteus, which is like one of Amazon’s Kiva shelf-transporting robots that’s smart enough (and safe enough) to transition from a highly structured environment to a moderately structured environment, an enormous challenge for any mobile robot.

Proteus is our first fully autonomous mobile robot. Historically, it’s been difficult to safely incorporate robotics in the same physical space as people. We believe Proteus will change that while remaining smart, safe, and collaborative.

Proteus autonomously moves through our facilities using advanced safety, perception, and navigation technology developed by Amazon. The robot was built to be automatically directed to perform its work and move around employees—meaning it has no need to be confined to restricted areas. It can operate in a manner that augments simple, safe interaction between technology and people—opening up a broader range of possible uses to help our employees—such as the lifting and movement of GoCarts, the nonautomated, wheeled transports used to move packages through our facilities.

I assume that moving these GoCarts around is a significant task within Amazon’s warehouse, because last year, one of the robots that Amazon introduced (and that we were most skeptical of) was designed to do exactly that. It was called Scooter, and it was this massive mobile system that required manual loading and could move only a few carts to the same place at the same time, which seemed like a super weird approach for Amazon, as I explained at the time:

We know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.

From what I can make out from the limited information available, Proteus shows that Amazon is not, in fact, behind the curve with autonomous mobile robots (AMRs) and has actually been doing what makes sense all along, while for some reason occasionally showing us videos of other robots like Scooter and Bert in order to (I guess?) keep their actually useful platforms secret.

Anyway, Proteus looks to be a combination of one of Amazon’s newer Kiva mobile bases, along with the sensing and intelligence that allow AMRs to operate in semistructured warehouse environments alongside moderately trained humans. Its autonomy seems to be enabled by a combination of stereo-vision sensors and several planar lidars at the front and sides, a good combination for both safety and effective indoor localization in environments with a bunch of reliably static features.

I’m particularly impressed with the emphasis on human-robot interaction with Proteus, which often seems to be a secondary concern for robots designed for work in industry. The “eyes” are expressive in a minimalist sort of way, and while the front of the robot is very functional in appearance, the arrangement of the sensors and light bar also manages to give it a sort of endearingly serious face. That green light that the robot projects in front of itself also seems to be designed for human interaction—I haven’t seen any sensors that use light like that, but it seems like an effective way of letting a human know that the robot is active and moving. Overall, I think it’s cute, although very much not in a “let’s try to make this robot look cute” way, which is good.

What we’re not seeing with Proteus is all of the software infrastructure required to make it work effectively. Don’t get me wrong—making this hardware cost effective and reliable enough that Amazon can scale to however many robots it wants to scale to (likely a frighteningly large number) is a huge achievement. But there’s also all that fleet-management stuff that gets much more complicated once you have robots autonomously moving things around an active warehouse full of fragile humans who need to be both collaborated with and avoided.

Proteus is certainly the star of the show here, but Amazon did also introduce a couple of new robotic systems. One is Cardinal:

The movement of heavy packages, as well as the reduction of twisting and turning motions by employees, are areas we continually look to automate to help reduce risk of injury. Enter Cardinal, the robotic work cell that uses advanced artificial intelligence (AI) and computer vision to nimbly and quickly select one package out of a pile of packages, lift it, read the label, and precisely place it in a GoCart to send the package on the next step of its journey. Cardinal reduces the risk of employee injuries by handling tasks that require lifting and turning of large or heavy packages or complicated packing in a confined space.

The video of Cardinal looks to be a rendering, so I'm not going to spend too much time on it.

There’s also a new system for transferring pods from containers to adorable little container-hauling robots, designed to minimize the number of times that humans have to reach up or down or sideways:

It’s amazing to look at this kind of thing and realize the amount of effort that Amazon is putting in to maximize the efficiency of absolutely everything surrounding the (so far) very hard-to-replace humans in their fulfillment centers. There’s still nothing that can do a better job than our combination of eyes, brains, and hands when it comes to rapidly and reliably picking random things out of things and putting them into other things, but the sooner Amazon can solve that problem, the sooner the humans that those eyes and brains and hands belong to will be able to direct their attention to more creative and fulfilling tasks. Or that’s the idea, anyway.

Amazon says it expects Proteus to start off moving carts around in specific areas, with the hope that it’ll eventually automate cart movements in its warehouses as much as possible. And Cardinal is still in prototype form, but Amazon hopes that it’ll be deployed in fulfillment centers by next year.



The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears in our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Megatruck Runs on the Lightest Gas

Big things are happening in the world of hydrogen-powered vehicles. One of the latest monumental happenings is the debut of Anglo American’s 510-ton hydrogen-powered mining truck. The behemoth, which will put in work at a South African platinum mine, will replace an entire 40-truck fleet that services the mine. Together, those trucks consume about one million liters of diesel fuel each year. The new truck, whose power plant features eight 100-kilowatt hydrogen fuel cells and a 1.2-megawatt battery pack, is just the first earth-moving step in Anglo American’s NuGen project aimed at replacing its global fleet of 400 diesel mining trucks with hydrogen-powered versions. According to the company’s estimates, the switch will be the equivalent of taking half a million diesel-fueled passenger cars off the road.

Waldo Swiegers/Bloomberg/Getty Images


South Pole Snooping Platform

Snooping on penguins for clues regarding how they relate to their polar environment is a job for machines and not men. That is the conclusion reached by a team of researchers that is studying how climate change is threatening penguins’ icy Antarctic habitat and puzzling out how to protect these species. Rather than subjecting members of the team to the bitter cold weather in penguins’ neighborhoods, they’re studying these ecosystems using hybrid autonomous and remote-controlled Husky UGV robots. Four-wheeled robots like the one pictured here are equipped with arrays of sensors such as cameras and RFID scanners that read ID tags in chipped penguins. These allow the research team, representing several American and European research institutes, to track individual penguins, assess how successfully they are breeding, and get a picture of overall penguin population dynamics–all from their labs and offices in more temperate climates.

Clearpath Robotics


Seeing the Whole Scene

This is not a hailstorm with pieces of ice that are straight-edged instead of ball-shaped. The image is meant to illustrate an innovation in imaging that will allow cameras to capture stunning details of objects up close and far afield at the same time. The metalens is inspired by the compound eyes of a long-extinct invertebrate sea creature that could home in on distant objects and not lose focus on things that were up close. In a single photo, the lens can produce sharp images of objects as close as 3 centimeters and as far away as 1.7 kilometers. Previously, image resolution suffered as depth of field increased, and vice versa. But researchers from several labs in China and at the National Institute of Standards and Technology (NIST) in Gaithersburg, Md., have been experimenting with metasurfaces, which are surfaces covered with forests of microscopic pillars (the array of ice-cube-like shapes in the illustration). Tuning the size and shape of the pillars and arranging them so they are separated by distances shorter than the wavelength of light makes the metasurfaces capable of capturing images with unprecedented depth of field.

NIST


Auto Body Arms Race

Painters specializing in automobile detailing might want to begin seeking out new lines of work. Their art may soon be the exclusive province of a robotic arm that can replicate images drawn on paper and in computer programs with unrivaled precision. ABB’s PixelPaint computerized arm makes painting go much faster than is possible with a human artisan because its 1,000 paint nozzles deliver paint to a car’s surface much the same way that an inkjet printer deposits pigment on a sheet of paper. Because there’s no overspray, there is no need for the time-consuming masking and tape-removal steps. This level of precision, which puts 100 percent of the paint on the car, also eliminates paint waste, so paint jobs are less expensive. Heretofore, artistic renderings still needed the expert eye and practiced hand of a skilled artist. But PixelPaint has shown itself capable of laying down designs with a level of intricacy human eyes and hands cannot execute.

ABB



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4 November–5 November 2022, LOS ANGELES
CoRL 2022: 14 December–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today's videos!

The secret to making a robot is to pick one thing and do it really, really well. And then make it smaller and cheaper and cuter!

Not sure how much Baby Clappy is going to cost quite yet, but listen for it next year.

[ Baby Clappy ]

Digit is capable of navigating a wide variety of challenging terrain. Robust dynamic stability paired with advanced perception capabilities enables Digit to maneuver through a logistics warehouse environment or even a stretch of trail in the woods. Today Digit took a hike in our own back yard, along the famous Pacific Crest Trail.

[ Agility Robotics ]

Match of Tech United versus the ladies from Vitória SC during the European RoboCup 2022 in Guimarães, Portugal. Note that the ladies intentionally tied against our robots, so we could end the game in penalties.

[ Tech United ]

Franka Production 3 is the force sensitive robot platform made in Germany, an industry system that ignites productivity for everyone who needs industrial robotics automation.

[ Franka ]

David demonstrates advanced manipulation skills with the 7-DoF arm and fully articulated 5-finger hand using a pipette. To localize the object, we combine multi-object tracking with proprioceptive measurements. Together with path planning, this allows for controlled in-hand manipulation.

[ DLR RMC ]

DEEP Robotics has signed a strategic agreement with Huzhou Institute of Zhejiang University for cooperating on further research to seek various possibilities in drones and quadruped robots.

[ Deep Robotics ]

Have you ever wondered if that over-the-counter pill you took an hour ago is helping to relieve your headache? With NSF's support, a team of Stanford University mechanical engineers has found a way to target drug delivery…to better attack that headache. Meet the millirobots. These finger-sized, wireless, origami-inspired, amphibious robots could become medicine's future lifesavers.

[ Zhao Lab ]

Engineers at Rice University have developed a method that allows humans to help robots “see” their environments and carry out tasks. The strategy called Bayesian Learning IN the Dark—BLIND, for short—is a novel solution to the long-standing problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

[ Rice ]



In the puzzle of climate change, Earth’s oceans are an immense and crucial piece. The oceans act as an enormous reservoir of both heat and carbon dioxide, the most abundant greenhouse gas. But gathering accurate and sufficient data about the oceans to feed climate and weather models has been a huge technical challenge.

Over the years, though, a basic picture of ocean heating patterns has emerged. The sun’s infrared, visible-light, and ultraviolet radiation warms the oceans, with the heat absorbed particularly in Earth’s lower latitudes and in the eastern areas of the vast ocean basins. Thanks to wind-driven currents and large-scale patterns of circulation, the heat is generally driven westward and toward the poles, being lost as it escapes to the atmosphere and space.

This heat loss comes mainly from a combination of evaporation and reradiation into space. This oceanic heat movement helps make Earth habitable by smoothing out local and seasonal temperature extremes. But the transport of heat in the oceans and its eventual loss upward are affected by many factors, such as the ability of the currents and wind to mix and churn, driving heat down into the ocean. The upshot is that no model of climate change can be accurate unless it accounts for these complicating processes in a detailed way. And that’s a fiendish challenge, not least because Earth’s five great oceans occupy 140 million square miles, or 71 percent of the planet’s surface.

“We can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.”
—Susan Wijffels

Providing such detail is the purpose of the Argo program, run by an international consortium involving 30 nations. The group operates a global fleet of some 4,000 undersea robotic craft scattered throughout the world’s oceans. The vessels are called “floats,” though they spend nearly all of their time underwater, diving thousands of meters while making measurements of temperature and salinity. Drifting with ocean currents, the floats surface every 10 days or so to transmit their information to data centers in Brest, France, and Monterey, Calif. The data is then made available to researchers and weather forecasters all over the world.

The Argo system, which produces more than 100,000 salinity and temperature profiles per year, is a huge improvement over traditional methods, which depended on measurements made from ships or with buoys. The remarkable technology of these floats and the systems technology that was created to operate them as a network was recognized this past May with the IEEE Corporate Innovation Award, at the 2022 Vision, Innovation, and Challenges Summit. Now, as Argo unveils an ambitious proposal to increase the number of floats to 4,700 and increase their capabilities, IEEE Spectrum spoke with Susan Wijffels, senior scientist at the Woods Hole Oceanographic Institution on Cape Cod, Mass., and cochair of the Argo steering committee.


Why do we need a vast network like Argo to help us understand how Earth’s climate is changing?

Susan Wijffels: Well, the reason is that the ocean is a key player in Earth’s climate system. So, we know that, for instance, our average climate is really, really dependent on the ocean. But actually, how the climate varies and changes, beyond about a two-to-three-week time scale, is highly controlled by the ocean. And so, in a way, you can think that the future of climate—the future of Earth—is going to be determined partly by what we do, but also by how the ocean responds.

Susan Wijffels

Aren’t satellites already making these kinds of measurements?

Wijffels: The satellite observing system, a wonderful constellation of satellites run by many nations, is very important. But they only measure the very, very top of the ocean. They penetrate a couple of meters at the most. Most are only really seeing what’s happening in the upper few millimeters of the ocean. And yet, the ocean itself is very deep, 5, 6 kilometers deep, around the world. And it’s what’s happening in the deep ocean that is critical, because things are changing in the ocean. It’s getting warmer, but not uniformly warm. There’s a rich structure to that warming, and that all matters for what’s going to happen in the future.

How was this sort of oceanographic data collected historically, before Argo?

Wijffels: Before Argo, the main way we had of getting subsurface information, particularly things like salinity, was to measure it from ships, which you can imagine is quite expensive. These are research vessels that are very expensive to operate, and you need to have teams of scientists aboard. They’re running very sensitive instrumentation. And they would simply prepare a package and lower it down the side into the ocean. And to do a 2,000-meter profile, it would maybe take a couple of hours. To go to the seafloor, it can take 6 hours or so.

The ships really are wonderful. We need them to measure all kinds of things. But to get the global coverage we’re talking about, it’s just prohibitive. In fact, there are not enough research vessels in the world to do this. And so, that’s why we needed to try and exploit robotics to solve this problem.


Pick a typical Argo float and tell us something about it, a day in the life of an Argo float or a week in the life. How deep is this float typically, and how often does it transmit data?

Wijffels: They spend 90 percent of their time at 1,000 meters below the surface of the ocean—an environment where it’s dark and it’s cold. A float will drift there for about nine and a half days. Then it will make itself a little bit smaller in volume, which increases its density relative to the seawater around it. That allows it to then sink down to 2,000 meters. Once there, it will halt its downward trajectory, and switch on its sensor package. Once it has collected the intended complement of data, it expands, lowering its density. As the then lighter-than-water automaton floats back up toward the surface, it takes a series of measurements in a single column. And then, once they reach the sea surface, they transmit that profile back to us via a satellite system. And we also get a location for that profile through the global positioning system satellite network. Most Argo floats at sea right now are measuring temperature and salinity at a pretty high accuracy level.
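
To make the buoyancy mechanics concrete, here is a highly simplified sketch of how changing a float’s displaced volume flips it between sinking and rising. The mass, volumes, and seawater density below are round illustrative numbers, not the specification of a real Argo float.

```python
SEAWATER_DENSITY = 1027.0   # kg/m^3, a rough average value

def will_sink(mass_kg: float, volume_m3: float) -> bool:
    """A body sinks when its bulk density exceeds that of the surrounding water."""
    return (mass_kg / volume_m3) > SEAWATER_DENSITY

# Illustrative float: ~26 kg, with a small external bladder it can inflate or deflate.
MASS = 26.0
HULL_VOLUME = 0.0250        # m^3, fixed hull displacement
BLADDER_MAX = 0.0008        # m^3, oil pumped into the bladder to expand total volume

print(will_sink(MASS, HULL_VOLUME))                 # True  -> descends (bladder empty)
print(will_sink(MASS, HULL_VOLUME + BLADDER_MAX))   # False -> ascends (bladder full)
```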

How big is a typical data transmission, and where does it go?

Wijffels: The data is not very big at all. It’s highly compressed. It’s only about 20 or 30 kilobytes, and it goes through the Iridium network now for most of the float array. That data then comes ashore from the satellite system to your national data centers. It gets encoded and checked, and then it gets sent out immediately. It gets logged onto the Internet at a global data assembly center, but it also gets sent immediately to all the operational forecasting centers in the world. So the data is shared freely, within 24 hours, with everyone that wants to get hold of it.

This visualization shows some 3,800 of Argo’s floats scattered across the globe. Argo Program

You have 4,000 of these floats now spread throughout the world. Is that enough to do what your scientists need to do?

Wijffels: Currently, the 4,000 we have is a legacy of our first design of Argo, which was conceived in 1998. And at that time, our floats couldn’t operate in the sea-ice zones and couldn’t operate very well in enclosed seas. And so, originally, we designed the global array to be 3,000 floats; that was to kind of track what I think of as the slow background changes. These are changes happening across 1,000 kilometers in around three months—sort of the slow manifold of what’s happening to subsurface ocean temperature and salinity.

So, that’s what that design is for. But now, we have successfully piloted floats in the polar oceans and the seasonal sea-ice zones. So we know we can operate them there. And we also know now that there are some special areas like the equatorial oceans where we might need higher densities [of floats]. And so, we have a new design. And for that new design, we need to get about 4,700 operating floats into the water.

But we’re just starting now to really go to governments and ask them to provide the funds to expand the fleet. And part of the new design calls for floats to go deeper. Most of our floats in operation right now go only as deep as about 2,000 meters. But we now can build floats that can withstand the oceans’ rigors down to depths of 6,000 meters. And so, we want to build and sustain an array of about 1,200 deep-profiling floats, with an additional 1,000 of the newly built units capable of tracking the oceans by geochemistry. But this is new. These are big, new missions for the Argo infrastructure that we’re just starting to try and build up. We’ve done a lot of the piloting work; we’ve done a lot of the preparation. But now, we need to find sustained funding to implement that.

A new generation of deep-diving Argo floats can reach a depth of 6,000 meters. A spherical glass housing protects the electronics inside from the enormous pressure at that depth. MRV Systems/Argo Program

What is the cost of a typical float?

Wijffels: A typical core float, which just measures temperature and salinity and operates to 2,000 meters, costs between US $20,000 and $30,000, depending on the country. But they each last five to seven years. And so, the cost per profile that we get, which is what really matters for us, is very low—particularly compared with other methods [of acquiring the same data].


What kind of insights can we get from tracking heat and salinity and how they’re changing across Earth’s oceans?

Wijffels: There are so many things I could talk about, so many amazing discoveries that have come from the Argo data stream. There’s more than a paper a day that comes out using Argo. And that’s probably a conservative view. But I mean, one of the most important things we need to measure is how the ocean is warming. So, as the Earth system warms, most of that extra heat is actually being trapped in the ocean. Now, it’s a good thing that that heat is taken up and sequestered by the ocean, because it makes the rate of surface temperature change slower. But as it takes up that heat, the ocean expands. So, that’s actually driving sea-level rise. The ocean is pumping heat into the polar regions, which is causing both sea-ice and ice-sheet melt. And we know it’s starting to change regional weather patterns as well. With all that in mind, tracking where that heat is, and how the ocean circulation is moving it around, is really, really important for understanding both what's happening now to our climate system and what's going to happen to it in the future.

What has Argo’s data told us about how ocean temperatures have changed over the past 20 years? Are there certain oceans getting warmer? Are there certain parts of oceans getting warmer and others getting colder?

Wijffels: The signal in the deep ocean is very small. It’s a fraction, a hundredth of a degree, really. But we have very high precision instruments on Argo. The warming signal came out very quickly in the Argo data sets when averaged across the global ocean. If you measure in a specific place, say a time series at a site, there's a lot of noise there because the ocean circulation is turbulent, and it can move heat around from place to place. So, any given year, the ocean can be warm, and then it can be cool…that’s just a kind of a lateral shifting of the signal.

“We have discovered through Argo new current systems that we knew nothing about....There’s just been a revolution in our ability to make discoveries and understand how the ocean works.”
—Susan Wijffels

But when you measure globally and monitor the global average over time, the warming signal becomes very, very apparent. And so, as we’ve seen from past data—and Argo reinforces this—the oceans are warming faster at the surface than at their depths. And that’s because the ocean takes a while to draw the heat down. We see the Southern Hemisphere warming faster than the Northern Hemisphere. And there’s a lot of work that’s going on around that. The discrepancy is partly due to things like aerosol pollution in the Northern Hemisphere’s atmosphere, which actually has a cooling effect on our climate.

But some of it has to do with how the winds are changing. Which brings me to another really amazing thing about Argo: We’ve had a lot of discussion in our community about hiatuses or slowdowns of global warming. And that’s because of the surface temperature, which is the metric that a lot of people use. The oceans have a big effect on the global average surface temperature estimates because the oceans comprise the majority of Earth’s surface area. And we see that the surface temperature can peak when there’s a big El Niño–Southern Oscillation event. That’s because, in the Pacific, a whole bunch of heat from the subsurface [about 200 or 300 meters below the surface] suddenly becomes exposed to the surface. [Editor’s note: The El Niño–Southern Oscillation is a recurring, large-scale variation in sea-surface temperatures and wind patterns over the tropical eastern Pacific Ocean.]

What we see is this kind of chaotic natural phenomena, such as the El Niño–Southern Oscillation. It just transfers heat vertically in the ocean. And if you measure vertically through the El Niño or the tropical Pacific, that all cancels out. And so, the actual change in the amount of heat in the ocean doesn’t see those hiatuses that appear in surface measurements. It’s just a staircase. And we can see the clear impact of the greenhouse-gas effect in the ocean. When we measure from the surface all the way down, and we measure globally, it’s very clear.

Argo was obviously designed and established for research into climate change, but so many large scientific instruments turn out to be useful for scientific questions other than the ones they were designed for. Is that the case with Argo?

Wijffels: Absolutely. Climate change is just one of the questions Argo was designed to address. It’s really being used now to study nearly all aspects of the ocean, from ocean mixing to just mapping out what the deep circulation, the currents in the deep ocean, look like. We now have very detailed maps of the surface of the ocean from the satellites we talked about, but understanding what the currents are in the deep ocean is actually very, very difficult. This is particularly true of the slow currents, not the turbulence, which is everywhere in the ocean like it is in the atmosphere. But now, we can do that using Argo because Argo gives us a map of the sort of pressure field. And from the pressure field, we can infer the currents. We have discovered through Argo new current systems that we knew nothing about. People are using this knowledge to study the ocean eddy field and how it moves heat around the ocean.

People have also made lots of discoveries about salinity; how salinity affects ocean currents and how it is reflecting what’s happening in our atmosphere. There’s just been a revolution in our ability to make discoveries and understand how the ocean works.

During a typical 10-day cycle, an Argo float spends most of its time drifting at a depth of 1,000 meters, then dives to 2,000 meters and takes readings as it ascends to the surface, where it transmits its data via a satellite network. Argo Program

As you pointed out earlier, the signal from the deep ocean is very subtle, and it’s a very small signal. So, naturally, that would prompt an engineer to ask, “How accurate are these measurements, and how do you know that they’re that accurate?”

Wijffels: So, at the inception of the program, we put a lot of resources into a really good data-management and quality-assurance system. That’s the Argo Data Management system, which broke new ground for oceanography. And so, part of that innovation is that we have, in every nation that deploys floats, expert teams that look at the data. When the data is about a year old, they look at that data, and they assess it in the context of nearby ship data, which is usually the gold standard in terms of accuracy. And so, when a float is deployed, we know the sensors are routinely calibrated. And so, if we compare a freshly calibrated float’s profile with an old one that might be six or seven years old, we can make important comparisons. What’s more, some of the satellites that Argo is designed to work with also give us ability to check whether the float sensors are working properly.

And through the history of Argo, we have had issues. But we’ve tackled them head on. We have had issues that originated in the factories producing the sensors. Sometimes, we’ve halted deployments for years while we waited for a particular problem to be fixed. Furthermore, we try and be as vigilant as we can and use whatever information we have around every float record to ensure that it makes sense. We want to make sure that there’s not a big bias, and that our measurements are accurate.


You mentioned earlier there’s a new generation of floats capable of diving to an astounding 6,000 meters. I imagine that as new technology becomes available, your scientists and engineers are looking at this and incorporating it. Tell us how advances in technology are improving your program.

Wijffels: [There are] three big, new things that we want to do with Argo and that we’ve proven we can do now through regional pilots. The first one, as you mentioned, is to go deep. And so that meant reengineering the float itself so that it could withstand and operate under really high pressure. And there are two strategies to that. One is to stay with an aluminum hull but make it thicker. Floats with that design can go to about 4,000 meters. The other strategy was to move to a glass housing. So the float goes from a metal cylinder to a glass sphere. And glass spheres have been used in ocean science for a long time because they’re extremely pressure resistant. So, glass floats can go to those really deep depths, right to the seafloor of most of the global ocean.

The game changer is a set of sensors that are sensitive and accurate enough to measure the tiny climate-change signals that we’re looking for in the deep ocean. And so that requires an extra level of care in building those sensors and a higher level of calibration. And so we’re working with sensor manufacturers to develop and prove calibration methods with tighter tolerances and ways of building these sensors with greater reliability. And as we prove that out, we go to sea on research vessels, we take the same sensors that were in our shipboard systems, and compare them with the ones that we’re deploying on the profiling floats. So, we have to go through a whole development cycle to prove that these work before we certify them for global implementation.

You mentioned batteries. Are batteries ultimately the limit on a float’s lifetime? I mean, I imagine you can’t recharge a battery that’s 2,000 meters down.

Wijffels: You’re absolutely right. Batteries are one of the key limitations for floats right now as regards their lifetime, and what they’re capable of. If there were a leap in battery technology, we could do a lot more with the floats. We could maybe collect data profiles faster. We could add many more extra sensors.

So, battery power and energy management is a big, important aspect of what we do. And in fact, the way that we task the floats has been a problem, particularly with lithium batteries, because the floats spend about 90 percent of their time sitting in the cold and not doing very much. During their drift phase, we sometimes turn them on to take some measurements. But still, they don’t do very much. They don’t use their buoyancy engines. This is the engine that changes the volume of the float.

And what we’ve learned is that these batteries can passivate. And so, we might think we’ve loaded a certain amount of energy onto the float, but we never achieve the rated power level because of this passivation problem. But we’ve found different kinds of batteries that really sidestep that passivation problem. So, yes, batteries have been one thing that we’ve had to figure out so that energy is not a limiting factor in float operation.



Microrobotics engineers often turn to nature to inspire their builds. A group of researchers at Northwestern University has picked the peekytoe crab as the model for a remote-controlled microbot that is tiny enough to walk comfortably on the edge of a coin.

According to John A. Rogers, the lead investigator of the study, their work complements that of other scientists who are working on millimeter-scale robots, for example, worm-like structures that can move through liquid media with flagella. But to the best of his knowledge, their crab microbots are the smallest terrestrial robots—just half a millimeter wide—to walk on solid surfaces in open air.

The tiny robot moves with a scuttling motion thanks to shape memory alloys (SMA). This class of materials undergoes a phase transition at a certain temperature, triggering a shape change. “So you create material in an initial geometry, deform it, and then when you heat it up, it’ll go back to that initial geometry,” Rogers says. “We exploit the shape changes [as] the basis of kind of a mechanical actuator or kind of a muscle.”

To move the robot, lasers heat its “legs” in sequence; the shape memory alloy in each leg bends in response to the heat—and then returns to its original orientation upon cooling.

Northwestern University

The robot comprises three key materials—an electronics-grade polymer for the body and parts of the limbs; the SMA, which forms the “active” component; and a thin layer of glass as an exoskeleton to give the structure rigidity. Rogers adds that they are not constrained by these particular materials, however, and his team is looking at ways to integrate semiconducting materials and other kinds of conductors.

For movement, the researchers focus the spot of a laser beam on the robot’s body. “Whenever the laser beam illuminates the shape memory alloy components of the robot, you induce [its] phase change and corresponding motion,” Rogers says, “and when the laser beam moves off, you get a fast cooling and the limb returns to the deformed geometry.” Thus, scanning the laser spot across the body of the robot can sequentially activate various joints and thereby establish a gait and direction of motion.

Northwestern University
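To make the sequencing idea concrete, here is a minimal, purely illustrative sketch (not code from the Northwestern team) of how a laser-steering controller might cycle a focused spot over a set of leg positions, dwelling long enough at each to trigger the SMA phase change and then allowing a cooling interval before moving on. The leg coordinates, dwell time, and cooling time are hypothetical placeholders.

```python
import time

# Hypothetical leg positions (x, y) on the robot body, in micrometers,
# ordered to produce a forward "scuttling" gait. Values are placeholders.
GAIT_SEQUENCE = [(0, 50), (100, 50), (0, -50), (100, -50)]

HEAT_DWELL_S = 0.05   # time the laser spot dwells on a leg (assumed)
COOL_DELAY_S = 0.10   # time allowed for the SMA to cool and spring back (assumed)

def point_laser(x_um: float, y_um: float) -> None:
    """Stand-in for commanding the laser-scanning optics to a body location."""
    print(f"laser -> ({x_um:+.0f}, {y_um:+.0f}) um")

def step_once() -> None:
    """Heat each leg in sequence; cooling between pulses restores the bent resting shape."""
    for x, y in GAIT_SEQUENCE:
        point_laser(x, y)          # SMA heats, leg moves toward its memorized geometry
        time.sleep(HEAT_DWELL_S)
        time.sleep(COOL_DELAY_S)   # leg cools and returns to its deformed resting geometry

if __name__ == "__main__":
    for _ in range(3):             # three gait cycles
        step_once()
```

In this toy version, the gait and heading are set entirely by the order in which the spot visits the legs, which mirrors the scanning scheme Rogers describes.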

Though this method has its advantages, Rogers would like to explore more options. “With the laser, you need some kind of optical access… [but depending] on where you want the robot to operate, that approach is going to be feasible or not,” says Rogers.

This is not the first time Rogers has had a hand in creating submillimeter-sized robots. His lab has developed tiny structures resembling worms and beetles, and even a winged microchip that moves through the air passively, using the same principles as the wind dispersal of seeds.

In 2015, Rogers and his colleagues also published a paper about using the concepts of kirigami, the Japanese art of paper cutting, as seen in pop-up books, for example, to design their robots. They use high-fidelity multilayer stacks of patterned materials supported by a silicon wafer, but while those are great for integrated circuits, they’re “no good for robots,” says Rogers, as they are flat. To move them into the third dimension, studying the principles of kirigami was a starting point.

As Rogers emphasizes, their research is purely exploratory at the moment, an attempt to introduce some additional ideas into microrobotic engineering. “We can move these robots around, make them walk in different directions, but they don’t execute a specific task,” he says. For example, even though the crab-bots have claws, these are just for visual purposes; they don’t move or grasp objects. “Creating capabilities for task execution would be a next step in research in this area,” he says. For now, though, making multi-material 3D structures and using SMAs for two-way actuation are the team’s two key contributions to the broader community.

For further exploration, he and his colleagues are thinking about how to add the ability to grasp or manipulate objects at this scale, as well as adding microcircuits, digital sensors, and wireless communication to the bots. Communication between the robots could allow them to operate as a swarm, for instance. Another area to work on is adding some kind of local power supply, powered by photovoltaics, for example, with a microcontroller to provide local heating in a timed sequence to control movement.

In terms of potential applications, Rogers envisions the tiny robots being useful for working in confined spaces, first for minimally invasive surgery and later as vehicles for building other tiny machines. But he also advocates caution: “I wouldn’t want to oversell what we’ve done. It’s pretty easy to slide into fantastical visions of these robots getting in the body and doing something powerful in terms of medical treatment. [But] that’s where we’d like to go, and it’s what’s motivating a lot of our work.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

The European Robocup Finals 2022, featuring Tech United vs. VDL Robotsports.

[ Tech United ]

Within this European Union project, we aim to autonomously monitor habitats. Regular monitoring of individual plant species allows for more sophisticated decision-making. The video was recorded in Perugia, Italy.

[ RSL ]

ICRA 2023 is in London!

[ ICRA 2023 ]

What can we learn from nature? What skills from the animal world can be used for industrial applications? Festo has been dealing with these questions in the Bionic Learning Network for years. In association with universities, institutes and development companies, we are developing research platforms whose basic technical principles are based on nature. A recurring theme here is the unique movements and functions of the elephant’s trunk.

[ Festo ]

We are proud to announce the relaunch of Misty, providing you with a more intuitive and easy-to-use robot platform! So what is new, we hear you ask? To begin with, we have updated Misty’s conversational skills, focusing on improved NLU capabilities and adding more languages. Python has been added as our primary programming language going forward, complemented by enhanced Blockly drag-and-drop functionality. We think you will really enjoy our brand-new Misty Studio, which is more user friendly and has improved features.

[ Misty ]

We developed a self-contained end-effector for layout tasks on construction sites with aerial robots! The end-effector achieves high accuracy through the use of multiple contact points, compliance, and actuation.

[ Paper ]

The compliance and conformability of soft robots provide inherent advantages when working around delicate objects or in unstructured environments. However, rapid locomotion in soft robotics is challenging due to the slow propagation of motion in compliant structures, particularly underwater. Taking inspiration from cephalopods, here we present an underwater robot with a compliant body that can achieve repeatable jet propulsion by changing its internal volume and cross-sectional area to take advantage of jet propulsion as well as the added mass effect.

[ UCSD ]

I like this idea of making incidental art with robots.

[ RPL UCL ]

If you want to be at the cutting-edge of your research field and publish impactful research papers, you need the most cutting-edge hardware. Our technology is unique (we own the relevant IP), unrivaled and a must-have tool for those in robotics research.

[ Shadow ]

Hardware platforms for socially interactive robotics can be limited by cost or lack of functionality. This article presents the overall system—design, hardware, and software—for Quori, a novel, affordable, socially interactive humanoid robot platform for facilitating non-contact human-robot interaction (HRI) research.

[ Paper ]

Wyss Associate Faculty members Conor Walsh and Rob Wood discuss their visions for the future of bio-inspired soft robotics.

[ Wyss Institute ]

Towel folding: still not easy for robots.

[ Ishikawa Lab ]

We present hybrid adhesive end-effectors for bimanual handling of deformable objects. The end-effectors are designed with features meant to accommodate surface irregularities in macroscale form, mesoscale waviness, and microscale roughness, achieving good shear adhesion on surfaces with little gripping force. The new gripping system combines passive mechanical compliance with a hybrid electrostatic-adhesive pad so that humanoid robots can grasp a wide range of materials including paperboard and textured plastics.

[ Paper ]

MIT CSAIL grad students speak about what they think is the most important unsolved problem in computer science today.

[ MIT CSAIL ]

At the National Centre of Competence in Research (NCCR) Robotics, a new generation of robots that can work side by side with humans—fighting disabilities, facing emergencies, and transforming education—is being developed.

[ NCCR ]

The OS-150 Robotics Laboratory is Lawrence Livermore National Laboratory’s facility for testing autonomous drones, vehicles, and robots of the future. The Lab, informally known as the “drone pen,” allows operators to pilot drones safely and build trust with their robotic teammates.

[ LLNL ]

I am not entirely certain whether a Roomba is capable of detecting and navigating pixelated poop IRL, but I’d like to think so.

[ iRobot ]

How Wing designed its hybrid drone for last-mile delivery.

[ Wing ]

Over the past ten years, AI has experienced breakthrough after breakthrough in fields as diverse as computer vision, speech recognition, and protein folding prediction. Many of these advancements hinge on the deep learning work conducted by our guest, Geoff Hinton, who has fundamentally changed the focus and direction of the field. Geoff joins Pieter Abbeel in our two-part season finale for a wide-ranging discussion inspired by insights gleaned from Hinton’s journey from academia to Google Brain.

[ Robot Brains ]



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. This week is going to be a little bit on the short side, because Evan is getting married this weekend [!!!!!! –Ed.] and is actually supposed to be writing his vows right now.

We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL

Enjoy today’s videos!

These five videos from ICRA 2022 were created by David Garzón Ramos, a Ph.D. student at IRIDIA, Université libre de Bruxelles, and a member of the ERC DEMIURGE project. David won an award from the ICRA organizing committee to help him attend the conference and share his experiences, and here's how he described his approach to communicating the most exciting parts of ICRA:

At ICRA 2022, I collaborated with the Publicity Committee to portray some curious, interesting, and emotive moments of the conference in a series of video digests. I believe that working with robots is fun! However, I also believe that, quite often, the fascinating ecosystem of contemporary robots is reserved for a few fortunate researchers, makers, and engineers. In my videos, I tried to depict and share this rich ecosystem as it was happening at ICRA 2022 in Philadelphia. I focused on creating stories that could be accessible and appealing for both the specialized and the nonspecialized public. I wandered around the conference capturing those moments that, at least to my eyes, could help to communicate an important message: robots and people can engage positively. What could be more engaging than having funky robots?! :)





Many thanks to David for producing and sharing these videos!

We’ll have more ICRA content in the coming weeks, but if you’re looking for the latest research being done on awesome robots, look no further than the annual Legged Locomotion workshop. All of the talks from the ICRA 2022 edition are now online, and you can watch the whole playlist (or just skip to your favorite humans and robots!) below.

[ Legged Robots ]



When we think of bipedal humanoid robots, we tend to think of robots that aren’t just human-shaped, but also human-sized. There are exceptions, of course—among them, a subcategory of smaller humanoids that includes research and hobby humanoids that aren’t really intended to do anything practical. But at the International Conference on Robotics and Automation (ICRA) last week, roboticists from Carnegie Mellon University (CMU) asked an interesting question: What happens if you try to scale down a bipedal robot? Like, way down? This line from their paper sums it up: “our goal with this project is to make miniature walking robots, as small as a LEGO Minifigure (1-centimeter leg) or smaller.”

The current robot, while small (its legs are 15 centimeters long), is obviously much bigger than a LEGO minifig. But that’s okay, because it’s not supposed to be quite as tiny as the group's ultimate ambition would have it. At least not yet. It’s a platform that the CMU researchers are using to figure out how to proceed. They're still assessing what it’s going to take to shrink bipedal walking robots to the point where they could ride in Matchbox cars. At very small scales, robots run into all kinds of issues, including space and actuation efficiency. These crop up mainly because it’s simply not possible to cram the same number of batteries and motors that go into bigger bots into something that tiny. So, in order to make a tiny robot that can usefully walk, designers have to get creative.

Bipedal walking is already a somewhat creative form of locomotion. Despite how complex bipedal robots tend to be, if the only criterion for a bipedal robot is that it walks, then it’s kind of crazy how simple roboticists can make them. Here’s a 1990-ish (!) video from Tad McGeer, the first roboticist to explore the concept of passive dynamic walking by completely unpowered robots placed on a gentle downward slope:


The above video comes from the AMBER Lab, which has been working on efficient walking for large humanoids for a long time (you remember DURUS, right?). For small humanoids, the CMU researchers are trying to figure out how to leverage the principle of dynamic walking to make robots that can move efficiently and controllably while needing the absolute minimum of hardware, and in a way that can be scaled. With a small battery and just one actuator per leg, CMU’s robot is fully controllable, with the ability to turn and to start and stop on its own.
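As a rough illustration of what minimally actuated control can look like, here is a toy sketch, not the CMU controller: one extension command per leg, timed to the robot's side-to-side rocking, with a small left/right asymmetry standing in for steering. The rocking period, stroke length, and steering convention are all assumptions made up for illustration.

```python
import time
from dataclasses import dataclass

# Toy parameters; the real robot's rocking period and extension stroke differ.
ROCK_PERIOD_S = 0.8        # assumed side-to-side rocking period
BASE_EXTENSION_MM = 10.0   # assumed nominal leg-extension stroke

@dataclass
class LegCommand:
    left_mm: float
    right_mm: float

def extension_for_phase(phase: float, turn_bias: float = 0.0) -> LegCommand:
    """Extend whichever leg is swinging; bias the strokes to steer.

    phase: fraction of the rocking cycle in [0, 1).
    turn_bias: -1..1; negative steers one way, positive the other (toy convention).
    """
    left_swinging = phase < 0.5
    left = BASE_EXTENSION_MM * (1.0 - turn_bias) if left_swinging else 0.0
    right = 0.0 if left_swinging else BASE_EXTENSION_MM * (1.0 + turn_bias)
    return LegCommand(left, right)

if __name__ == "__main__":
    t0 = time.time()
    for _ in range(8):  # step through one full rocking cycle
        phase = ((time.time() - t0) % ROCK_PERIOD_S) / ROCK_PERIOD_S
        cmd = extension_for_phase(phase, turn_bias=0.2)  # gentle turn
        print(f"phase={phase:.2f}  L={cmd.left_mm:4.1f} mm  R={cmd.right_mm:4.1f} mm")
        time.sleep(ROCK_PERIOD_S / 8)
```

The point of the sketch is only that, with passive dynamics doing most of the work, the control problem can collapse to deciding when and how far to extend each leg.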

“Building at a larger scale allows us to explore the parameter space of construction and control, so that we know how to scale down from there,” says Justin Yim, one of the authors of the ICRA paper. “If you want to get robots into small spaces for things like inspection or maintenance or exploration, walking could be a good option, and being able to build robots at that size scale is a first step.”

“Obviously [at that scale] we will not have a ton of space,” adds Aaron Johnson, who runs CMU’s Robomechanics Lab. “Minimally actuated designs that leverage passive dynamics will be key. We aren't there yet on the LEGO scale, but with this paper we wanted to understand the way this particular morphology walks before dealing with the smaller actuators and constraints.”


Scalable Minimally Actuated Leg Extension Bipedal Walker Based on 3D Passive Dynamics, by Sharfin Islam, Kamal Carter, Justin Yim, James Kyle, Sarah Bergbreiter, and Aaron M. Johnson from CMU, was presented at ICRA 2022.


Your weekly selection of awesome robot videos

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IEEE ARSO 2022: 28–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Finally, after the first Rocky movie in 1976, the Robotic Systems Lab presents a continuation of the iconic series. Our transformer robot visited Philly in 2022 as part of the International Conference on Robotics and Automation.

[ Swiss-Mile ]

Human cells grown in the lab could one day be used for a variety of tissue grafts, but these cells need the right kind of environment and stimulation. New research suggests that robot bodies could provide tendon cells with the same kind of stretching and twisting as they would experience in a real human body. It remains to be seen whether using robots to exercise human cells results in a better tissue for transplantation into patients.

[ Nature ]

Researchers from Carnegie Mellon University took an all-terrain vehicle on wild rides through tall grass, loose gravel and mud to gather data about how the ATV interacted with a challenging, off-road environment.

The resulting dataset, called TartanDrive, includes about 200,000 of these real-world interactions. The researchers believe the data is the largest real-world, multimodal, off-road driving dataset, both in terms of the number of interactions and types of sensors. The five hours of data could be useful for training a self-driving vehicle to navigate off road.

[ CMU ]

Chengxu Zhou from the University of Leeds writes, “We have recently done a demo with one operator teleoperating two legged manipulators for a bottle-opening task.”

[ Real Robotics ]

Thanks, Chengxu!

We recently hosted a Youth Fly Day, bringing together 75 Freshman students from ICA Cristo Rey All Girls Academy of San Francisco for a day of hands-on exposure to and education about drones. It was an exciting opportunity for the Skydio team to help inspire the next generation of women pilots and engineers.

[ Skydio ]

Legged robotic systems leverage ground contact and the reaction forces they provide to achieve agile locomotion. However, uncertainty coupled with the discontinuous nature of contact can lead to failure in real-world environments with unexpected height variations, such as rocky hills or curbs. To enable dynamic traversal of extreme terrain, this work introduces the utilization of proprioception to estimate and react to unknown hybrid events and elevation changes and a two-degree-of-freedom tail to improve control independent of contact.

If you like this and are in the market for a new open source quadruped controller, CMU’s got that going on, too.

[ Robomechanics Lab ]

A bolt-on 360 camera kit for your drone that costs $430.

[ Insta360 ]

I think I may be too old to have any idea what’s going on here.

[ Neato ]

I’m not the biggest fan of the way the Stop Killer Robots folks go about trying to make their point, but they have a new documentary out, so here you go.

[ Immoral Code ]

This symposium hosted by the U.S. Department of Commerce and National Institute of Standards and Technology, Stanford Institute for Human-Centered Artificial Intelligence (HAI), and the FinRegLab, brought together leaders from government, industry, civil society, and academia to explore potential opportunities and challenges posed by artificial intelligence and machine learning deployment across different economic sectors, with a particular focus on financial services and healthcare.

[ Stanford HAI ]



The Big Picture features technology through the lens of photographers.

Every month, IEEE Spectrum selects the most stunning technology images recently captured by photographers around the world. We choose images that reflect an important advance, or a trend, or that are just mesmerizing to look at. We feature all images on our site, and one also appears in our monthly print edition.

Enjoy the latest images, and if you have suggestions, leave a comment below.

Figure From Fiction

For centuries, people in China have maintained a posture of awe and reverence for dragons. In traditional Chinese culture, the dragon—which symbolizes power, nobility, honor, luck, and success in business—even has a place in the calendar; every twelfth year is a dragon year. Flying, fire-breathing horses covered in lizard scales have been part of legend, lore, and literature since those things first existed. Now, in the age of advanced technology, an engineer has created his own mechatronic version of the mythical beast. François Delarozière, founder and artistic director of French street-performance company La Machine, is shown riding his brainchild, called Long Ma. The 72-tonne steel-and-wood automaton can carry 50 people on a covered terrace built into its back and still walk at speeds of up to 4 kilometers per hour. It will flap its leather-and-canvas-covered wings, and shoot fire, smoke, or steam from its mouth, nose, eyelids, and more than two dozen other vents located along its 25-meter-long body. Long Ma spends most of its time in China, but the mechanical beast has been transported to France so it can participate in fairs there this summer. It has already been featured at the Toulouse International Fair, where it thrilled onlookers from 9 to 18 April.

Alain Pitton/NurPhoto/AP

Body Area Network

Your social media accounts and your credit card information are not the only targets in cybercrooks’ crosshairs. Criminals will take advantage of the slightest lapse in security, even in electronic medical devices such as pacemakers, implantable insulin pumps, and neural implants. No one wants to imagine their final experience being a hostile takeover of their life-saving medical device. So, researchers are brainstorming ideas for foiling cyberattacks on such devices that exploit security weak points in their wireless power or Internet connections. A team at Columbia University, in New York City, has developed a wireless-communication technique for wearable medical devices that sends signals securely through body tissue. Signals are sent from a pair of implanted transmitters to a pair of receivers that are temporarily attached to the device user’s skin. Contrast this with RF communication, where the device continuously transmits data, waiting for the receiver to catch the signal. With this system, there is no security risk, because there are no unencrypted electromagnetic waves sent out into the air to hack. The tiny transmitter-receiver pair pictured here can communicate through the petal of a flower. Larger versions, say the Columbia researchers, will get signals from transmitters located adjacent to internal organs deep within the body to noninvasive external receivers stuck onto the skin.

Dion Khodagholy/Columbia Engineering

Sun in a Box

Anyone who has ever paid attention to how an incandescent lightbulb works knows that a significant amount of the energy aimed at creating light is lost as heat. The same is true in reverse, when solar panels lose some of the energy in photons as heat instead of it all being converted into electrons. Scientists have been steadily cutting these losses and ramping up the efficiency of photovoltaics, with the aim of bringing them to operational and economic parity with power plants that generate electricity via the spinning of turbines. The most efficient turbine-based generators convert only about 35 percent of the total theoretical energy contained in, say, natural gas into electricity. And until recently, that was enough to keep them head and shoulders above solar cells. But the tide looks to be turning. A thermophotovoltaic (TPV) cell developed by engineers at MIT has eclipsed the 40-percent-efficiency mark. The so-called “Sun in a Box” captures enough light energy that it reaches temperatures above 2,200 °C. At these temperatures, a silicon filament inside the box emits light in the infrared range. Those infrared photons get converted from light to charge instead of more heat, ultimately boosting the device’s overall conversion efficiency. The TPV’s creators and outside observers believe that such devices could operate at 50-percent efficiency at higher temperatures. That, say the MIT researchers, could dramatically lower the cost of electric power and turn the fossil-fuel- and fission-fired power plants upon which we so heavily rely into quaint anachronisms. “A turbine-based power production system’s cost is usually on the order of [US] $1 per watt. However, for thermophotovoltaics, there is potential to reduce it to the order of 10 cents per watt,” says Asegun Henry, the MIT professor of mechanical engineering who led the team that produced the TPV cell.

Felice Frankel

One Large Rat, Hold the Droppings

Rats are irrepressible. They go where they want, eat what they want, and seem immune to our best efforts to eradicate them and the pathogens they carry. Scientists have now decided that, since we cannot beat them, the smart thing to do is to recruit them for our purposes. But training rodents to carry out our wishes while ignoring their own instinctive drives is not likely to be a successful endeavor. Therefore, researchers are making robotic rats that have real rodents’ physical features but can be remotely controlled. One of the first use cases is in disaster zones, where debris and unstable terrain make it too dangerous for human rescue workers to tread. The robotic rat pictured here is a product of a group of researchers at the Beijing Institute of Technology. They tried other designs, but “large quadruped robots cannot enter narrow spaces, while micro quadruped robots can enter the narrow spaces but face difficulty in performing tasks, owing to their limited ability to carry heavy loads,” says Professor Qing Shi, a member of the team that developed the automaton rodent. They decided to model their machine after the rat because of how adept it is at squeezing into tight spaces and turning on a dime, and its remarkable strength relative to its size.

Qing Shi



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23 May–27 May 2022, PHILADELPHIA
IEEE ARSO 2022: 28 May–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL

Enjoy today’s videos!

After four years of development, Flyability has announced the Elios 3, which you are more than welcome to smash into anything you like.

“The Elios 3 is the single biggest project that Flyability has ever undertaken,” said Adrien Briod, CTO of Flyability. “If you think of the Elios 2 as your classic flip phone, only designed to make phone calls, the Elios 3 is the smartphone. It’s made to be customized for the specific demands of each user, letting you attach the payload you need so you can use the tool as you like, and allowing it to grow and improve over time with new payloads or software solutions.”

[ Flyability ]

We get that Digit is good at walking under things, but if Agility wants to make the robot more relatable, it should program Digit to bump its head like 5 percent of the time. We all do it.

[ Agility ]

Skybrush is a drone-show management platform that’s now open source, and if drone shows aren’t your thing, it’s also good for coordinating multiple drones in any other way you want. Or you can make drone shows your thing!

We share Skybrush because we are proud of it, and because we envision a growing community around it, consisting of enthusiastic and motivated experts and users all around the world who can join our mission to create something great for the future. The drone industry is evolving at light speed; our team alone is too small to keep pace with it. But we have a core that is rock solid, and we know for sure that great things can be built on top of it.

[ Skybrush ]

This happened back in the fall of 2021, but it’s still cool seeing the full video of a Gremlin launch, flight, and capture sequence.

[ Dynetics ]

NASA’s InSight lander touched down in the Elysium Planitia region of Mars in November of 2018. During its time on the Red Planet, InSight has achieved all its primary science goals and continues to hunt for quakes on Mars.

[ Insight ]

This kite-powered drone is blowing my mind.

[ Kite Propulsion ]

A friendly reminder that Tertill is anxious to massacre the weeds in your garden.

[ Tertill ]

I am not a fan of this ElliQ commercial.

[ ElliQ ]

We are excited to announce that the 2022 edition of the Swiss Drone Days will take place on 11–12 June in Dubendorf/Zurich. The event will feature live demos including autonomous drone racing...in one of the largest drone flying arenas in the world, spectacular drone races by the Swiss drone league, presentations of distinguished speakers, [and] an exhibition and trade fair.

[ Drone Days ]

Enjoy 8 minutes of fast-paced, extremely dramatic, absolutely mind-blowing robot football highlights.

[ RoboCup ]

This week’s GRASP on Robotics seminar is from Katherine Kuchenbecker at the Max Planck Institute for Intelligent Systems, on haptics and physical human-robot interaction.

“A haptic interface is a mechatronic system that modulates the physical interaction between a human and their tangible surroundings. Such systems typically take the form of grounded kinesthetic devices, ungrounded wearable devices, or surface devices, and they enable the user to act on and feel a remote or virtual environment. I will elucidate key approaches to creating effective haptic interfaces by showcasing several systems my team created and evaluated over the years.”

[ UPenn ]

This Lockheed Martin Robotics Seminar is from Xuesu Xiao from The Everyday Robot Project at X, on Deployable Robots that Learn.

“While many robots are currently deployable in factories, warehouses, and homes, their autonomous deployment requires either the deployment environments to be highly controlled, or the deployment to only entail executing one single preprogrammed task. These deployable robots do not learn to address changes and to improve performance. For uncontrolled environments and for novel tasks, current robots must seek help from highly skilled robot operators for teleoperated (not autonomous) deployment. In this talk, I will present three approaches to removing these limitations by learning to enable autonomous deployment in the context of mobile robot navigation, a common core capability for deployable robots. Building on robust autonomous navigation, I will discuss my vision toward a hardened, reliable, and resilient robot fleet which is also task-efficient and continually learns from each other and from humans.”

[ UMD ]



Eight-ish years ago, back when drone delivery was more hype than airborne reality (even more so than it is now), DHL tested a fully autonomous delivery service that relied on drones to deliver packages to an island 12 kilometers off Germany’s North Sea coast. The other alternative for getting parcels to the island was a ferry. But because the ferry didn’t run every day, the drones filled the scheduling gaps so residents of the island could get important packages without having to wait.

“To the extent that it is technically feasible and economically sensible,” DHL said at the time, “the use of [drones] to deliver urgently needed goods to thinly populated or remote areas or in emergencies is an interesting option for the future.” We’ve seen Zipline have success with this approach, and now drones are becoming affordable and reliable enough that they’re starting to make sense for use cases that are slightly less urgent than blood and medication deliveries. Thinly populated or remote areas can benefit from drones even when they aren’t having an emergency. Case in point: The United Kingdom’s Royal Mail has announced plans to establish more than 50 new postal drone routes over the next three years.

The drones themselves come from Windracers Group, and they’re beefy, able to carry a 100-kilogram payload up to 1,000 km with full autonomy. Pretty much everything on board is redundant: a pair of engines, six separate control units, and backups for the avionics, communications, and ground control. Here’s an overview of a pilot (pilotless?) project from last year:

Subject to CAA approval and the ongoing planned improvement in UAV economics, Royal Mail is aiming to secure more than 50 drone routes supported by up to 200 drones over the next three years. Island communities across the Isles of Scilly, Shetland Islands, Orkney Islands, and the Hebrides would be the first to benefit. Longer term, the ambition is to deploy a fleet of more than 500 drones servicing all corners of the U.K.

“Corners” is the operative word here, and it’s being used more exclusively than inclusively—these islands are particularly inconvenient to get to, and drones really are the best way of getting regular, reliable mail delivery to these outposts in a cost-effective way. Other options are infrequent boats or even more infrequent large piloted aircraft. But when you consider the horrific relative expense of those modes of transportation, it’s hard for drones not to be cast in a favorable light. And when you want frequent service to a location such as Fair Isle, as shown in the video below, a drone is not only your best bet but also your only reasonable one—it flew 105 km in 40 minutes, fighting strong winds much of the way:

There’s still some work to be done to gain the approval of the U.K.’s Civil Aviation Authority. At this point, figuring out those airspace protections and safety regulations and all that stuff is likely more of an obstacle than the technical challenges that remain. But personally, I’m much more optimistic about use cases like the one Royal Mail is proposing here than I am about drone delivery of tacos or whatever to suburbanites, because the latter seems very much like a luxury, while the former is an essential service.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23 May–27 May 2022, PHILADELPHIA
IEEE ARSO 2022: 28 May–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28 June–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11 July–17 July 2022, BANGKOK
IEEE CASE 2022: 20 August–24 August 2022, MEXICO CITY
CLAWAR 2022: 12 September–14 September 2022, AZORES, PORTUGAL

Enjoy today's videos!

ABB Robotics has collaborated with two world-renowned artists—8-year-old Indian child prodigy Advait Kolarkar and Dubai-based digital-design collective Illusorr—to create the world’s first robot-painted art car. ABB’s award-winning PixelPaint technology has, without human intervention, perfectly recreated Advait’s swirling, monochromatic design as well as Illusorr’s tricolor geometrical patterns.

[ ABB ]

Working closely with users and therapists, EPFL spin-off Emovo Care has developed a light and easy-to-attach hand exoskeleton for people unable to grasp objects following a stroke or accident. The device has been successfully tested in several hospitals and rehabilitation centers.

This is pretty amazing, because it’s not just a research project—it’s actually a product that's helping patients. If you think this might be able to help you (and you live in Switzerland), Emovo is currently offering free trials.

[ Emovo Care ] via [ EPFL ]

Thanks, Luca!

Uh, I don’t exactly know where this research is going, but the fact that they’ve got a pair of robotic legs that are nearly 2 meters tall is a little scary.

[ KIMLAB ]

The most impressive thing about this aerial tour of AutoX’s Pingshan RoboTaxi Operations Center is that AutoX has nine (!) more of them.

[ AutoX ]

In addition to delivering your lunch, Relay+ will also magically transform plastic food packaging into more eco-friendly cardboard. Amazing!

[ Relay ]

Meet Able Mabel, the incredible robotic housekeeper, whose only function is to make your life more leisurely. Yours for just £500. Too good to be true? Well, in 1966 it is, but if Professor Thring at the department of mechanical engineering of Queen Mary College has his way, by 1976 there could be an Able Mabel in every home. He shows us some of the robotic prototypes he has been working on.

This clip is from “Tomorrow's World,” originally broadcast 16 June 1966.

[ BBC Archive ]

I find the sound effects in this video to be very confusing.

[ AgileX ]

The first part of this video is extremely satisfying to watch.

[ Paper ] via [ AMTL ]

Thanks to this unboxing video of the Jueying X20 quadruped, I now know that it’s best practice to tuck your robot dog in when you’ve finished playing with it.

[ Deep Robotics ]

As not-sold as I am on urban drone delivery, I will grant you that Wing is certainly putting the work in.

[ Wing ]

GlobalFoundries, a global semiconductor manufacturer, has turned to Spot to further automate their data collection for condition monitoring and predictive maintenance. Manufacturing facilities are filled with thousands of inspection points, and adding fixed sensors to all these assets is not economical. With Spot bringing the sensors to their assets, the team collects valuable information about the thermal condition of pumps and motors, as well as taking analog gauge readings.

[ Boston Dynamics ]

The Langley Aerodrome No. 8 (LA-8) is a distributed-electric-propulsion, vertical-takeoff-and-landing (VTOL) aircraft that is being used for wind-tunnel testing and free-flight testing at the NASA Langley Research Center. The intent of the LA-8 project is to provide a low-cost, modular test bed for technologies in the area of advanced air mobility, which includes electric urban and short regional flight.

[ NASA ]

As social robots become increasingly prevalent in day-to-day environments, they will participate in conversations and appropriately manage the information shared with them. However, little is known about how robots might appropriately discern the sensitivity of information, which has major implications for human-robot trust. As a first step to address a part of this issue, we designed a privacy controller, CONFIDANT, for conversational social robots, capable of using contextual metadata (for example, sentiment, relationships, topic) from conversations to model privacy boundaries.

[ Paper ]

The Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS) is hosting a series of special talks on modular self-reconfigurable robots, starting with Mark Yim and Kirstin Petersen.

Subscribe to the AIRS YouTube channel for more talks over the next few weeks!

[ AIRS ]

Thanks, Tin Lun!



The relatively simple and now quite pervasive quadrotor design for drones emphasizes performance and manufacturability, which is fine, but there are some trade-offs—namely, endurance. Four motors with rapidly spinning tiny blades suck up battery power, and while consumer drones have mitigated this somewhat by hauling around ever-larger batteries, the fundamental problem is one of efficiency in flight.

In a paper published this week in Science Robotics, researchers from the City University of Hong Kong have come up with a drone inspired by maple seeds that weighs less than 50 grams but can hold a stable hover for over 24 minutes.

Maple seed pods, also called samaras, are those things you see whirling down from maple trees in the fall, helicopter style. The seed pods are optimized for maximum air time through efficient rotating flight, thanks to an evolutionary design process that rewards the distance traveled from the parent tree, resulting in a relatively large wing with a high ratio of wing to payload.

Samara drones (or monocopters, more generally) have been around for quite a while. They make excellent passive spinny gliders when dropped in midair, and they can also achieve powered flight with the addition of a propulsion system on the tip of the wing. This particular design is symmetrical, using two sizable wings, each with a tip propeller. The electronics, battery, and payload are in the center, and flight consists of the entire vehicle spinning at about 200 rpm:

The bicopter is inherently stable, with the wings acting as aerodynamic dampers that result in passive-attitude stabilization, something that even humans tend to struggle with. With a small battery, the drone weighs just 35 grams with a wingspan of about 60 centimeters. The key to the efficiency is that unlike most propellerized drones, the propellers aren’t being used for lift—they’re being used to spin the wings, and that’s where the lift comes from. Full 3D control is achieved by carefully pulsing the propellers at specific points in the rotation of the vehicle to translate in any direction. With a 650-milliampere-hour battery (contributing to a total vehicle mass of 42.5 g), the drone is able to hover in place for 24.5 minutes. The ratio of mass to power consumption that this represents is about twice as good as other small multirotor drones.
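The cyclic-pulsing idea can be sketched in a few lines. The following is an illustrative toy model, not the authors' controller: each tip propeller is throttled up only during the part of each revolution when its wing sits roughly opposite the desired direction of travel, so the extra thrust averages out to a net lateral nudge over each rotation. The baseline throttle, gain, and geometry are assumptions made up for this sketch.

```python
import math

SPIN_RATE_HZ = 200.0 / 60.0      # ~200 rpm, the spin rate quoted in the article
HOVER_THROTTLE = 0.5             # assumed baseline throttle for hover
PULSE_GAIN = 0.2                 # assumed extra throttle during the pulse window

def throttles(t: float, desired_heading_rad: float) -> tuple:
    """Toy cyclic control: pulse each tip propeller once per revolution.

    The two wings (and their propellers) sit 180 degrees apart, so the second
    propeller's pulse window is offset by pi radians.
    """
    body_angle = 2.0 * math.pi * SPIN_RATE_HZ * t  # current rotation angle
    cmds = []
    for offset in (0.0, math.pi):
        wing_angle = body_angle + offset
        # Throttle up when this wing points roughly opposite the desired heading,
        # so the momentary extra thrust nudges the vehicle toward that heading.
        alignment = math.cos(wing_angle - (desired_heading_rad + math.pi))
        pulse = PULSE_GAIN * max(0.0, alignment)
        cmds.append(HOVER_THROTTLE + pulse)
    return cmds[0], cmds[1]

if __name__ == "__main__":
    for i in range(6):
        t = i / (6 * SPIN_RATE_HZ)          # sample points within one revolution
        p1, p2 = throttles(t, desired_heading_rad=0.0)
        print(f"t={t*1000:5.1f} ms  prop1={p1:.2f}  prop2={p2:.2f}")
```

The real vehicle's dynamics are richer than this (gyroscopic effects alone complicate things), but the sketch captures the core trick: translation comes from timing thrust within the spin cycle, not from tilting the whole craft.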

You may be wondering just how fundamentally useful a platform like this is if it’s constantly spinning. Some sensors simply don’t care about spinning, while other sensors have to spin themselves if they’re not already spinning, so it’s easy to see how this spinning effect could actually be a benefit for, say, lidar. Cameras are a bit more complicated, but by syncing the camera frame rate to the spin rate of the drone, the researchers were able to use a 22-g camera payload to capture four 3.5 fps videos simultaneously, recording video of every direction at once.
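As a rough, back-of-the-envelope check (assuming the roughly 200 rpm spin rate mentioned above): 200 rpm is about 3.3 revolutions per second, so triggering the shutter at four fixed headings per revolution yields around 13 exposures per second in total, or roughly 3.3 frames per second per direction, which lines up with the four ~3.5 fps streams the researchers report.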

Despite the advantages of these samara-inspired designs, we haven’t seen them make much progress out of research contexts, which is a real shame. The added complication seems to be enough that, at least for most consumer and research applications, it’s just easier to build traditional quadrotors. Near-term applications might be situations in which you need a lightweight, relatively long-duration aerial mapping or surveillance system.

“A bioinspired revolving-wing drone with passive attitude stability and efficient hovering flight,” by Songnan Bai, Qingning He, and Pakpong Chirarattananon from the City University of Hong Kong, is published in Science Robotics.



Robots are well known for having consistency and precision that humans tend to lack. Robots are also well known for not being especially creative—depending I suppose on your definition of “creative.” Either way, roboticists have seized an opportunity to match the strengths of humans and robots while plastering over their respective weaknesses.

At CHI 2022, researchers from ETH Zurich presented an interactive robotic plastering system that lets artistic humans use augmented reality to create three-dimensional designs meant to be sprayed in plaster on bare walls by robotic arms.

Robotic fabrication is not a new idea. And there are lots of examples of robots building intricate structures, leveraging their penchant for precision and other robot qualities to place components in careful, detailed patterns that yield unique architectures. This algorithmic approach is certainly artistic on its own, but not quite as much as when humans are in the loop. Toss a human into the mix, and you get stuff like this:

I’m honestly not sure whether a human would be able to execute something with that level of complexity, but I’m fairly sure that if a human could do that, they wouldn’t be able to do it as quickly or as repeatably as the robot can. The beauty of this innovation (besides what ends up on the wall) is the way the software helps human designers be even more creative (or to formalize and express their creativity in novel ways), while offloading all of the physically difficult tasks to the machine. Seeing this—perhaps naively—I feel like I could jump right in there and design my own 3D wall art (which I would totally do, given the chance).

A variety of filter systems can translate human input to machine output in different styles.

And maybe that’s the broader idea here: that robots are able to slightly democratize some tasks that otherwise would require an impractical amount of experience and skill. In this example, it’s not that the robot would replace a human expert; the machine would let the human create plaster designs in a completely different way with completely different results from what human hands could generate unassisted. The robotic system is offering a new kind of interface that enables a new kind of art that wouldn’t be possible otherwise and that doesn’t require a specific kind of expertise. It’s not better or worse; it’s just a different approach to design and construction.

Future instantiations of this system will hopefully be easier to use; as a research project, it requires a lot of calibration and the hardware can be a bit of a hassle to manage. The researchers say they hope to improve the state of play significantly by making everything more self-contained and easier to access remotely. That will eliminate the need for designers to be on-site. While a system like this will likely never be cheap, I’m imagining a point at which you might be able to rent one for a couple of days for your own home, so you can add texture (and perhaps eventually color?) that will give you one-of-a-kind walls and rooms.

Interactive Robotic Plastering: Augmented Interactive Design and Fabrication for On-site Robotic Plastering, by Daniela Mitterberger, Selen Ercan Jenny, Lauren Vasey, Ena Lloret-Fritschi, Petrus Aejmelaeus-Lindström, Fabio Gramazio, and Matthias Kohler from ETH Zurich, was presented at CHI 2022.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, PHILADELPHIA
IEEE ARSO 2022: 28–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 27 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL

Enjoy today’s videos!

What a strange position for Boston Dynamics to be in, having to contend with the fact that its robots are at this point likely best known for dancing rather than for being useful in a more productivity-minded way:

Boston Dynamics is also announcing some upgrades for Spot:

[ Boston Dynamics ]

MIT CSAIL has developed a new way to rapidly design and fabricate soft pneumatic actuators with integrated sensing. Such actuators can be used as the backbone in a variety of applications such as assistive wearables, robotics, and rehabilitative technologies.

[ MIT ]

The Sechseläuten (“the six o’clock ringing of the bells”) is a traditional spring holiday in the Swiss city of Zurich, and this year, it had a slightly less traditional guest: ANYmal!

[ Swiss-Mile ]

Thanks, Marko!

Working in collaboration with domestic appliances manufacturer Beko, researchers from the University of Cambridge trained their robot chef to assess the saltiness of a dish at different stages of the chewing process, imitating a similar process in humans. Their results could be useful in the development of automated or semi-automated food preparation by helping robots to learn what tastes good and what doesn’t, making them better cooks.

[ Cambridge ]

More impressive work from the UZH Robotics and Perception Group, teaching racing quadrotors to adapt on the fly to a changing course:

[ RPG ]

In the SANDRo Project, funded by DIH-HERO, PAL Robotics and Heemskerk Innovation Technology are developing the TIAGo robot to provide assistive services to people with difficulties in the activities of daily living.

[ PAL Robotics ]

For drones to autonomously perform necessary but quotidian tasks, such as delivering packages or airlifting injured drivers from a traffic accident, drones must be able to adapt to wind conditions in real time—rolling with the punches, meteorologically speaking. To face this challenge, a team of engineers from Caltech has developed Neural-Fly, a deep-learning method that can help drones cope with new and unknown wind conditions in real time just by updating a few key parameters.

[ Caltech ]

On May 17th, the Furhat Conference on Social Robotics returns with a new lineup of experts who will share their latest cutting edge research and innovation projects using social robots and conversational AI. Since Furhat Robotics’ recent acquisition of Misty Robotics, a brand new face will make an appearance—the Misty robot! Registration for the conference is free and now open.

[ Furhat Conference ]

Thanks, Chris!

This is quite a contest: Draw your best idea for a robot inspired by nature, and if you win, a bunch of robotics experts will actually build it!

[ Natural Robotics Contest ]

Thanks, Robert!

Franka Production 3 is the force sensitive robot platform made in Germany, a system that ignites productivity for everyone who needs industrial robotics automation.

[ Franka ]

Thailand is equipping vocational students with robotics skills to cater to the anticipated demand for 200,000 robotics-trained workers by 2024. More and more factories are moving to Thailand, so education plays an important role in giving students Industry 4.0 knowledge.

[ Kuka ]

Dusty Robotics develops robot-powered tools for the modern construction workforce, using cutting-edge robotics technology that is built in-house from the ground up. Our engineers design the mechanical, electrical, firmware, robotics, and software components that power ultra-precise mobile printers. Hear from Dusty engineers about what it’s like to work at Dusty and the impact their work has—every day.

[ Dusty ]

One in three older adults falls every year, leading to a serious healthcare problem in the United States. A team of Stanford scholars is developing wearable robotics to help people restore their balance and prevent these falls. Karen Lu, associate professor of computer science, and Steve Collins, associate professor of mechanical engineering, explain how an intelligent exoskeleton could enhance people’s mobility.

[ Stanford HAI ]

The latest episode of the Robot Brains Podcast features Skydio CEO Adam Bry.

[ Robot Brains ]

This week’s CMU RI Seminar is by Ross L. Hatton from Oregon State, on “Snakes & Spiders, Robots & Geometry.”

[ CMU RI ]



This is a sponsored article brought to you by SICK Inc.

From advanced manufacturing to automated vehicles, engineers are using LiDAR to change the world as we know it. For the second year, students from across the country submitted projects to SICK's annual TiM$10K Challenge.


The first place team during the 2020 TiM$10K Challenge hails from Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team comprised undergraduate seniors Daniel Pelaez and Noah Budris and undergraduate junior Noah Parker.

With the help of their academic advisor, Dr. Alexander Wyglinski, Professor of Electrical Engineering and Robotics Engineering at WPI, the team took first place in the 2020 TiM$10K Challenge with their project titled ROADGNAR, a mobile and autonomous pavement quality data collection system.

So what is the TiM$10K Challenge?

In this challenge, SICK reached out to universities across the nation, looking to support innovation and student achievement in automation and technology. Participating teams were supplied with a SICK 270° LiDAR sensor (a TiM) and accessories. They were challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK scanner in any industry.

Around the United States, many of the nation's roadways are in poor condition, most often from potholes and cracks in the pavement, which can make driving difficult. Many local governments agree that infrastructure is in need of repair, but with a lack of high-quality data, inconsistencies in damage reporting, and an overall lack of adequate prioritization, this is a difficult problem to solve.



Pelaez, Parker, and Budris first came up with the idea of ROADGNAR before they had even learned of the TiM$10K Challenge. They noticed that the roads in their New England area were in poor condition and wanted to see if there was a way to improve how road maintenance is performed.

In their research, they learned that many local governments use outdated, manual processes: they send workers out to check for poor road conditions, and the workers log what they find in notebooks.

The team began working on a solution to help solve this problem. It was at a career fair that Pelaez met a SICK representative, who encouraged him to apply to the TiM$10K Challenge.

Win $10K and a Trip to Germany!

SICK is excited to announce the 2022-2023 edition of the SICK TiM$10K Challenge. Twenty teams will be selected to participate, and each will be supplied with a 270° SICK LiDAR sensor (a TiM) and accessories. The teams will be challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK LiDAR in any industry. The project can serve as a senior design or capstone project for the students involved.

Awards:

The three winning teams will receive cash awards:

• 1st place: $10K
• 2nd place: $5K
• 3rd place: $3K

In addition to bragging rights and the cash prize, the first-place team, along with its advising professor, will be offered an all-expenses-paid trip to Germany to visit SICK's headquarters and manufacturing facility!

Registration is now open for the academic year 2022-2023!


Using SICK's LiDAR technology, ROADGNAR takes a 3D scan of the road surface; that data is then used to determine the exact level of repair needed.

ROADGNAR collects detailed data on the surface of any roadway, while still allowing for easy integration onto any vehicle. With this automated system, road maintenance can become a faster, more reliable, and more efficient process for towns and cities around the country.
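
The article doesn't describe ROADGNAR's processing pipeline in detail, but the core idea of turning each LiDAR sweep into a road-quality measurement can be sketched roughly. The following Python snippet is a hypothetical illustration, not the team's actual code: the mounting height, defect threshold, and function names are all assumptions. It converts one downward-facing scan into a cross-section profile, fits a straight baseline, and flags points that dip well below it as possible cracks or potholes.

import numpy as np

def profile_from_scan(angles_deg, ranges_m, mount_height_m=0.5):
    # Convert one downward-facing 2-D LiDAR sweep (polar readings) into a road
    # cross-section: lateral offset x versus surface height z.
    # mount_height_m is an assumed sensor height above nominal pavement.
    a = np.radians(angles_deg)
    x = ranges_m * np.sin(a)                   # lateral position across the lane
    z = mount_height_m - ranges_m * np.cos(a)  # height relative to a flat road (0 = flat)
    return x, z

def roughness_and_defects(x, z, defect_threshold_m=0.02):
    # Fit a straight baseline to the cross-section, then measure how far the
    # surface deviates from it. Points more than defect_threshold_m below the
    # baseline are flagged as possible cracks or potholes.
    slope, intercept = np.polyfit(x, z, 1)
    residual = z - (slope * x + intercept)
    rms_roughness = float(np.sqrt(np.mean(residual ** 2)))
    defects = residual < -defect_threshold_m
    return rms_roughness, defects

# Synthetic example: a flat road seen from 0.5 m up, with a 3-cm-deep dip.
angles = np.linspace(-45, 45, 181)
ranges = 0.5 / np.cos(np.radians(angles))
ranges[85:95] += 0.03
x, z = profile_from_scan(angles, ranges)
rms, defects = roughness_and_defects(x, z)
print(f"RMS roughness: {rms * 1000:.1f} mm, flagged points: {int(defects.sum())}")

Stacking these per-sweep profiles along the vehicle's path, using the encoder and GPS data described below, is what turns individual scans into a map of road condition.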

ROADGNAR solves this problem through two avenues: hardware and software. The team designed two mounting brackets to connect the system to a vehicle. The first, located in the back of the vehicle, supports a LiDAR scanner. The second is fixed in line with the vehicle's axle and supports a wheel encoder, which is wired to the fuse box.

"It definitely took us a while to figure out a way to power ROADGNAR so we wouldn't have to worry about it shutting off while the car was in motion," said Parker.

Also wired to the fuse box is a GPS module within the vehicle itself. Data transfer wires are attached to these three systems and connected to a central processing unit within the vehicle.

Using LiDAR to collect road data

When the car is started, all of the connected devices turn on. The LiDAR scanner collects road-surface data, the wheel encoder tracks an accurate measurement of the distance traveled by the vehicle, and the GPS generates geo-tags on a constant basis. All of this data is stored in an onboard database and presented to the user on a monitor, then written to a hard drive.
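
The article describes the data flow (LiDAR sweeps, wheel-encoder distance, and GPS geo-tags written to an onboard database) but not the software itself. A minimal logging-loop sketch in Python might look like the following; the sensor-reading functions, database schema, and timing are stand-in assumptions for illustration, not ROADGNAR's actual implementation.

import json
import random
import sqlite3
import time

# Stand-in sensor readers. The real LiDAR, wheel-encoder, and GPS drivers are
# not described in the article, so these return plausible dummy values.
def read_lidar_scan():
    return [0.5 + random.uniform(-0.01, 0.01) for _ in range(271)]  # ranges in meters

def read_encoder_m(state={"d": 0.0}):
    state["d"] += 0.4  # pretend the vehicle moves ~0.4 m per logging cycle
    return state["d"]

def read_gps_fix():
    return 42.2746, -71.8063  # fixed dummy coordinates

def log_scans(db_path="roadgnar_demo.db", cycles=50, period_s=0.1):
    # Each cycle stores the latest sweep together with the odometer reading and
    # a GPS geo-tag, so every road profile can later be placed along the route.
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS scans "
               "(t REAL, distance_m REAL, lat REAL, lon REAL, ranges TEXT)")
    for _ in range(cycles):
        lat, lon = read_gps_fix()
        db.execute("INSERT INTO scans VALUES (?, ?, ?, ?, ?)",
                   (time.time(), read_encoder_m(), lat, lon,
                    json.dumps(read_lidar_scan())))
        db.commit()
        time.sleep(period_s)
    db.close()

if __name__ == "__main__":
    log_scans(cycles=10)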

Much like the roads in their Massachusetts town, the creation process of ROADGNAR was not without its challenges. The biggest problem took the form of the COVID-19 pandemic, which hit the ROADGNAR team in the middle of development. Once WPI closed to encourage its students and faculty to practice social distancing, the team was without a base of operations.

"When the coronavirus closed our school, we were lucky enough to live pretty close to each other," said Paleaz. "We took precautions, but were able to come together to test and power through to finish our project."



Integrating LiDAR into the car was also a challenge. Occasionally, the LiDAR would shut off when the car began moving. The team had to take several measures to keep the sensor online, often contacting SICK's help center for instruction.

"One of the major challenges was making sure we were getting enough data on a given road surface," said Budris. "At first we were worried that we wouldn't get enough data from the sensor to make ROADGNAR feasible, but we figured that if we drove at a slow and constant rate, we'd be able to get accurate scans."

With the challenge complete, Pelaez, Budris, and Parker are looking to turn ROADGNAR into a genuine product. They have already contacted an experienced business partner to help them determine their next steps.



They are now interviewing with representatives from various Departments of Public Works throughout Massachusetts and Connecticut. Thirteen municipalities have indicated that they would be extremely interested in using ROADGNAR, as it would drastically reduce the time needed to assess all the roads in their area. The trio is excited to see how different LiDAR sensors can help refine ROADGNAR into a viable product.

"We'd like to keep the connection going," explained Pelaez. "If we can keep the door open for a potential partnership between us and SICK, that'd be great."

SICK is now accepting entries for the TiM$10K Challenge for the 2022-2023 school year!

Student teams are encouraged to use their creativity and technical knowledge to apply the SICK LiDAR to any application in any industry. Advisors and professors are allowed to guide the student teams as needed.



