Feed aggregator


Researchers on WeBank’s AI Moonshot Team have taken a deep learning system developed to detect solar panel installations from satellite imagery and repurposed it to track China’s economic recovery from the novel coronavirus outbreak.

This, as far as the researchers know, is the first time big data and AI have been used to measure the impact of the new coronavirus on China, Haishan Wu, vice general manager of WeBank’s AI department, told IEEE Spectrum. WeBank is a private Chinese online banking company founded by Tencent.


The team used its neural network to analyze visible, near-infrared, and short-wave infrared images from various satellites, including the infrared bands from the Sentinel-2 satellite. This allowed the system to look for hot spots indicative of actual steel manufacturing inside a plant.  In the early days of the outbreak, this analysis showed that steel manufacturing had dropped to a low of 29 percent of capacity. But by 9 February, it had recovered to 76 percent.

The researchers then looked at other types of manufacturing and commercial activity using AI. One of the techniques was simply counting cars in large corporate parking lots. From that analysis, it appeared that, by 10 February, Tesla’s Shanghai car production had fully recovered, while tourism operations, like Shanghai Disneyland, remained shut down.

Images: WeBank

Moving beyond satellite data, the researchers took daily anonymized GPS data from several million mobile phone users in 2019 and 2020, and used AI to determine which of those users were commuters. The software then counted the number of commuters in each city, and compared the count on a given day in 2019 with the corresponding date in 2020, starting with Chinese New Year. In both years, Chinese New Year saw a huge dip in commuting, but unlike in 2019, the number of people going to work in 2020 didn’t bounce back after the holiday. While things picked up slowly, the WeBank researchers calculated that by 10 March 2020, about 75 percent of the workforce had returned to work.
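As a rough illustration of that comparison, the sketch below aligns the two years by days elapsed since each Chinese New Year and computes a simple 2020-to-2019 ratio of commuter counts. The column names and input format are assumptions made for the example; WeBank’s actual pipeline is not public.

```python
import pandas as pd

# Chinese New Year fell on 5 February in 2019 and on 25 January in 2020.
CNY = {2019: pd.Timestamp("2019-02-05"), 2020: pd.Timestamp("2020-01-25")}

def recovery_curve(counts: pd.DataFrame) -> pd.Series:
    """counts: one row per day with columns 'date' (datetime64) and 'commuters' (int)."""
    counts = counts.copy()
    counts["year"] = counts["date"].dt.year
    # Align the two years by days elapsed since that year's Chinese New Year.
    counts["offset"] = counts.apply(lambda r: (r["date"] - CNY[r["year"]]).days, axis=1)
    by_year = counts.pivot_table(index="offset", columns="year", values="commuters")
    # Fraction of 2019's commuters seen on the same post-holiday day in 2020.
    return by_year[2020] / by_year[2019]

# A value near 0.75 at offset 45 (10 March 2020 is 45 days after that year's
# holiday) would match the reported ~75 percent of the workforce back at work.
```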

Projecting out from these curves, the researchers concluded that most Chinese workers, with the exception of those in Wuhan, would be back to work by the end of March. Economic growth in the first quarter, their study indicated, would take a 36 percent hit.

Finally, the team used natural language processing technology to mine Twitter-like services and other social media platforms for mentions of companies that provide online working, gaming, education, streaming video, social networking, e-commerce, and express delivery services. According to this analysis, telecommuting for work is booming, up 537 percent from the first day of 2020; online education is up 169 percent; gaming is up 124 percent; video streaming is up 55 percent; social networking is up 47 percent. Meanwhile,  e-commerce is flat, and express delivery is down a little less than 1 percent. The analysis of China’s social media activity also yielded the prediction that the Chinese economy will be mostly back to normal by the end of March.
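The growth figures above are straightforward percentage changes against a baseline on the first day of 2020. A minimal sketch of that arithmetic, using placeholder mention counts rather than WeBank’s data:

```python
# Percent change in daily mentions of each service category relative to
# 1 January 2020. The counts below are placeholders, not WeBank's data.
def percent_change(baseline: int, current: int) -> float:
    return (current - baseline) / baseline * 100.0

mentions_jan1 = {"telecommuting": 1_000, "online_education": 1_000}
mentions_now = {"telecommuting": 6_370, "online_education": 2_690}

for category, baseline in mentions_jan1.items():
    print(category, f"{percent_change(baseline, mentions_now[category]):+.0f}%")
# telecommuting +537%, online_education +169% (matching the figures above)
```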

A lot of people in the auto industry talked for way too long about the imminent advent of fully self-driving cars. 

In 2013, Carlos Ghosn, now very much the ex-chairman of Nissan, said it would happen in seven years. In 2016, Elon Musk, then chairman of Tesla, implied his cars could basically do it already. In 2017, and right through early 2019, GM Cruise was targeting 2019. And Waymo, the company with the most to show for its efforts so far, is speaking in more measured terms than it did just a year or two ago.

It’s all making Gill Pratt, CEO of the Toyota Research Institute in California, look rather prescient. A veteran roboticist who joined Toyota in 2015 with the task of developing robocars, Pratt from the beginning emphasized just how hard the task would be and how important it was to aim for intermediate goals—notably by making a car that could help drivers now, not merely replace them at some distant date.

That helpmate, called Guardian, is set to use a range of active safety features to coach a driver and, in the worst cases, to save drivers from their own mistakes. The more ambitious Chauffeur will one day really drive itself, though in a constrained operating environment. The constraints on the current iteration will be revealed at the first demonstration at this year’s Olympic Games in Tokyo; they will certainly involve limits to how far afield and how fast the car may go.

Earlier this week, at TRI’s office in Palo Alto, Calif., Pratt and his colleagues gave Spectrum a walkaround look at the latest version of the Chauffeur, the P4; it’s a Lexus with a package of sensors neatly merged into the roof. Inside are two lidars from Luminar, a stereo camera, a mono camera (just to zero in on traffic signs), and radar. At the car’s front and corners are small Velodyne lidars, hidden behind a grille or folded smoothly into small protuberances. Nothing more could be glimpsed, not even the electronics that no doubt filled the trunk.

Pratt and his colleagues had a lot to say on the promises and pitfalls of self-driving technology. The easiest to excerpt is their view on the difficulty of the problem.

“There isn’t anything that’s telling us it can’t be done; I should be very clear on that,” Pratt says. “Just because we don’t know how to do it doesn’t mean it can’t be done.”

That said, though, he notes that early successes (using deep neural networks to process vast amounts of data) led researchers to optimism. In describing that optimism, he does not object to the phrase “irrational exuberance,” made famous during the 1990s dot-com bubble.

It turned out that the early successes came in those fields where deep learning, as it’s known, was most effective, like artificial vision and other aspects of perception. Computers, long held to be particularly bad at pattern recognition, were suddenly shown to be particularly good at it—even better, in some cases, than human beings. 

“The irrational exuberance came from looking  at the slope of the [graph] and seeing the seemingly miraculous improvement deep learning had given us,” Pratt says. “Everyone was surprised, including the people who developed it, that suddenly, if you threw enough data and enough computing at it, the performance would get so good. It was then easy to say that because we were surprised just now, it must mean we’re going to continue to be surprised in the next couple of years.”

The mindset was one of permanent revolution: The difficult, we do immediately; the impossible just takes a little longer. 

Then came the slow realization that AI not only had to perceive the world—a nontrivial problem, even now—but also to make predictions, typically about human behavior. That problem is more than nontrivial. It is nearly intractable. 

Of course, you can always use deep learning to do whatever it does best, and then use expert systems to handle the rest. Such systems use logical rules, input by actual experts, to handle whatever problems come up. That method also enables engineers to tweak the system—an option that the black box of deep learning doesn’t allow.

Putting deep learning and expert systems together does help, says Pratt. “But not nearly enough.”

Day-to-day improvements will continue no matter what new tools become available to AI researchers, says Wolfram Burgard, Toyota’s vice president for automated driving technology. 

“We are now in the age of deep learning,” he says. “We don’t know what will come after—it could be a rebirth of an old technology that suddenly outperforms what we saw before. We are still in a phase where we are making progress with existing techniques, but the gradient isn’t as steep as it was a few years ago. It is getting more difficult.”



A new sensor for robots is designed to make our physical interactions with these machines a little smoother—and safer. The sensor, which is now being commercialized, allows robots to measure the distance and angle of approach of a human or object in close proximity.

Industrial robots often work autonomously to complete tasks. But increasingly, collaborative robots are working alongside humans. To avoid collisions in these circumstances, collaborative robots need highly accurate sensors to detect when someone (or something) is getting a little too close.

Many sensors have been developed for this purpose, each with its own advantages and disadvantages. Those that rely on sound and light (for example, infrared or ultrasonic time-of-flight sensors) measure the reflections of those signals and must therefore be closely aligned with the approaching object, which limits their field of detection.

Photos: Aidin Robotics

To circumvent this problem, a group of researchers in South Korea created a new proximity sensor that measures impedance. It works by inducing electric and magnetic fields over a wide angle. When a human approaches the sensor, their body changes the impedance within those fields. The sensor measures those changes and uses the data to tell the robot the person’s distance and angle of approach. The researchers describe their design in a study published 26 February in IEEE Transactions on Industrial Electronics. It has since been commercialized by Aidin Robotics.
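The paper’s signal-processing chain isn’t reproduced here, but the basic idea of turning field disturbances into geometry can be sketched simply. Everything below, including the calibration points, the interpolation, and the two-electrode angle heuristic, is a hypothetical illustration, not the authors’ algorithm:

```python
import math

# Assumed calibration for one electrode on a flat mounting surface:
# (distance in cm, impedance shift in ohms). Values are made up.
CALIBRATION = [(5.0, 120.0), (10.0, 60.0), (20.0, 25.0), (30.0, 10.0)]

def distance_from_shift(shift: float) -> float:
    """Estimate distance by piecewise-linear interpolation of the calibration table."""
    pts = sorted(CALIBRATION, key=lambda p: p[1])  # ascending impedance shift
    for (d_far, s_lo), (d_near, s_hi) in zip(pts, pts[1:]):
        if s_lo <= shift <= s_hi:
            t = (shift - s_lo) / (s_hi - s_lo)
            return d_far + t * (d_near - d_far)
    # Below the smallest calibrated shift: nothing nearby; above the largest: very close.
    return float("inf") if shift < pts[0][1] else pts[-1][0]

def approach_angle_deg(shift_left: float, shift_right: float) -> float:
    """Crude bearing estimate from the imbalance between two adjacent electrodes."""
    return math.degrees(math.atan2(shift_right - shift_left, shift_right + shift_left))

# Example: a 60-ohm shift maps back to roughly 10 cm, slightly to the right.
print(distance_from_shift(60.0), approach_angle_deg(40.0, 60.0))
```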


The sensor is made of electrodes with a flexible, coil-like design. “Since the sensor is highly flexible, it can be manufactured in various shapes tailored to the geometries of the robot,” explains Yoon Haeng Lee, CEO of Aidin Robotics. “Moreover, it is able to classify the materials of the approaching objects such as human, metals, and plastics.”

Tests show that the sensor can detect humans from up to 30 centimeters away. It has an accuracy of 90 percent when on a flat surface. However, the electric and magnetic fields become weaker and more dispersed when the sensor is laid over a curved surface. Therefore, the sensor’s accuracy decreases as the underlying surface becomes increasingly curved.

Every robot is different, and the sensor’s performance may change based on a specific robot’s characteristics. The latest version of the integrated sensor module, when installed on a curved surface, can detect objects from up to 20 centimeters away with an accuracy of 94 percent.

Lee says the device is already being used in some collaborative robot models, including the UR10 (by Universal Robots) and Indy7 (by Neuromeka Inc.). “In the future, the sensor module will be mass-produced and applied to the other service robots, as well as collaborative and industrial robots, to contribute to the truly safe work and coexistence of robots and humans,” he says.


Dr. Arthur Kreitenberg and his son Elliot got some strange looks when they began the design work for the GermFalcon, a new machine that uses ultraviolet light to wipe out coronavirus and other germs inside an airplane. The father-son founders of Dimer UVC took tape measures with them on flights to unobtrusively record the distances that would form the key design constraints for their system.

“We definitely got lots of looks from passengers and lots of inquiries from flight attendants,” Dr. Kreitenberg recalls. “You can imagine that would cause some attention: taking out a tape measure midflight and measuring armrests. The truth is that when we explained to the flight attendants what we were doing and what we were designing, they [were] really excited about it.”

Perhaps that shouldn’t be surprising. In these days of coronavirus concerns, airline attendants work in what must seem like an aluminum-encased biohazard site.

Image: Dimer UVC

GermFalcon uses a set of mercury lamps to bathe the airline cabin, bathrooms, and galley in ultraviolet-C light. Unlike UV-A and UV-B, that 200- to 280-nanometer band doesn’t reach the surface of the Earth from the sun, because it’s strongly absorbed by oxygen and ozone in the atmosphere. And that’s a good thing, because it’s like kryptonite to DNA. Drawing 100 amperes from a lithium-iron-phosphate battery pack, GermFalcon’s mercury lamps are powerful enough that the company claims the system can wipe out flu viruses from an entire narrow-body plane in about three minutes: one pass up the aisle, one pass down the aisle, and a minute for the bathrooms and galley.

Flu prevention was the original inspiration for GermFalcon. Dr. Arthur Kreitenberg, an orthopaedic surgeon with a background in mechanical engineering, was already familiar with UV-C sterilization, because of its use in operating rooms. “Our motivation was to take it outside of the hospital into other areas where people are concerned about germs,” he says. With SARS and MERS and annual influenza, it seemed clear that airplanes are a major mode of transmission. It was also clear that nobody was effectively disinfecting aircraft.

Many of the chemicals you’d use in a hospital are not approved for use on an aircraft, Kreitenberg points out. And some of the ones that are, aren’t nearly as effective or practical as assumed. (Stop for a minute and look at the actual directions for disinfecting a surface with a Lysol Wipe, then try to imagine doing that on a plane. Go ahead. I’ll wait.)

Photo: Dimer UVC

The key design constraints for bringing UV-C sterilization into air travel were geometry, time, and power. The Kreitenbergs needed to know how much room their system had to move up and down the aisles without bashing into seats, armrests, restroom doors, and overhead bins. They also needed to know what surfaces were the most germ-ridden (the top of the seat back, as you might expect), something they discovered by swabbing surfaces on about a dozen flights. And from those data points, they had to figure out the proper power and position of the UV lamps that would allow them to sterilize an aircraft in a matter of minutes. “Time is a big constraint as well. The airlines want us on and off the airplane as quick as possible,” he says.

“I wish I could tell you we solved it all mathematically,” says Kreitenberg. “But the truth is we went out to the airplane graveyard in Mojave, California and bought a couple rows of airplane seats and overhead bins, put [UV] meters on them, smeared them with bacteria, and did cultures.”

It took four or five iterations to get it right. “It turns out there are a lot of different airplane configurations,” he says. 

Initially, the pair envisioned GermFalcon as a robot, but that made the design challenges multiply. “Robotics are easier said than done, even just going up and down an airplane,” he says. Sensors weren’t hardy enough and needed frequent recalibration, and the motor drives were heavy and energy consuming. The robotics consumed about a year of their development time before they decided to abandon that path in favor of a human protected by shielding.

Photo: Dimer UVC

Lacking a suitable lab for such a dangerous germ, Dimer UVC hasn’t tested the system on the virus that causes COVID-19. But Kreitenberg expects it to be as susceptible to UV-C as influenza viruses and other germs are. The dose can easily be adjusted by slowing GermFalcon’s roll down the aisle. The company has offered GermFalcon's services free of charge to airlines operating from a handful of U.S. airports.
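That last point follows directly from how UV dose is defined: dose is irradiance multiplied by exposure time, and the exposure time any given seat receives scales inversely with how fast the cart rolls past it. The numbers in the sketch below are placeholders, not GermFalcon’s specifications:

```python
# Back-of-the-envelope sketch of the "slow down to raise the dose" point above.
# Irradiance and geometry values are placeholders, not GermFalcon specs.
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2: float,
                       illuminated_length_m: float,
                       speed_m_per_s: float) -> float:
    """UV-C dose a fixed point on a surface receives as the lamp cart passes.

    dose (mJ/cm^2) = irradiance (mW/cm^2) * exposure time (s),
    where exposure time = illuminated length along the aisle / cart speed.
    """
    exposure_s = illuminated_length_m / speed_m_per_s
    return irradiance_mw_per_cm2 * exposure_s

# Halving the speed doubles the exposure time, and therefore the dose:
print(uv_dose_mj_per_cm2(10.0, 0.5, 0.5))   # 10.0 mJ/cm^2
print(uv_dose_mj_per_cm2(10.0, 0.5, 0.25))  # 20.0 mJ/cm^2
```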

While Dimer UVC waits for airlines to take up its offer, it’s gotten involved in another attempt to robotize aerospace interiors. The company is part of a team building a UV-C sterilization robot for the International Space Station. “It’ll basically work like a Roomba and skim the surface of the space station,” says Kreitenberg, a former finalist astronaut candidate.

Because it can get so close to the station’s surfaces, the zero-G death-ray Roomba the team is working on can use UV-C LEDs instead of the power-hungry mercury lamps of GermFalcon. Kreitenberg says he would be much happier using LEDs, if they could reach the needed power. “All of our power constraints and a lot of other constraints will be solved when there is an effective UV-C LED,” he says. Looking at the progress companies have made in that area over the last five years, he’s “optimistic” that GermFalcon will be able to switch to using only LEDs.


Swarms of small, inexpensive robots are a compelling research area in robotics. With a swarm, you can often accomplish tasks that would be impractical (or impossible) for larger robots to do, in a way that’s much more resilient and cost effective than larger robots could ever be.

The tricky thing is getting a swarm of robots to work together to do what you want them to do, especially if what you want them to do is a task that’s complicated or highly structured. It’s not too bad if you have some kind of controller that can see all the robots at once and tell them where to go, but that’s a luxury that you’re not likely to find outside of a robotics lab.

Researchers at Northwestern University, in Evanston, have been working on a way to provide decentralized control for a swarm of 100 identically programmed small robots, which allows them to collectively work out a way to transition from one shape to another without running into each other even a little bit.

The process that the robots use to figure out where to go seems like it should be mostly straightforward: They’re given a shape to form, so each robot picks its goal location (where it wants to end up as part of the shape), and then plans a path to get from where it is to where it needs to go, following a grid pattern to make things a little easier. But using this method, you immediately run into two problems: First, since there’s no central control, you may end up with two (or more) robots with the same goal; and second, there’s no way for any single robot to path plan all the way to its goal in a way that it can be certain won’t run into another robot.

To solve these problems, the robots are all talking to each other as they move, not just to avoid colliding with their friends, but also to figure out where their friends are going and whether it might be worth swapping destinations. Since the robots are all the same, they don’t really care where exactly they end up, as long as all of the goal positions are filled up. And if one robot talks to another robot and they agree that a goal swap would result in both of them having to move less, they go ahead and swap. The algorithm makes sure that all goal positions are filled eventually, and also helps robots avoid running into each other through judicious use of a “wait” command.
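Here is a toy, centralized sketch of that pairwise swapping rule, using Manhattan distances since the robots move on a grid. It only illustrates the idea in the paragraph above; it is not the published algorithm, and it leaves out the distributed messaging, the grid motion itself, the “wait” command, and the correctness proofs:

```python
# Toy sketch of pairwise goal swapping: any two robots that would collectively
# travel less by trading goal claims do so, until no swap helps.
from itertools import combinations

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def swap_goals(positions, goals):
    goals = list(goals)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(positions)), 2):
            cost_now = manhattan(positions[i], goals[i]) + manhattan(positions[j], goals[j])
            cost_swapped = manhattan(positions[i], goals[j]) + manhattan(positions[j], goals[i])
            if cost_swapped < cost_now:
                goals[i], goals[j] = goals[j], goals[i]
                improved = True
    return goals

# Example: two robots that claimed each other's "natural" goals end up swapping.
print(swap_goals([(0, 0), (5, 5)], [(5, 4), (0, 1)]))  # [(0, 1), (5, 4)]
```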

What’s novel about this approach is that despite the fully distributed nature of the algorithm, it’s also provably correct, and will result in the guaranteed formation of an entire shape without collisions or deadlocks. As far as the researchers know, it’s the first algorithm to do this. And because it works with no centralized control at all, you can think of “the swarm” as a sort of Borg-like collective entity of its own, which is pretty cool.

The Northwestern researchers behind this are Michael Rubenstein, assistant professor of electrical engineering and computer science, and his PhD student Hanlin Wang. You might remember Mike from his work on Kilobots at Harvard, which we wrote about in 2011, 2013, and again in 2014, when Mike and his fellow researchers managed to put together a thousand (!) of them. As awesome as it is to have a thousand robots, once you start thinking about what it takes to charge, fix, and modify a thousand robots (a thousand robots!), it makes sense why they’ve updated the platform a bit (now called Coachbot) and reduced the swarm size to 100 physical robots, making up the rest in simulation.

These robots, we’re told, are “much better behaved.”

Image: Northwestern University

The hardware used by the researchers in their experiments. 1. The Coachbot V2.0 mobile robots (height of 12 cm and a diameter of 10 cm) are equipped with a localization system based on the HTC Vive (a), Raspberry Pi b+ computer (b), electronics motherboard (c), and rechargeable battery (d). The robot arena used in experiments has an overhead camera only used for recording videos (e) and an overhead HTC Vive base station (f). The experiments relied on a swarm of 100 robots (g). 2. The Coachbot V2.0 swarm communication network consists of an ethernet connection between the base station and a Wi-Fi router (green link), TCP/IP connections (blue links), and layer 2 broadcasting connections (black links). 3. A swarm of 100 robots. 4. The robots recharge their batteries by connecting to two metal strips attached to the wall.

For more details on this work, we spoke with Mike Rubenstein via email.

IEEE Spectrum: Why switch to the new hardware platform instead of Kilobots?

Mike Rubenstein: We wanted to make a platform more capable and extendable than Kilobot, and improve on lessons learned with Kilobot. These robots have far better locomotion capabilities than Kilobot, and include absolute position sensing, which makes operating the robots easier. They have truly “hands free” operations. For example, with Kilobot, to start an experiment you had to place the robots in their starting positions by hand (sometimes taking an hour or two), while with these robots, a user just specifies a set of positions for all the robots and presses the “go” button. With Kilobot it was also hard to see what the state of all the robots was; for example, it was difficult to see whether 999 robots or 1,000 robots were powered on. These new robots send state information back to a user display, making it easy to understand the full state of the swarm.
 
How much of a constraint is grid-ifying the goal points and motion planning?

The grid constraint obviously makes motion less efficient as they must move in Manhattan-type paths, not straight line paths, so most of the time they move a bit farther. The reason we constrain the motions to move in a discrete grid is that it makes the robot algorithm less computationally complex and reasoning about collisions and deadlock becomes a lot easier, which allowed us to provide guarantees that the shape will form successfully. 

Image: Northwestern University

Still images of a 100 robot shape formation experiment. The robots start in a random configuration, and move to form the desired “N” shape. Once this shape is formed, they then form the shape “U.” The entire sequence is fully autonomous. (a) T = 0 s; (b) T = 20 s; (c) T = 64 s; (d) T = 72 s; (e)  T = 80 s; (f) T = 112 s.

Can you tell us about those couple of lonely wandering robots at the end of the simulated “N” formation in the video?

In our algorithm, we don’t assign goal locations to all the robots at the start; they have to figure out on their own which robot goes where. The last few robots you pointed out happened to be far away from the goal locations the swarm figured they should have. Instead of having those robots move around the whole shape to their goals, you see a subset of robots all shift over by one to make room for them in the shape closer to their current positions.
 
What are some examples of ways in which this research could be applied to real-world useful swarms of robots?

One example could be the shape formation in modular self-reconfigurable robots. The hope is that this shape formation algorithm could allow these self-reconfigurable systems to automatically change their shape in a simple and reliable way. Another example could be warehouse robots, where robots need to move to assigned goals to pick up items. This algorithm would help them move quickly and reliably.
 
What are you working on next?

I’m looking at trying to understand how to enable large groups of simple individuals to behave in a controlled and reliable way as a group. I’ve started looking at this question in a wide range of settings: from swarms of ground robots, to reconfigurable robots that attach together by melting conductive plastic, to swarms of flying vehicles, to satellite swarms.

“Shape Formation in Homogeneous Swarms Using Local Task Swapping,” by Hanlin Wang and Michael Rubenstein of Northwestern, is published in IEEE Transactions on Robotics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

NASA Curiosity Project Scientist Ashwin Vasavada guides this tour of the rover’s view of the Martian surface. Composed of more than 1,000 images and carefully assembled over the ensuing months, the larger version of this composite contains nearly 1.8 billion pixels of Martian landscape.

This panorama showcases "Glen Torridon," a region on the side of Mount Sharp that Curiosity is exploring. The panorama was taken between Nov. 24 and Dec. 1, 2019, when the Curiosity team was out for the Thanksgiving holiday. Since the rover would be sitting still with few other tasks to do while it waited for the team to return and provide its next commands, the rover had a rare chance to image its surroundings several days in a row without moving.

[ MSL ]

Sarcos has been making progress with its Guardian XO powered exoskeleton, which we got to see late last year in prototype stage:

The Sarcos Guardian XO full-body, powered exoskeleton is a first-of-its-kind wearable robot that enhances human productivity while keeping workers safe from strain or injury. Set to transform the way work gets done, the Guardian XO exoskeleton augments operator strength without restricting freedom of movement to boost productivity while dramatically reducing injuries.

[ Sarcos ]

Professor Hooman Samani, director of the Artificial Intelligence and Robotics Technology Laboratory (AIART Lab) at National Taipei University, Taiwan, writes in to share some ideas on how robots could be used to fight the coronavirus outbreak. 

Time is a critical issue when dealing with people affected by coronavirus. Also, due to the current emergency, doctors could be far away from the patients. Additionally, avoiding direct contact with an infected person is a medical priority. Immediate monitoring and treatment using specific kits must be administered to the victim. We have designed and developed the Ambulance Robot (AmbuBot), which could be a solution to address those issues. AmbuBot could be placed in various locations, especially in busy, remote, or quarantine areas, to assist in the above-mentioned scenario. The AmbuBot also brings along an AED in a sudden event of cardiac arrest and facilitates various modes of operation, from manual to semi-autonomous to autonomous functioning.

[ AIART Lab ]

IEEE Spectrum is interested in exploring how robotics and related technologies can help to fight the coronavirus (COVID-19) outbreak. If you are involved with actual deployments of robots to hospitals and high-risk areas or have experience working with robots, drones, or other autonomous systems designed for this kind of emergency, please contact IEEE Spectrum senior editor Erico Guizzo (e.guizzo@ieee.org).

Digit is launching later this month alongside a brand new sim that’s a 1:1 match to both the API and physics of the actual robot. Here, we show off the ability to train a learned policy against the validated physics of the robot. We have a LOT more to say about RL with real hardware... stay tuned.

Staying tuned!

[ Agility Robotics ]

This video presents simulations and experiments highlighting the functioning of the proposed Trapezium Line Theta* planner, as well as its improvements over our previous work namely the Obstacle Negotiating A* planner. First, we briefly present a comparison of our previous and new planners. We then show two simulations. The first shows the robot traversing an inclined corridor to reach a goal near the low-lying obstacle. This demonstrates the omnidirectional and any-angle motion planning improvement achieved by the new planner, as well as the independent planning for the front and back wheel pairs. The second simulation further demonstrates the key improvements mentioned above by having the robot traverse tight right-angled corridors. Finally, we present two real experiments on the CENTAURO robot. In the first experiment, the robot has to traverse into a narrow passage and then expand over a low lying obstacle. The second experiment has the robot first expand over a wide obstacle and then move into a narrow passage.

To be presented at ICRA 2020.

[ Dimitrios Kanoulas ]

We’re contractually obligated to post any video with “adverse events” in the title.

[ JHU ]

Waymo advertises their self-driving system in this animated video that features a robot car making a right turn without indicating. Also pretty sure that it ends up in the wrong lane for a little bit after a super wide turn and blocks a crosswalk to pick up a passenger. Oops!

I’d still ride in one, though.

[ Waymo ]

Exyn is building the world’s most advanced, autonomous aerial robots. Today, we launched our latest capability, Scoutonomy. Our pilotless robot can now ‘scout’ freely within a desired volume, such as a tunnel, or this parking garage. The robot sees the white boxes as ‘unknown’ space, and flies to explore them. The orange boxes are mapped obstacles. It also intelligently avoids obstacles in its path and identifies objects, such as people or cars. Scoutonomy can be used to safely and quickly find survivors in natural or man-made disasters.

[ Exyn ]

I don’t know what soma blocks are, but this robot is better with them than I am.

This work presents a planner that can automatically find an optimal assembly sequence for a dual-arm robot to assemble the soma blocks. The planner uses the mesh model of objects and the final state of the assembly to generate all possible assembly sequences and evaluate the optimal assembly sequence by considering the stability, graspability, and assemblability, as well as the need for a second arm. In particular, the need for a second arm is considered when supports from worktables and other workpieces are not enough to produce a stable assembly.

[ Harada Lab ]

Semantic grasping is the problem of selecting stable grasps that are functionally suitable for specific object manipulation tasks. In order for robots to effectively perform object manipulation, a broad sense of contexts, including object and task constraints, needs to be accounted for. We introduce the Context-Aware Grasping Engine, which combines a novel semantic representation of grasp contexts with a neural network structure based on the Wide & Deep model, capable of capturing complex reasoning patterns. We quantitatively validate our approach against three prior methods on a novel dataset consisting of 14,000 semantic grasps for 44 objects, 7 tasks, and 6 different object states. Our approach outperformed all baselines by statistically significant margins, producing new insights into the importance of balancing memorization and generalization of contexts for semantic grasping. We further demonstrate the effectiveness of our approach on robot experiments in which the presented model successfully achieved 31 of 32 suitable grasps.

[ RAIL Lab ]

I’m not totally convinced that bathroom cleaning is an ideal job for autonomous robots at this point, just because of the unstructured nature of a messy bathroom (if not of the bathroom itself). But this startup is giving it a shot anyway.

The cost target is $1,000 per month.

[ Somatic ] via [ TechCrunch ]

IHMC is designing, building, and testing a mobility assistance research device named Quix. The main function of Quix is to restore mobility to those stricken with lower limb paralysis. In order to achieve this the device has motors at the pelvis, hips, knees, and ankles and an onboard computer controlling the motors and various sensors incorporated into the system.

[ IHMC ]

In this major advance for mind-controlled prosthetics, U-M research led by Paul Cederna and Cindy Chestek demonstrates an ultra-precise prosthetic interface technology that taps faint latent signals from nerves in the arm and amplifies them to enable real-time, intuitive, finger-level control of a robotic hand.

[ University of Michigan ]

Coral reefs represent only 1% of the seafloor, but are home to more than 25% of all marine life. Reefs are declining worldwide. Yet, critical information remains unknown about basic biological, ecological, and chemical processes that sustain coral reefs because of the challenges to access their narrow crevices and passageways. A robot that grows through its environment would be well suited to this challenge as there is no relative motion between the exterior of the robot and its surroundings. We design and develop a soft growing robot that operates underwater and take a step towards navigating the complex terrain of a coral reef.

[ UCSD ]

What goes on inside those package lockers, apparently.

[ Dorabot ]

In the future robots could track the progress of construction projects. As part of the MEMMO H2020 project, we recently carried out an autonomous inspection of the Costain High Speed Rail site in London with our ANYmal robot, in collaboration with Edinburgh Robotics.

[ ORI ]

Soft Robotics technology enables seafood handling at high speed even with amorphous products like mussels, crab legs, and lobster tails.

[ Soft Robotics ]

Pepper and Nao had a busy 2019:

[ SoftBank Robotics ]

Chris Atkeson, a professor at the Robotics Institute at Carnegie Mellon University, watches a variety of scenes featuring robots from movies and television and breaks down how accurate their depictions really are. Would the Terminator actually have dialogue options? Are the "three laws" from I, Robot a real thing? Is it actually hard to erase a robot’s memory (a la Westworld)?

[ Chris Atkeson ] via [ Wired ]

This week’s CMU RI Seminar comes from Anca Dragan at UC Berkeley, on “Optimizing for Coordination With People.”

From autonomous cars to quadrotors to mobile manipulators, robots need to co-exist and even collaborate with humans. In this talk, we will explore how our formalism for decision making needs to change to account for this interaction, and dig our heels into the subtleties of modeling human behavior — sometimes strategic, often irrational, and nearly always influenceable. Towards the end, I’ll try to convince you that every robotics task is actually a human-robot interaction task (its specification lies with a human!) and how this view has shaped our more recent work.

[ CMU RI ]

In this article we investigate the role of interactive haptic-enabled tangible robots in supporting the learning of cursive letter writing for children with attention and visuomotor coordination issues. We focus on the two principal aspects of handwriting that are linked to these issues: Visual perception and visuomotor coordination. These aspects, respectively, enhance two features of letter representation in the learner's mind in particular, namely the shape (grapheme) and the dynamics (ductus) of the letter, which constitute the central learning goals in our activity. Building upon an initial design tested with 17 healthy children in a primary school, we iteratively ported the activity to an occupational therapy context in 2 different therapy centers, in the context of 3 different summer school camps involving a total of 12 children having writing difficulties. The various iterations allowed us to uncover insights about the design of robot-enhanced writing activities for special education, specifically highlighting the importance of ease of modification of the duration of an activity as well as of adaptable frequency, content, flow and game-play and of providing a range of evaluation test alternatives. Results show that the use of robot-assisted handwriting activities could have a positive impact on the learning of the representation of letters in the context of occupational therapy (V = 1,449, p < 0.001, r = 0.42). Results also highlight how the design changes made across the iterations affected the outcomes of the handwriting sessions, such as the evaluation of the performances, monitoring of the performances, and the connectedness of the handwriting.

When the group of high schoolers arrived for the coding camp, the idea of spending the day staring at a computer screen didn’t seem too exciting to them. But then Pepper rolled into the room.

“All of a sudden everyone wanted to become a robot coder,” says Kass Dawson, head of marketing and business strategy at SoftBank Robotics America, in San Francisco. He saw the same thing happen in other classrooms, where the friendly humanoid was an instant hit with students.

“What we realized very quickly was, we need to take advantage of the fact that this robot can get kids excited about computer science,” Dawson says.

Today SoftBank is launching Tethys, a visual programming tool designed to teach students how to code by creating applications for Pepper. The company is hoping that its humanoid robot, which has been deployed in homes, retail stores, and research labs, can also play a role in schools, helping to foster the next generation of engineers and roboticists.

Tethys is based on an intuitive, graphical approach to coding. To create a program, you drag boxes (representing different robot behaviors) on the screen and connect them with wires. You can run your program instantly on a Pepper to see how it works. You can also run it on a virtual robot on the screen.

As part of a pilot program, more than 1,000 students in about 20 public schools in Boston, San Francisco, and Vancouver, Canada, are already using the tool. SoftBank plans to continue expanding to more locations. (Educators interested in bringing Tethys and Pepper to their schools should reach out to the company by email.)

Bringing robots to the classroom

The idea of using robots to teach coding, logic, and problem-solving skills is not new (in fact, in the United States it goes back nearly half a century). Lego robotics kits like Mindstorms, Boost, and WeDo are widely used in STEM education today. Other popular robots and kits include Dash and Dot, Cubelets, Sphero, VEX, Parallax, and Ozobot. Last year, iRobot acquired Root, a robotics education startup founded by Harvard researchers.

Photo: SoftBank Robotics Using the Tethys visual programming tool, students can program Pepper to move, gesticulate, talk, and display graphics on its tablet. They can run their programs on a real robot or a virtual one on their computers.

So SoftBank is entering a crowded market, although one that has a lot of growth potential. And to be fair, SoftBank is not entirely new to the educational space—its experience goes back to the acquisition of French company Aldebaran Robotics, whose Nao humanoid has long been used in classrooms. Pepper, also originally developed by Aldebaran, is Nao’s newer, bigger sibling, and it, too, has been used in classrooms before.

Pepper’s size is probably one of its main advantages over the competition. It’s a 1.2-meter tall humanoid that can move around a room, dance, and have conversations and play games with people—not just a small wheeled robot beeping and driving on a tabletop.

On the other hand, Pepper’s size also means it costs several times as much as those other robots. That’s a challenge if SoftBank wants to get lots of them out to schools, which may not be able to afford them. So far the company has addressed the issue by donating Peppers—over 100 robots in the past two years.

How Tethys works

When SoftBank first took Pepper to classrooms, it discovered that the robot’s original software development platform, called Choregraphe, wasn’t designed as an educational tool. It was hard for non-engineers to use, and it was glitchy. SoftBank then partnered with Finger Food Advanced Technology Group, a Vancouver-based software company, to develop Tethys.

Image: SoftBank Robotics While Tethys is based on a visual programming environment, students can inspect the underlying Python scripts and modify them or write their own code.

Tethys is an integrated development environment, or IDE, that runs on a web browser (it works on regular laptops and also Chromebooks, popular in schools). It features a user-friendly visual programming interface, and in that sense it is similar to other visual programming languages like Blockly and Scratch.

But students aren’t limited to dragging blocks and wires on the screen; they can inspect the underlying Python scripts and modify them, or write their own code.
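
For readers curious what the hand-written route looks like, here’s a minimal sketch of a Pepper behavior using the standard NAOqi Python SDK that ships with SoftBank’s robots. This is our illustration, not necessarily the code Tethys generates, and the robot’s IP address below is a made-up placeholder.

# A minimal Pepper behavior against the NAOqi Python SDK (illustrative).
# Assumes the robot is reachable at PEPPER_IP (placeholder) and that the
# standard ALTextToSpeech and ALMotion services are running.
from naoqi import ALProxy

PEPPER_IP = "192.168.1.10"   # hypothetical robot address
PORT = 9559                  # default NAOqi port

tts = ALProxy("ALTextToSpeech", PEPPER_IP, PORT)
motion = ALProxy("ALMotion", PEPPER_IP, PORT)

motion.wakeUp()                          # stiffen joints and stand up
tts.say("Hello! Let's write some code together.")
motion.moveTo(0.5, 0.0, 0.0)             # walk half a meter forward
tts.say("That's one small step for a robot.")
motion.rest()                            # relax back to a safe posture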

SoftBank says the new initiative is focused on “STREAM” education, or Science, Technology, Robotics, Engineering, Art, and Mathematics. Accordingly, Tethys is named after the Greek Titan goddess of streams, says SoftBank’s Dawson, who heads its STREAM Education program.

“It’s really important to make sure that more people are getting involved in robotics,” he says, “and that means not just the existing engineers who are out there, but trying to encourage the engineers of the future.”

Robots are promising tools for promoting engagement of autistic children in interventions and thereby increasing the amount of learning opportunities. However, designing deliberate robot behavior aimed at engaging autistic children remains challenging. Our current understanding of what interactions with a robot, or facilitated by a robot, are particularly motivating to autistic children is limited to qualitative reports with small sample sizes. Translating insights from these reports to design is difficult due to the large individual differences among autistic children in their needs, interests, and abilities. To address these issues, we conducted a descriptive study and report on an analysis of how 31 autistic children spontaneously interacted with a humanoid robot and an adult within the context of a robot-assisted intervention, as well as which individual characteristics were associated with the observed interactions. For this analysis, we used video recordings of autistic children engaged in a robot-assisted intervention that were recorded as part of the DE-ENIGMA database. The results showed that the autistic children frequently engaged in exploratory and functional interactions with the robot spontaneously, as well as in interactions with the adult that were elicited by the robot. In particular, we observed autistic children frequently initiating interactions aimed at making the robot do a certain action. Autistic children with stronger language ability, social functioning, and fewer autism spectrum-related symptoms, initiated more functional interactions with the robot and more robot-elicited interactions with the adult. We conclude that the children's individual characteristics, in particular the child's language ability, can be indicative of which types of interaction they are more likely to find interesting. Taking these into account for the design of deliberate robot behavior, coupled with providing more autonomy over the robot's behavior to the autistic children, appears promising for promoting engagement and facilitating more learning opportunities.

Many insect species, and even some vertebrates, assemble their bodies to form multi-functional materials that combine sensing, computation, and actuation. The tower-building behavior of red imported fire ants, Solenopsis invicta, presents a key example of this phenomenon of collective construction. While biological studies of collective construction focus on behavioral assays to measure the dynamics of formation and studies of swarm robotics focus on developing hardware that can assemble and interact, algorithms for designing such collective aggregations have been mostly overlooked. We address this gap by formulating an agent-based model for collective tower-building with a set of behavioral rules that incorporate local sensing of neighboring agents. We find that an attractive force makes tower building possible. Next, we explore the trade-offs between attraction and random motion to characterize the dynamics and phase transition of the tower building process. Lastly, we provide an optimization tool that may be used to design towers of specific shapes, mechanical loads, and dynamical properties, such as mechanical stability and mobility of the center of mass.
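
To make the flavor of such an agent-based model concrete, here is a toy 2D sketch in Python. It is our simplification, not the authors’ model (which builds 3D towers and accounts for mechanical loads): each agent senses neighbors within a fixed radius and combines an attractive pull toward them with random motion, the two ingredients the abstract identifies; all constants are placeholders.

import numpy as np

rng = np.random.default_rng(0)

N_AGENTS = 200       # number of "ants"
STEPS = 2000
ATTRACTION = 0.05    # weight of the pull toward sensed neighbors
NOISE = 0.02         # magnitude of random motion
RADIUS = 0.3         # local sensing radius

pos = rng.uniform(-1, 1, size=(N_AGENTS, 2))

for _ in range(STEPS):
    # pairwise displacement vectors and distances
    diff = pos[None, :, :] - pos[:, None, :]
    dist = np.linalg.norm(diff, axis=-1)
    neighbors = (dist > 0) & (dist < RADIUS)
    # attraction: each agent steps toward the centroid of its sensed neighbors
    weights = neighbors / np.maximum(neighbors.sum(axis=1, keepdims=True), 1)
    attract = (weights[:, :, None] * diff).sum(axis=1)
    # random motion competes with attraction; their ratio sets the aggregation behavior
    jitter = rng.normal(scale=NOISE, size=pos.shape)
    pos += ATTRACTION * attract + jitter

# crude aggregation metric: mean distance to the swarm centroid
print("spread:", np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())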

Today, Boston Dynamics and OTTO Motors (a division of Clearpath Robotics) are announcing a partnership to “coordinate mobile robots in the warehouse” as part of “the future of warehouse automation.” It’s a collaboration between OTTO’s autonomous mobile robots and Boston Dynamics’s Handle, showing how a heterogeneous robot team can be faster and more efficient in a realistic warehouse environment.

As much as we love Handle, it doesn’t really seem like the safest robot for humans to be working around. Its sheer size, dynamic motion, and heavy payloads mean that the kind of sense-and-avoid hardware and software you’d really want to have on it for humans to be able to move through its space without getting smushed would likely be impractical, so you need another way of moving stuff in and out of its work zone. The Handle logistics video Boston Dynamics released about a year ago showed the robot working mostly with conveyor belts, but that kind of fixed infrastructure may not be ideal for warehouses that want to remain flexible.

This is where OTTO Motors comes in—its mobile robots (essentially autonomous mobile cargo pallets) can safely interact with Handles carrying boxes, moving stuff from where the Handles are working to where it needs to go without requiring intervention from a fragile and unpredictable human who would likely only get in the way of the whole process. 

From the press release:

“We’ve built a proof of concept demonstration of a heterogeneous fleet of robots building distribution center orders to provide a more flexible warehouse automation solution,” said Boston Dynamics VP of Product Engineering Kevin Blankespoor. “To meet the rates that our customers expect, we’re continuing to expand Handle’s capabilities and optimizing its interactions with other robots like the OTTO 1500 for warehouse applications.”

This sort of suggests that OTTO Motors might not be the only partner that Boston Dynamics is working with. There are certainly other companies who make autonomous mobile robots for warehouses like OTTO does, but it’s more fun to think about fleets of warehouse robots that are as heterogeneous as possible: drones, blimps, snake robots, hexapods—I wouldn’t put anything past them.

[ OTTO Motors ]

In this paper we describe the control approaches tested in the improved version of an existing soft robotic neck with two Degrees Of Freedom (DOF), able to achieve flexion, extension, and lateral bending movements similar to those of a human neck. The design is based on a cable-driven mechanism consisting of a spring acting as a cervical spine and three servomotor-actuated tendons that let the neck reach all desired postures. The prototype was manufactured using a 3D printer. Two control approaches are proposed and tested experimentally: a motor position approach using encoder feedback and a tip position approach using Inertial Measurement Unit (IMU) feedback, both applying fractional-order controllers. The platform operation is tested for different load configurations so that the robustness of the system can be checked.
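
The paper doesn’t spell out its controller beyond “fractional-order,” but a common discrete realization of a PI^λD^μ controller uses a truncated Grünwald-Letnikov memory of past errors. The sketch below illustrates that general idea only; the structure, gains, and memory length are our placeholders, not the authors’ implementation.

import numpy as np

def gl_coeffs(alpha, n):
    # Grunwald-Letnikov weights (-1)^k * C(alpha, k), via the standard recurrence
    c = np.empty(n)
    c[0] = 1.0
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

class FractionalPID:
    """Discrete PI^lambda D^mu controller with a truncated GL memory (illustrative)."""
    def __init__(self, kp, ki, kd, lam, mu, dt, memory=500):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.dt = lam, mu, dt
        self.ci = gl_coeffs(-lam, memory)  # weights for the fractional integral
        self.cd = gl_coeffs(mu, memory)    # weights for the fractional derivative
        self.hist = np.zeros(memory)       # error history, most recent sample first

    def update(self, error):
        self.hist = np.roll(self.hist, 1)
        self.hist[0] = error
        frac_int = (self.dt ** self.lam) * (self.ci @ self.hist)
        frac_der = (self.dt ** -self.mu) * (self.cd @ self.hist)
        return self.kp * error + self.ki * frac_int + self.kd * frac_der

# Hypothetical use on the neck's tip-orientation loop: error = desired minus IMU-measured angle
ctrl = FractionalPID(kp=2.0, ki=1.0, kd=0.3, lam=0.8, mu=0.5, dt=0.01)
command = ctrl.update(0.1)  # one 10 ms control step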

For the past two weeks, teams of robots (and their humans) have been exploring an unfinished nuclear power plant in Washington State as part of DARPA’s Subterranean Challenge. The SubT Challenge consists of three separate circuits, each representing a distinct underground environment: tunnel systems, urban underground, and cave networks.

The Urban Circuit portion of the challenge ended last Thursday, and DARPA live streamed all of the course runs and put together some great video recaps of the competition itself. But that footage represents just a small portion of what actually went on at the challenge, as teams raced to implement fixes and improvements in hardware and software in between runs, often staying up all night in weird places trying to get their robots to work better (or work at all).

We visited the SubT Urban Challenge during the official media day last week, and also spent some time off-site with the teams themselves, as they solved problems and tested their robots wherever they could, from nearby high schools to empty malls to hotel stairwells at 5 a.m. 

And the winner of the SubT Urban Circuit is...

The winner of the SubT Urban Circuit was Team CoSTAR, a collaboration between NASA JPL, MIT, Caltech, KAIST, LTU, and industry partners, including Clearpath Robotics and Boston Dynamics. Second place went to Carnegie Mellon’s Team Explorer, which took first at the previous SubT Tunnel Circuit six months ago, setting up a highly competitive Cave Circuit event which will take place six months from now.

We’ll have some more details on the teams’ final scores, but first here’s a brief post-challenge overview video from DARPA to get you caught up:

The Urban Circuit location: an unfinished nuclear power plant

The Urban Circuit of the DARPA Subterranean Challenge was held at the Satsop Nuclear Power Plant, about an hour and a half south of Seattle. 

Photo: DARPA Aerial photo of the unfinished Satsop nuclear power plant.

Started in 1977, the plant was about 80 percent complete when state funding fell through, and after nothing happened for a couple of decades, ownership was transferred to the Satsop Redevelopment Project to try and figure out how to turn the aging dystopian infrastructure into something useful. Something useful includes renting the space for people to film action movies, and for DARPA to host challenges.

The biggest difference between Tunnel and Urban is that while Tunnel was mostly, you know, tunnels (mostly long straight-ish passages connected with each other), Urban included a variety of large spaces and interconnected small rooms spread out across multiple levels. This is a 5-minute long walkthrough from DARPA that shows one of the course configurations; you don’t need to watch the whole thing, but it should give you a pretty good idea of the sort of environment that these robots had to deal with:

The biggest challenge: Communications, or stairs?

While communications were an enormous challenge at the Tunnel Circuit, from talking with the teams it sounded like comms was not nearly as much of an issue at Urban, because of a combination of a slightly friendlier environment (concrete walls instead of meters of solid rock) and teams taking comms very, very seriously as they prepared their systems for this event. More teams used deployable networking nodes to build up a mesh network as their robots progressed farther into the course (more on this later), and there was also more of an emphasis on fully autonomous exploration, with robots operating comfortably for extended periods completely outside of communication range.

Photo: Evan Ackerman/IEEE Spectrum Team garages at the event. You can’t see how cold it is, but if you could, you’d understand why they’re mostly empty.

When we talked to DARPA SubT Program Manager Tim Chung a few weeks ago, he was looking forward to creating an atmosphere of warm camaraderie between teams:

I’m super excited about how we set up the team garages at the Urban Circuit. It’ll be like pit row, in a way that really highlights how much I value the interactions between teams, it’ll be an opportunity to truly capitalize on having a high concentration of enthusiastic and ambitious roboticists in one area. 

Another challenge: Finding a warm place to test the robots

Having all the teams gathered at their garages would have been pretty awesome, except that the building somehow functioned as a giant heat sink, and while it was in the mid-30s Fahrenheit outside, it felt like the mid-20s inside! Neither humans nor robots had any particular desire to spend more time in the garages than was strictly necessary—most teams would arrive immediately before the start of their run staging time, and then escape to somewhere warmer immediately after their run ended. 

It wasn’t just a temperature thing that kept teams out of the garages—to test effectively, most teams needed a lot more dedicated space than was available on-site. Teams understood how important test environments were after the Tunnel Circuit, and most of them scrounged up spaces well in advance. Team CSIRO DATA61 found an indoor horse paddock at the local fairgrounds. Team CERBERUS set up in an empty storefront in a half dead mall about 20 miles away. And Team CoSTAR took over the conference center at a local hotel, which turned out to be my hotel, as I discovered when I met Spot undergoing testing in the hallway outside of my room right after I checked in:

Photo: Evan Ackerman/IEEE Spectrum Team CoSTAR’s Spot robot (on loan from Boston Dynamics) undergoing testing in a hotel hallway.

Spot is not exactly the stealthiest of robots, and the hotel testing was not what you’d call low-key. I can tell you that CoSTAR finished their testing at around 5:15 a.m., when Spot’s THUMP THUMP THUMP THUMP THUMPing gait woke up pretty much the entire hotel as the robot made its way back to its hotel room. Spot did do a very good job on the stairs, though:

Photo: Evan Ackerman/IEEE Spectrum Even with its top-heavy JPL autonomy and mapping payload, Spot was able to climb stairs without too much trouble.

After the early morning quadrupedal wake-up call, I put on every single layer of clothing I’d brought and drove up to the competition site for the DARPA media day. We were invited to watch the beginning of a few competition runs, take a brief course tour (after being sworn to secrecy), and speak with teams at the garages before and after their runs. During the Tunnel circuit, I’d focused on the creative communications strategies that each team was using, but for Urban, I asked teams to tell me about some of the clever hacks they’d come up with to solve challenges specific to the Urban circuit.

Here’s some of what teams came up with:

Team NCTU

Team NCTU from Taiwan has some of the most consistently creative approaches to the DARPA SubT courses we’ve seen. They’re probably best known for their “Duckiefloat” blimps, which had some trouble fitting through narrow tunnels during the Tunnel circuit six months ago. Knowing that passages would be even slimmer for the Urban Circuit, NCTU built a carbon fiber frame around the Duckiefloats to squish their sides in a bit.

Photo: Evan Ackerman/IEEE Spectrum Duckiefloat is much slimmer (if a bit less pleasingly spherical) thanks to a carbon fiber framework that squeezes it into a more streamlined shape to better fit through narrow corridors.

NCTU also added millimeter wave radar to one of the Duckiefloats as a lighter substitute for on-board lidar or RGBD cameras, and had good results navigating with the radar alone, which (as far as I know) is a totally unique approach. We will definitely be seeing more of Duckiefloat for the cave circuit.

Photo: Evan Ackerman/IEEE Spectrum NCTU’s Anchorball droppable WiFi nodes now include a speaker, which the Husky UGV can localize with microphone arrays (the black circle with the white border).

At Tunnel, NCTU dropped mesh WiFi nodes that doubled as beacons, called Anchorballs. For Urban, the Anchorballs are 100 percent less ball-like, and incorporate a speaker, which plays chirping noises once deployed. Microphone arrays on the Husky UGVs can localize this chirping, allowing multiple robots to use the nodes as tie points to coordinate their maps.
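
NCTU hasn’t published the details of its chirp localization, but the underlying principle is standard: estimate the chirp’s time difference of arrival between microphone pairs (GCC-PHAT is the usual tool) and convert that delay into a bearing toward the Anchorball. A minimal sketch of that idea, ours rather than NCTU’s code, with the microphone spacing as a placeholder:

import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    # Estimate the delay of `sig` relative to `ref` (in seconds) with GCC-PHAT.
    n = sig.shape[0] + ref.shape[0]
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                     # PHAT weighting
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / float(fs)

def bearing_from_delay(tau, mic_spacing, c=343.0):
    # Angle between the mic-pair axis and the chirp source, from one delay.
    return np.degrees(np.arccos(np.clip(c * tau / mic_spacing, -1.0, 1.0)))

# e.g. with two microphones 0.2 m apart sampled at 16 kHz:
# tau = gcc_phat(mic_left, mic_right, fs=16000, max_tau=0.2 / 343.0)
# print(bearing_from_delay(tau, mic_spacing=0.2))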

Photo: Evan Ackerman/IEEE Spectrum NCTU is developing mobile mesh network nodes in the form of autonomous robot balls.

Also under development at NCTU is this mobile Anchorball, which is basically a big Sphero with a bunch of networking gear packed into it that can move itself around to optimize signal strength.

Team NUS SEDS

Team NUS SEDS accidentally burned out a couple of the onboard computers driving their robots. The solution was to run out and buy a laptop, and then 3D print some mounts to attach the laptop to the top of the robot and run things from there.

Photo: Evan Ackerman/IEEE Spectrum When an onboard computer burned out, NUS SEDS bought a new laptop to power their mobile robot, because what else are you going to do?

They also had a larger tracked vehicle that was able to go up and down stairs, but it got stuck in customs and didn’t make it to the competition at all.

Team Explorer

Team Explorer did extensive testing in an abandoned hospital in Pittsburgh, which I’m sure wasn’t creepy at all. While they brought along some drones that were used very successfully, getting their beefy wheeled robots up and down stairs wasn’t easy. To add some traction, Explorer cut chunks out of the wheels on one of their robots to help it grip the edges of stairs. 

Photo: Evan Ackerman/IEEE Spectrum Team Explorer’s robot has wedges cut out of its wheels to help it get a grip on stairways.

It doesn’t look especially sophisticated, but the team lead Sebastian Scherer told me that this was the result of 14 (!) iterations of wheel and track modifications. 

Team MARBLE

Six months ago, we checked out a bunch of different creative communications strategies that teams used at SubT Tunnel. MARBLE improved on their droppable wireless repeater nodes with a powered, extending antenna (harvested from a Miata, apparently).

Photo: Evan Ackerman/IEEE Spectrum After being dropped from its carrier robot, this mesh networking node extends its antennas half a meter into the air to maximize signal strength.

This is more than just a neat trick: We were told that the extra height that the antennas have once fully deployed does significantly improve their performance.

Team Robotika

Based on their experience during the Tunnel Circuit, Team Robotika decided that there was no such thing as having too much light in the tunnels, so they brought along a robot with the most enormous light-to-robot ratio that we saw at SubT.

Photo: Evan Ackerman/IEEE Spectrum No such thing as too much light during DARPA SubT.

Like many other teams, Robotika was continually making minor hardware adjustments to refine the performance of their robots and make them more resilient to the environment. These last-minute plastic bumpers would keep the robot from driving up walls and potentially flipping itself over.

Photo: Evan Ackerman/IEEE Spectrum A bumper hacked together from plastic and duct tape keeps this robot from flipping itself over against walls.

Team CSIRO Data61

I met CSIRO Data61 (based in Australia) at the testing location they’d found in a building at the Grays Harbor County Fairgrounds, right next to an indoor horse arena that provided an interesting environment, especially for their drones. During their first run, one of their large tracked robots (an ex-police robot called Titan) had the misfortune to get its track caught on an obstacle that was exactly the wrong size, and it burned out a couple motors trying to get free.

Photo: Evan Ackerman/IEEE Spectrum A burned motor, crispy on the inside.

You can practically smell that through the screen, right? And these are fancy Maxon motors, which you can’t just pick up at your local hardware store. CSIRO didn’t have spares with them, so the most expedient way to get new motors that were sure to work turned out to be flying another team member over from Australia (!) with extra motors in their carry-on luggage. And by Tuesday morning, the Titan was up and running again.

Photo: Evan Ackerman/IEEE Spectrum A fully operational Titan beside a pair of commercial SuperDroid robots at CSIRO’s off-site testing area.

Team CERBERUS

Team CERBERUS didn’t have a run scheduled during the SubT media day, but they invited me to visit their testing area in an empty store next to an Extreme Fun Center in a slightly depressing mall in Aberdeen (Kurt Cobain’s hometown), about 20 miles down the road from Satsop. CERBERUS was using a mix of wheeled vehicles, collision-tolerant drones, and ANYmal legged robots.

Photo: Evan Ackerman/IEEE Spectrum Team CERBERUS doing some off-site testing of their robots with the lights off.

CERBERUS had noticed during a DARPA course pre-briefing that the Alpha course had an almost immediate 90-degree turn before a long passage, which would block any directional antennas placed in the staging area. To try to maximize communication range, they developed this dumb antenna robot: Dumb in the sense that it has no sensing or autonomy, but instead is designed to carry a giant tethered antenna just around that first corner.

Photo: Evan Ackerman/IEEE Spectrum Basically just a remote-controlled directional antenna, CERBERUS developed this robot to extend communications from their base station around the first corner of Alpha Course.

Another communications challenge was how to talk to robots after they traversed down a flight of stairs. Alpha Course featured a flight of stairs going downwards just past the starting gate, and CERBERUS wanted a way of getting a mesh networking node down those stairs to be able to reliably talk to robots exploring the lower level. Here’s what they came up with:

Photo: Evan Ackerman/IEEE Spectrum A mesh network node inside of a foam ball covered in duct tape can be thrown by a human into hard-to-reach spots near the starting area.

The initial idea was to put a node into a soccer ball which would then be kicked from the staging area, off the far wall, and down the stairs, but they ended up finding some hemispheres of green foam used for flower arrangements at Walmart, hollowed them out, put in a node, and then wrapped the whole thing in duct tape. With the addition of a tether, the node in a ball could be thrown from the staging area into the stairwell, and brought back up with the tether if it didn’t land in the right spot.

Plan B for stairwell communications was a bit more of a brute force approach, using a directional antenna on a stick that could be poked out of the starting area and angled over the stairwell.

Photo: Evan Ackerman/IEEE Spectrum If your antenna balls don’t work? Just add a directional antenna to a stick.

Since DARPA did allow tethers, CERBERUS figured that this was basically just a sort of rigid tether. Sounds good to me!

Team CoSTAR

Team CoSTAR surprised everyone by showing up to the SubT Urban Circuit with a pair of Spot quadrupeds from Boston Dynamics. The Spots were very much a last-minute addition to the team, and CoSTAR only had about six weeks to get them up and (metaphorically) running. Consequently, the Spots were a little bit overburdened with a payload that CoSTAR hadn’t had much of a chance to optimize. The payload takes care of all of the higher-level autonomy and map making and stuff, while Spot’s own sensors handle the low-level motion planning. 

Photo: Evan Ackerman/IEEE Spectrum Team CoSTAR’s Spot robots carried a payload that was almost too heavy for the robot to manage, and included sensors, lights, computers, batteries, and even two mesh network node droppers.

In what would be a spectacular coincidence were both of these teams not packed full of brilliant roboticists, Team CoSTAR independently came up with something very similar to the throwable network node that Team CERBERUS was messing around with.

Photo: Evan Ackerman/IEEE Spectrum A throwable mesh network node embedded in a foam ball that could be bounced into a stairwell to extend communications.

One of the early prototypes of this thing was a Mars lander-style “airbag” system, consisting of a pyramid of foam balls with a network node embedded in the very center of the pile. They showed me a video of this thing, and it was ridiculously cool, but they found that carving out the inside of a foam ball worked just as well and was far easier to manage.

There was only so much testing that CoSTAR was able to do in the hotel and conference center, since a better match for the Urban Circuit would be a much larger area with long hallways, small rooms, and multiple levels that could be reached by ramps and stairs. So every evening, the team and their robots drove 10 minutes down the road to Elma High School, which seemed to be just about the perfect place for testing SubT robots. CoSTAR very kindly let me tag along one night to watch their Huskies and Spots explore the school looking for artifacts, and here are some pictures that I took.

Photo: Evan Ackerman/IEEE Spectrum The Elma High School cafeteria became the staging area for Team CoSTAR’s SubT test course. Two Boston Dynamics Spot robots and two Clearpath Robotics Huskies made up CoSTAR’s team of robots. The yellow total station behind the robots is used for initial location calibration, and many other teams relied on them as well.

Photo: Evan Ackerman/IEEE Spectrum Team CoSTAR hid artifacts all over the school to test the robots’ ability to autonomously recognize and locate them. That’s a survivor dummy down the hall.

JPL put together this video of one of the test runs, which cuts out the three hours of setup and calibration and condenses all the good stuff into a minute and a half:

DARPA SubT Urban Circuit: Final scores

In their final SubT Urban run, CoSTAR scored a staggering 9 points, giving them a total of 16 for the Urban Circuit, 5 more than Team Explorer, which came in second. Third place went to Team CTU-CRAS-NORLAB, and as a self-funded (as opposed to DARPA-funded) team, they walked away with a $500,000 prize.

Image: DARPA DARPA SubT Urban Circuit final scores.

Six months from now, all of these teams will meet again to compete at the SubT Cave Circuit, the last (and perhaps most challenging) domain that DARPA has in store. We don’t yet know exactly when or where Cave will take place, but we do know that we'll be there to see what six more months of hard work and creativity can do for these teams and their robots.

[ DARPA SubT Urban Results ]

Special thanks to DARPA for putting on this incredible event, and thanks also to the teams that let me follow them around and get (ever so slightly) in the way for a day or two.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

We’ll have more on the DARPA Subterranean Challenge Urban Circuit next week, but here’s a quick compilation from DARPA of some of the competition footage.

[ SubT ]

ABB set up a global competition in 2019 to assess 20 leading AI technology start-ups on how they could approach solutions for 26 real-world picking, packing and sorting challenges. The aim was to understand if AI is mature enough to fully unlock the potential for robotics and automation. ABB was also searching for a technology partner to co-develop robust AI solutions with. Covariant won the challenge by successfully completing each of the 26 challenges; on February 25, ABB and Covariant announced a partnership to bring AI-enabled robotic solutions to market.

We wrote about Covariant and its AI-based robot picking system last month. The most interesting part of the video above is probably the apple picking, where the system has to deal with irregular, shiny, rolling objects. The robot has a hard time picking upside-down apples, and after several failures in a row, it nudges the last one to make it easier to pick up. Impressive! And here’s one more video of real-time picking of mostly transparent water bottles:

[ Covariant ]

Osaka University’s Affetto robot, which we’ve written about before, is looking somewhat more realistic than when we first wrote about it.

Those are some weird noises that it’s making though, right? Affetto, as it turns out, also doesn’t like getting poked in its (disembodied) tactile sensor:

They’re working on a body for it, too:

[ Osaka University ]

University of Washington students reimagine today’s libraries.

[ UW ]

Thanks Elcee!

Astrobee will be getting a hand up on the ISS, from Columbia’s ROAM Lab.

I think this will be Astrobee’s second hand, in addition to its perching arm. Maybe not designed for bimanual tasks, but still, pretty cool!

[ ROAM Lab ]

In this paper, we tackle the problem of pushing piles of small objects into a desired target set using visual feedback. Unlike conventional single-object manipulation pipelines, which estimate the state of the system parametrized by pose, the underlying physical state of this system is difficult to observe from images. Thus, we take the approach of reasoning directly in the space of images, and acquire the dynamics of visual measurements in order to synthesize a visual-feedback policy.
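To make “reasoning directly in the space of images” concrete, here is a minimal, hypothetical sketch (not the authors’ code) of a one-step visual-MPC loop: given a learned image-space dynamics model, it scores candidate pushes by how close the predicted outcome lands to a target image of the pile. The function names (predict_next_image, observe), the action parameterization, and the pixelwise cost are all assumptions made for illustration.

import numpy as np

def sample_push_actions(n, rng):
    # Candidate planar pushes: (start_x, start_y, direction, length),
    # expressed in normalized image coordinates. Hypothetical parameterization.
    return np.column_stack([
        rng.uniform(0.0, 1.0, n),       # push start, x
        rng.uniform(0.0, 1.0, n),       # push start, y
        rng.uniform(-np.pi, np.pi, n),  # push direction
        rng.uniform(0.05, 0.2, n),      # push length
    ])

def choose_push(observe, predict_next_image, target_image, n_candidates=256, rng=None):
    # Greedy one-step MPC in image space: predict the outcome of each candidate
    # push with the learned visual dynamics model, and pick the push whose
    # predicted image is closest (pixelwise) to the target image of the pile.
    if rng is None:
        rng = np.random.default_rng(0)
    image = observe()                      # current camera observation
    actions = sample_push_actions(n_candidates, rng)
    costs = [np.linalg.norm(predict_next_image(image, a) - target_image)
             for a in actions]
    return actions[int(np.argmin(costs))]

Repeating this choose-and-push loop until the image-space error stops improving gives a crude closed-loop controller; the paper’s contribution is in how the visual dynamics model itself is acquired, which this sketch simply assumes.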

[ MIT ]

In this project we are exploring ways of interacting with terrain using hardware already present on exploration rovers - wheels! By using wheels for manipulation, we can expand the capabilities of space robots without the need for adding hardware. Nonprehensile terrain manipulation can be used in many applications, such as removing soil to sample below the surface or making terrain easier to cross for another robot. Watch until the end to see MiniRHex and the rover working together!

[ Robomechanics Lab ]

Dundee Precious Metals reveals how Exyn’s fully autonomous aerial drones are transforming their cavity monitoring systems with increased safety and maximum efficiency.

[ Exyn ]

Thanks Rachel!

Dragonfly is a NASA mission to explore the chemistry and habitability of Saturn’s largest moon, Titan. The fourth mission in the New Frontiers line, Dragonfly will send an autonomously-operated rotorcraft to visit dozens of sites on Titan, investigating the moon’s surface and shallow subsurface for organic molecules and possible biosignatures.

Dragonfly is scheduled to launch in 2026 and arrive at Titan in 2034.

[ NASA ]

Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart, in cooperation with Tampere University in Finland, have developed a gel-like robot, inspired by sea slugs and snails, that they are able to steer with light. Much like the soft body of these aquatic invertebrates, the bioinspired robot is able to deform easily inside water when exposed to this energy source.

Because of the specifically aligned liquid crystal gel molecules in its building material, the robot can crawl, walk, jump, and swim inside water when specific parts of its body are illuminated. The scientists see their research project as an inspiration for other roboticists who struggle to design untethered soft robots that are able to move freely in a fluidic environment.

[ Max Planck Institute ]

Forests are a very challenging environment for drones, especially if you want to both avoid and map trees at the same time.

[ Kumar Lab ]

Some highlights from the Mohamed Bin Zayed International Robotics Challenge (MBZIRC) that took place in Abu Dhabi, UAE last week.

[ MBZ IRC ]

I never get tired of hearing technical presentations from Skydio, and here’s Ryan Kennedy giving a talk at the GRASP Lab.

The technology for intelligent and trustworthy navigation of autonomous UAVs has reached an inflection point to provide transformative gains in capability, efficiency, and safety to major industries. Drones are starting to save lives of first responders, automate dangerous infrastructure inspection, digitize the physical world with millimeter precision, and capture Hollywood quality video - all on affordable consumer hardware.

At Skydio, we have invested five years of R&D in the ability to handle difficult unknown scenarios in real-time based on visual sensing, and shipped two generations of fully autonomous drone. In this talk, I will discuss the close collaboration of geometry, learning, and modeling within our system, our experience putting robots into production, and the challenges still ahead.

[ Skydio ]

This week’s CMU RI Seminar comes from Sarjoun Skaff at Bossa Nova Robotics: “Yes, That’s a Robot in Your Grocery Store. Now what?”

Retail stores are becoming ground zero for indoor robotics. Fleets of different robots have to coexist with each other and with humans every day, navigating safely, coordinating missions, and interacting appropriately with people, all at large scale. For us roboticists, stores are giant labs where we’re learning what doesn’t work and iterating. If we get it right, it will serve as an example for other industries, and robots will finally become ubiquitous in our lives.

[ CMU RI ]
