Feed aggregator

Recently, with the increased number of robots entering numerous manufacturing fields, a considerable wealth of literature has appeared on the theme of physical human-robot interaction using data from proprioceptive sensors (motor and/or load-side encoders). Most of these studies take an accurate dynamic model of the robot for granted. In practice, however, model identification and observer design precede collision detection. To the best of our knowledge, no previous study has systematically investigated each aspect underlying physical human-robot interaction and the relationships between those aspects. In this paper, we bridge this gap by first reviewing the literature on model identification, disturbance estimation, and collision detection, and discussing the relationship between the three, and then by examining the practical side of model-based collision detection in a case study conducted on a UR10e. We show that the model identification step is critical for accurate collision detection, while the choice of observer should be based mostly on computation time and the simplicity and flexibility of tuning. It is hoped that this study can serve as a roadmap for equipping industrial robots with basic physical human-robot interaction capabilities.
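For context on what such a pipeline can look like in code, here is a minimal sketch of a generalized momentum disturbance observer, a common choice for model-based collision detection from proprioceptive data. The gain, threshold, and dynamic-model callables are placeholders, and this is an illustrative outline rather than the paper's implementation.

```python
import numpy as np

def momentum_observer_step(r_prev, integral_prev, q, qd, tau, M, C, g, K_o, dt):
    """One discrete step of a generalized momentum disturbance observer (sketch).

    r_prev        : previous residual estimate, shape (n,)
    integral_prev : running integral term, initialized to the initial momentum M(q0) @ qd0
    q, qd         : measured joint positions and velocities, shape (n,)
    tau           : applied joint torques, shape (n,)
    M, C, g       : callables returning the inertia matrix, Coriolis matrix, and gravity vector
    K_o           : diagonal observer gain matrix, shape (n, n)
    dt            : control period in seconds
    """
    p = M(q) @ qd                                         # generalized momentum
    integral = integral_prev + (tau + C(q, qd).T @ qd - g(q) + r_prev) * dt
    r = K_o @ (p - integral)                              # residual tracks the external joint torque
    return r, integral

def collision_detected(r, threshold):
    # Per-joint threshold on the residual; how tightly the threshold can be set
    # depends directly on the quality of the identified dynamic model.
    return bool(np.any(np.abs(r) > threshold))
```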

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

Sixteen teams chose their roster of virtual robots and sensor payloads, some based on real-life subterranean robots, and submitted autonomy and mapping algorithms that SubT Challenge officials then tested across eight cave courses in the cloud-based SubT Simulator. Their robots traversed the cave environments autonomously, without any input or adjustments from human operators. The Cave Circuit Virtual Competition teams earned points by correctly finding, identifying, and localizing up to 20 artifacts hidden in the cave courses within five-meter accuracy.

[ SubT ]

This year, the KUKA Innovation Award’s international jury of experts received a total of more than 40 ideas. The five finalist teams had until November to implement their ideas. A KUKA LBR Med lightweight robot – the first robotic component to be certified for integration into a medical device – was made available to them for this purpose. Beyond this, the teams received training on the hardware and coaching from KUKA experts throughout the competition. At virtual.MEDICA, held November 16-19, 2020, the finalists presented their concepts to an international audience of experts and to the Innovation Award jury.

The winner of the KUKA Innovation Award 2020, worth 20,000 euros, is Team HIFUSK from the Scuola Superiore Sant'Anna in Italy.

[ KUKA Innovation Award ]

Like everything else, the in-person Cybathlon event was cancelled, but the competition itself took place, just a little more distributed than it would have been otherwise.

[ Cybathlon ]

Matternet, developer of the world's leading urban drone logistics platform, today announced the launch of operations at Labor Berlin Charité Vivantes in Germany. The program kicked-off November 17, 2020 with permanent operations expected to take flight next year, creating the first urban BVLOS [Beyond Visual Line of Sight] medical drone delivery network in the European Union. The drone network expects to significantly improve the timeliness and efficiency of Labor Berlin’s diagnostics services by providing an option to avoid roadway delays, which will improve patient experience with potentially life-saving benefits and lower costs.

Routine BVLOS over an urban area? Impressive.

[ Matternet ]

Robots playing diabolo!

Thanks Thilo!

[ OMRON Sinic X ]

Anki's tech has been repackaged into this robot that serves butter:

[ Butter Robot ]

Berkshire Grey just announced our Picking With Purpose Program in which we’ve partnered our robotic automation solutions with food rescue organizations City Harvest and The Greater Boston Food Bank to pick, pack, and distribute food to families in need in time for Thanksgiving. Berkshire Grey donated about 40,000 pounds of food, used one of our robotic automation systems to pick and pack that food into meal boxes for families in need, and our team members volunteered to run the system. City Harvest and The Greater Boston Food Bank are distributing the 4,000 meal boxes we produced. This is just the beginning. We are building a sponsorship program to make Picking With Purpose an ongoing initiative.

[ Berkshire Grey ]

Thanks Peter!

We posted a video previously of Cassie learning to skip, but here's a much more detailed look (accompanying an ICRA submission) that includes some very impressive stair descending.

[ DRL ]

From garage inventors to university students and entrepreneurs, NASA is looking for ideas on how to excavate the Moon’s icy regolith, or dirt, and deliver it to a hypothetical processing plant at the lunar South Pole. The NASA Break the Ice Lunar Challenge, a NASA Centennial Challenge, is now open for registration. The competition will take place over two phases and will reward new ideas and approaches for a system architecture capable of excavating and moving icy regolith and water on the lunar surface.

[ NASA ]

Adaptation to various scene configurations and object properties, as well as stability and dexterity in robotic grasping and manipulation, remains far from fully explored. This work presents an origami-based, shape-morphing fingertip design that actively tackles the grasping stability and dexterity problems. The proposed fingertip uses origami as its skeleton, providing degrees of freedom at desired positions, and motor-driven four-bar linkages as its transmission components to keep the fingertip compact.

[ Paper ]

"If Roboy crashes... you die."

[ Roboy ]

Traditionally lunar landers, as well as other large space exploration vehicles, are powered by solar arrays or small nuclear reactors. Rovers and small robots, however, are not big enough to carry their own dedicated power supplies and must be tethered to their larger counterparts via electrical cables. Tethering severely restricts mobility, and cables are prone to failure due to lunar dust (regolith) interfering with electrical contact points. Additionally, as robots become smaller and more complex, they are fitted with additional sensors that require more power, further exacerbating the problem. Lastly, solar arrays are not viable for charging during the lunar night. WiBotic is developing rapid charging systems and energy monitoring base stations for lunar robots, including the CubeRover – a shoebox-sized robot designed by Astrobotic – that will operate autonomously and charge wirelessly on the Moon.

[ WiBotic ]

Watching pick and place robots is my therapy.

[ Soft Robotics ]

It's really, really hard to beat liquid fuel for energy storage, as Quaternium demonstrates with their hybrid drone.

[ Quaternium ]

Thanks Gregorio!

State-of-the-art quadrotor simulators have a rigid and highly specialized structure: they are either really fast, physically accurate, or photo-realistic. In this work, we propose a novel quadrotor simulator: Flightmare.

[ Flightmare ]

Drones that chuck fire-fighting balls into burning buildings, sure!

[ LARICS ]

If you missed ROS World, that's okay, because all of the talks are now online. Here's the opening keynote from Vivian Chu of Diligent Robotics, along with a couple of fun lightning talks.

[ ROS World 2020 ]

This week's CMU RI Seminar is by Chelsea Finn from Stanford University, on Data Scalability for Robot Learning.

Recent progress in robot learning has demonstrated how robots can acquire complex manipulation skills from perceptual inputs through trial and error, particularly with the use of deep neural networks. Despite these successes, the generalization and versatility of robots across environment conditions, tasks, and objects remains a major challenge. And, unfortunately, our existing algorithms and training set-ups are not prepared to tackle such challenges, which demand large and diverse sets of tasks and experiences. In this talk, I will discuss two central challenges that pertain to data scalability: first, acquiring large datasets of diverse and useful interactions with the world, and second, developing algorithms that can learn from such datasets. Then, I will describe multiple approaches that we might take to rethink our algorithms and data pipelines to serve these goals. This will include algorithms that allow a real robot to explore its environment in a targeted manner with minimal supervision, approaches that can perform robot reinforcement learning with videos of human trial-and-error experience, and visual model-based RL approaches that are not bottlenecked by their capacity to model everything about the world.

[ CMU RI ]

I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.1 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
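To make the idea concrete, here is a minimal sketch of what one registry record might contain. The field names, example values, and lookup flow are illustrative assumptions, not an existing government schema or API.

```python
from dataclasses import dataclass

@dataclass
class PublicRobotRecord:
    """Hypothetical registry entry for a robot operating in public spaces."""
    registration_id: str        # printed on the robot's body, like a license plate
    make: str                   # e.g., "Boston Dynamics"
    model: str                  # e.g., "Spot"
    operator: str               # company or agency responsible for the robot
    purpose: str                # why the robot is deployed
    contact: str                # whom to call to report a malfunction
    collects_camera_data: bool  # whether onboard cameras record the public

# Example lookup flow: a passerby reads the marker on the robot's body
# and queries the registry by registration_id to learn who operates it and why.
registry = {
    "SG-PARK-0042": PublicRobotRecord(
        registration_id="SG-PARK-0042",
        make="Boston Dynamics",
        model="Spot",
        operator="Hypothetical Parks Agency",
        purpose="Social-distancing patrol",
        contact="robots@example.gov",
        collects_camera_data=True,
    )
}
```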

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after listening to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

Huge congratulations to Coordinated Robotics! It’s worth pointing out that the top three teams were separated by an incredibly small handful of points, and on a slightly different day, with slightly different artifact positions, any of them could have come out on top. This doesn’t diminish Coordinated Robotics’ victory in the least—it means that the competition was fierce, and that the problem of autonomous cave exploration with robots has been solved (virtually, at least) in several different but effective ways.

We know Coordinated Robotics pretty well at this point, but here’s an introduction video:

You heard that right—Coordinated Robotics is just Kevin Knoedler, all by himself. This would be astonishing, if we weren’t already familiar with Kevin’s abilities: He won NASA’s virtual Space Robotics Challenge by himself in 2017, and Coordinated Robotics placed first in the DARPA SubT Virtual Tunnel Circuit and second in the Virtual Urban Circuit. We asked Kevin how he managed to do so spectacularly well (again), and here’s what he told us:

IEEE Spectrum: Can you describe what it was like to watch your team of robots on the live stream, and to see them score the most points?

Kevin Knoedler: It was exciting and stressful watching the live stream. It was exciting as the top few scores were quite close for the cave circuit. It was stressful because I started out behind and worked my way up, but did not do well on the final world. Luckily, not doing well on the first and last worlds was offset by better scores on many of the runs in between. DARPA did a very nice job with their live stream of the cave circuit results.

How did you decide on the makeup of your team, and on what sensors to use?

To decide on the makeup of the team I experimented with quite a few different vehicles. I had a lot of trouble with the X2 and other small ground vehicles flipping over. Based on that I looked at the larger ground vehicles that also had a sensor capable of identifying drop-offs. The vehicles that met those criteria for me were the Marble HD2, Marble Husky, Ozbot ATR, and the Absolem. Of those ground vehicles I went with the Marble HD2. It had a downward looking depth camera that I could use to detect drop-offs and was much more stable on the varied terrain than the X2. I had used the X3 aerial vehicle before and so that was my first choice for an aerial platform. 

What were some things that you learned in Tunnel and Urban that you were able to incorporate into your strategy for Cave?

In the Tunnel circuit I had learned a strategy to use ground vehicles and in the Urban circuit I had learned a strategy to use aerial vehicles. At a high level that was the biggest thing I learned from the previous circuits that I was able to apply to the Cave circuit. At a lower level I was able to apply many of the development and testing strategies from the previous circuits to the Cave circuit.

What aspect of the cave environment was most challenging for your robots?

I would say it wasn't just one aspect of the cave environment that was challenging for the robots. There were quite a few challenging aspects of the cave environment. For the ground vehicles there were frequently paths that looked good as the robot started on the path, but turned into drop-offs or difficult boulder crawls. While it was fun to see the robot plan well enough to slowly execute paths over the boulders, I was wishing that the robot was smart enough to try a different path rather than wasting so much time crawling over the large boulders. For the aerial vehicles the combination of tight paths along with large vertical spaces was the biggest challenge in the environment. The large open vertical areas were particularly challenging for my aerial robots. They could easily lose track of their position without enough nearby features to track and it was challenging to find the correct path in and out of such large vertical areas.

How will you be preparing for the SubT Final?

To prepare for the SubT Final the vehicles will be getting a lot smarter. The ground vehicles will be better at navigation and communicating with one another. The aerial vehicles will be better able to handle large vertical areas both from a positioning and a planning point of view. Finally, all of the vehicles will do a better job coordinating what areas have been explored and what areas have good leads for further exploration.

Image: DARPA. The final score for the DARPA SubT Cave Circuit virtual competition.

We also had a chance to ask SubT program manager Tim Chung a few questions at yesterday’s post-event press conference, about the course itself and what he thinks teams should have learned from the competition:

IEEE Spectrum: Having looked through some real caves, can you give some examples of some of the most significant differences between this simulation and real caves? And with the enormous variety of caves out there, how generalizable are the solutions that teams came up with?

Tim Chung: Many of the caves that I’ve had to crawl through and gotten bumps and scrapes from had a couple of different features that I’ll highlight. The first is the variations in moisture— a lot of these caves were naturally formed with streams and such, so many of the caves we went to had significant mud, flowing water, and such. And so one of the things we're not capturing in the SubT simulator is explicitly anything that would submerge the robots, or otherwise short any of their systems. So from that perspective, that's one difference that's certainly notable. 

And then the other difference I think is the granularity of the terrain, whether it's rubble, sand, or just raw dirt, friction coefficients are all across the board,  and I think that's one of the things that any terrestrial simulator will both struggle with and potentially benefit from— that is, terramechanics simulation abilities. Given the emphasis on mobility in the SubT simulation, we’re capturing just a sliver of the complexity of terramechanics, but I think that's probably another take away that you'll certainly see—  where there’s that distinction between physical and virtual technologies. 

To answer your second question about generalizability— that’s the multi-million dollar question! It’s definitely at the crux of why we have eight diverse worlds, varying in size, verticality, dimensions, constrained passageways, etc. But this is eight out of countless variations, and the goal of course is to be able to investigate what those key dependencies are. What I'll say is that out of the seventy-three different virtual cave tiles, which are the building blocks that make up these virtual worlds, quite a number of them were not only inspired by real-world caves, but were specifically designed so that we can essentially use these tiles as unit tests going forward. So, if I want to simulate vertical inclines, here are the tiles that are the vertical unit tests for robots, and that’s how we’re trying to think through how to tease out that generalizability factor.

What are some observations from this event that you think systems track teams should pay attention to as they prepare for the final event?

One of the key things about the virtual competition is that you submit your software, and that's it. So you have to design everything from state management to failure mode triage, really thinking about what could go wrong and then building out your autonomous capabilities either to react to some of those conditions, or to anticipate them. And to be honest I think that the humans in the loop that we have in the systems competition really are key enablers of their capability, but also could someday (if not already) be a crutch that we might not be able to develop. 

Thinking through some of the failure modes in a fully autonomous software deployment setting is going to be incredibly valuable for the systems competitors, so that, for example, the human supervisor doesn't have to worry about those failure modes as much, or can respond in a more supervisory way rather than trying to joystick the robot around. I think that's going to be one of the greatest impacts: thinking through what it means to send these robots off to autonomously get you the information you need and complete the mission.

This isn’t to say that the humans aren't going to be useful and continue to play a role of course, but I think this shifting of the role of the human supervisor from being a state manager to being more of a tactical commander will dramatically highlight the impact of the virtual side on the systems side. 

What, if anything, should we take away from one person teams being able to do so consistently well in the virtual circuit? 

It’s a really interesting question. I think part of it has to do with systems integration versus software integration. There's something to be said for the richness of the technologies that can be developed, and how many people it requires to be able to develop some of those technologies. With the systems competitors, having one person try to build, manage, deploy, service, and operate all of those robots is still functionally quite challenging, whereas in the virtual competition, it really is a software deployment more than anything else. And so I think the commonality of single person teams may just be a virtue of the virtual competition not having some of those person-intensive requirements.

In terms of their strong performance, I give credit to all of these really talented folks who are taking it upon themselves to jump into the competitor pool and see how well they do, and I think that just goes to show you that whether you're one person or ten people or a hundred people on a team, a good idea translated and executed well really goes a long way.

Looking ahead, teams have a year to prepare for the final event, which is still scheduled to be held sometime in fall 2021. And even though there was no cave event for systems track teams, the fact that the final event will be a combination of tunnel, urban, and cave circuits means that systems track teams have been figuring out how to get their robots to work in caves anyway, and we’ll be bringing you some of their stories over the next few weeks.

[ DARPA SubT ]

Tactile sensing is an essential capability for a robot to perform manipulation tasks in cluttered environments. While larger areas can be assessed instantly with cameras, lidars, and other remote sensors, tactile sensors can reduce their measurement uncertainties and gain information about the physical interactions between the objects and the robot end-effector that is not accessible via remote sensors. In this paper, we introduce the novel tactile sensor GelTip, which has the shape of a finger and can sense contacts at any location on its surface. This contrasts with other camera-based tactile sensors that either have only a flat sensing surface or a compliant tip with a limited sensing area; our proposed GelTip sensor is able to detect contacts from all directions, like a human finger. The sensor uses a camera located at its base to track the deformations of the opaque elastomer that covers its hollow, rigid, and transparent body. Because of this design, a gripper equipped with GelTip sensors is capable of simultaneously monitoring contacts happening inside and outside its grasp closure. Our extensive experiments show that the GelTip sensor can effectively localize these contacts at different locations of the finger body, with a small localization error of approximately 5 mm on average, and under 1 mm in the best cases. Furthermore, our experiments in a Blocks World environment demonstrate the advantages, and possibly the necessity, of leveraging all-around touch sensing in manipulation tasks. In particular, the experiments show that the contacts at different moments of the reach-to-grasp movements can be sensed using our novel GelTip sensor.
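As a rough illustration of camera-based contact localization of the kind the abstract describes, here is a minimal sketch assuming a cylindrical finger geometry, simple image differencing, and an idealized pixel-to-surface mapping. The actual GelTip pipeline, calibration, and projection model are not specified here; every threshold and mapping below is an assumption.

```python
import numpy as np

def detect_contact_pixel(frame, reference, diff_threshold=25):
    """Find the strongest deformation region by differencing against a
    no-contact reference image (both grayscale arrays of equal shape)."""
    diff = np.abs(frame.astype(int) - reference.astype(int))
    if diff.max() < diff_threshold:
        return None                      # no contact detected
    v, u = np.unravel_index(np.argmax(diff), diff.shape)
    return u, v                          # column, row of the peak difference

def pixel_to_finger_surface(u, v, img_w, img_h, finger_radius, finger_length):
    """Map an image pixel to a point on an assumed cylindrical finger surface.

    Assumes the base camera sees the full elastomer, with the image row
    encoding distance along the finger axis and the column encoding the angle
    around it; a simplification of any real projection model.
    """
    theta = 2 * np.pi * u / img_w        # angle around the finger
    z = finger_length * v / img_h        # height along the finger axis
    x = finger_radius * np.cos(theta)
    y = finger_radius * np.sin(theta)
    return np.array([x, y, z])           # contact point in the finger frame
```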

In the context of legged robotics, many criteria based on the control of the Center of Mass (CoM) have been developed to ensure stable and safe robot locomotion. Defining a whole-body framework with control of the CoM requires a planning strategy, often based on a specific type of gait and a reliable state estimate. In a whole-body control approach, if the CoM task is not specified, the consequent redundancy can still be resolved by specifying a postural task that sets references for all the joints. Therefore, the postural task can be exploited to keep a well-behaved, stable kinematic configuration. In this work, we propose a generic locomotion framework which is able to generate different kinds of gaits, ranging from very dynamic gaits, such as the trot, to more static gaits like the crawl, without the need to plan the CoM trajectory. Consequently, the whole-body controller becomes planner-free and does not require estimation of the floating-base state, which is often prone to drift. The framework is composed of a priority-based whole-body controller that works in synergy with a walking pattern generator. We show the effectiveness of the framework by presenting simulations on different types of simulated terrains, including rough terrain, using different quadruped platforms.
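As a loose illustration of how a postural task can resolve redundancy in a priority-based scheme, here is a minimal kinematic sketch using null-space projection. The paper's actual controller is torque-level and coupled to a walking pattern generator, so treat this only as an outline of the general idea; the task definitions and gain are assumptions.

```python
import numpy as np

def two_priority_joint_velocities(J1, x1_dot_des, q, q_posture, k_posture=1.0):
    """Resolve redundancy with a primary Cartesian task and a secondary
    postural task projected into its null space (classic prioritized scheme).

    J1         : Jacobian of the primary task, shape (m, n)
    x1_dot_des : desired primary task velocity, shape (m,)
    q          : current joint positions, shape (n,)
    q_posture  : reference posture for all joints, shape (n,)
    """
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1     # null-space projector of task 1
    qd_primary = J1_pinv @ x1_dot_des           # track the primary task exactly
    qd_posture = k_posture * (q_posture - q)    # pull joints toward the posture
    # The postural correction only acts in directions that do not disturb task 1.
    return qd_primary + N1 @ qd_posture
```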

In-hand manipulation and grasp adjustment with dexterous robotic hands is a complex problem that not only requires highly coordinated finger movements but also deals with interaction variability. The control problem becomes even more complex when introducing tactile information into the feedback loop. Traditional approaches do not consider tactile feedback and attempt to solve the problem either by relying on complex models that are not always readily available or by constraining the problem in order to make it more tractable. In this paper, we propose a hierarchical control approach where a higher-level policy is learned through reinforcement learning, while low-level controllers ensure grip stability throughout the manipulation action. The low-level controllers are independent grip stabilization controllers based on tactile feedback. The independent controllers allow reinforcement learning approaches to explore the manipulation task's state-action space in a more structured manner. We show that this structure allows learning the unconstrained task with RL methods that cannot learn it in a non-hierarchical setting. The low-level controllers also provide an abstraction of the tactile sensor input, allowing transfer to real robot platforms. We show preliminary results of the transfer of policies trained in simulation to the real robot hand.
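Here is a minimal sketch of the hierarchical structure the abstract describes: a high-level policy issues grasp adjustments while independent low-level controllers react to tactile feedback. The proportional slip response, the pressure-based slip proxy, and the class interfaces are illustrative assumptions, not the paper's controllers.

```python
class GripStabilizer:
    """Low-level controller: adjusts one finger's force from tactile feedback
    (illustrative proportional response to a pressure drop, a crude slip proxy)."""
    def __init__(self, target_pressure, gain=0.5):
        self.target_pressure = target_pressure
        self.gain = gain

    def step(self, measured_pressure):
        # Command more force when measured contact pressure falls below target.
        return self.gain * (self.target_pressure - measured_pressure)

class HierarchicalGraspAgent:
    """High-level policy chooses grasp adjustments; each finger's stability is
    delegated to an independent low-level stabilizer, so the policy explores a
    smaller, more structured action space."""
    def __init__(self, policy, stabilizers):
        self.policy = policy            # e.g., a trained RL policy: observation -> action
        self.stabilizers = stabilizers  # one GripStabilizer per finger

    def act(self, observation, tactile_pressures):
        high_level_action = self.policy(observation)
        force_corrections = [s.step(p) for s, p in
                             zip(self.stabilizers, tactile_pressures)]
        return high_level_action, force_corrections
```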


The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.

At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?

Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.

But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have an ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:

Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent.

Hands: Wash them.

Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.

These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.

The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.

Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.

Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.


Cerebras Systems, which makes a specialized AI computer based on the largest chip ever made, is breaking out of its original role as a neural-network training powerhouse and turning its talents toward more traditional scientific computing. In a simulation having 500 million variables, the CS-1 trounced the 69th-most powerful supercomputer in the world. 

It also solved the problem—combustion in a coal-fired power plant—faster than the real-world flame it simulates. To top it off, Cerebras and its partners at the U.S. National Energy Technology Laboratory (NETL) claim, the CS-1 performed the feat faster than any present-day CPU- or GPU-based supercomputer could.

The research, which was presented this week at the supercomputing conference SC20, shows that Cerebras’ AI architecture “is not a one trick pony,” says Cerebras CEO Andrew Feldman.

Weather forecasting, design of airplane wings, predicting temperatures in a nuclear power plant, and many other complex problems are solved by simulating “the movement of fluids in space over time,” he says. The simulation divides the world up into a set of cubes, models the movement of fluid in those cubes, and determines the interactions between the cubes. There can be 1 million or more of these cubes and it can take 500,000 variables to describe what’s happening.

According to Feldman, solving that takes a computer system with lots of processor cores, tons of memory very close to the cores, oodles of bandwidth connecting the cores and the memory, and loads of bandwidth connecting the cores to each other. Conveniently, that’s what a neural-network training computer needs, too. The CS-1 contains a single piece of silicon with 400,000 cores, 18 gigabytes of memory, 9 petabytes per second of memory bandwidth, and 100 petabits per second of core-to-core bandwidth.

Scientists at NETL simulated combustion in a power plant using both a Cerebras CS-1 and the Joule supercomputer, which has 84,000 CPU cores and consumes 450 kilowatts. By comparison, Cerebras runs on about 20 kilowatts. Joule completed the calculation in 2.1 milliseconds. The CS-1 was more than 200 times faster, finishing in 6 microseconds.

This speed has two implications, according to Feldman. One is that there is no combination of CPUs or even of GPUs today that could beat the CS-1 on this problem. He backs this up by pointing to the nature of the simulation—it does not scale well. Just as you can have too many cooks in the kitchen, throwing too many cores at a problem can actually slow the calculation down. Joule’s speed peaked when using 16,384 of its 84,000 cores.

The limitation comes from connectivity between the cores and between cores and memory. Imagine the volume to be simulated as a 370 x 370 x 370 stack of cubes (136,900 vertical stacks with 370 layers). Cerebras maps the problem to the wafer-scale chip by assigning the array of vertical stacks to a corresponding array of processor cores. Because of that arrangement, communicating the effects of one cube on another is done by transferring data between neighboring cores, which is as fast as it gets. And while each layer of the stack is computed, the data representing the other layers reside inside the core’s memory, where they can be quickly accessed.
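A toy sketch of this kind of structured-grid computation, assuming a simple 7-point diffusion stencil with periodic boundaries, may help show why the mapping works: each cell only needs its face neighbors, so a core holding one vertical stack exchanges data only with the cores holding adjacent stacks, while its own layers stay in local memory. This is only an illustration of the data-access pattern, not how the CS-1 or the NETL solver is actually implemented.

```python
import numpy as np

def stencil_step(field, dt=0.1, alpha=0.1):
    """One explicit update of a scalar field on a 3D grid (toy diffusion stencil).

    field has shape (nx, ny, nz): an nx-by-ny array of vertical stacks, each
    with nz layers. Every cell reads only its six face neighbors, so mapping
    each (x, y) stack to one core keeps communication local to neighboring
    cores, with all nz layers of a stack held in that core's own memory.
    np.roll wraps around, i.e., periodic boundaries, for simplicity.
    """
    f = field
    lap = (
        np.roll(f, 1, 0) + np.roll(f, -1, 0) +   # neighbors in x: adjacent cores
        np.roll(f, 1, 1) + np.roll(f, -1, 1) +   # neighbors in y: adjacent cores
        np.roll(f, 1, 2) + np.roll(f, -1, 2) -   # neighbors in z: same core's memory
        6.0 * f
    )
    return f + dt * alpha * lap

# A small grid for illustration; the article's example corresponds to a
# 370 x 370 array of stacks with 370 layers each.
volume = np.random.rand(64, 64, 64)
volume = stencil_step(volume)
```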

(Cerebras takes advantage of a similar kind of geometric mapping when training neural networks. [See sidebar “The Software Side of Cerebras,” January 2020.])

And because the simulation completed faster than the real-world combustion event being simulated, the CS-1 could now have a new job on its hands—playing a role in control systems for complex machines.

Feldman reports that the CS-1 has made inroads in the purpose for which it was originally built as well. Drugmaker GlaxoSmithKline is a known customer, and the CS-1 is doing AI work at Argonne National Laboratory, Lawrence Livermore National Laboratory, and the Pittsburgh Supercomputing Center. He says there are several customers he cannot name in the military, intelligence, and heavy manufacturing industries.

A next-generation CS-1 is in the works, he says. The first generation used TSMC’s 16-nanometer process, but Cerebras already has a 7-nanometer version in hand with more than double the memory (40 GB) and more than double the number of AI processor cores (850,000).

We consider the problem of learning generalized first-order representations of concepts from a small number of examples. We augment an inductive logic programming learner with two novel contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and of generalization. Second, we leverage richer human inputs in the form of advice to improve the sample efficiency of learning. We prove that the proposed distance measure is semantically valid and use it to derive a PAC bound. Our experiments on diverse learning tasks demonstrate both the effectiveness and the efficiency of our approach.

316-P Sorter Induction

Robotic solutions can help your operation keep up with the demands of today’s changing e-commerce market. Honeywell Robotics is helping DCs evaluate solutions with powerful physics-based simulation tools to ensure that everything works together in an integrated ecosystem.

Put more than a quarter-century of automation expertise to work for you.


In 2017, a team at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., was in the process of prototyping some small autonomous robots capable of exploring caves and subsurface voids on the Moon, Mars, and Titan, Saturn’s largest moon. Our goal was the development of new technologies to help us solve one of humanity’s most significant questions: is there or has there been life beyond Earth?

The more we study the surfaces of planetary bodies in our solar system, the more we are compelled to voyage underground to seek answers to this question. Planetary subsurface voids are not only one of the most likely places to find both signs of life, past and present, but thanks to the shelter they provide, are also one of the main candidates for future human habitation. While we were working on various technologies for cave exploration at JPL, DARPA launched the latest in its series of Grand Challenges, the Subterranean Challenge, or SubT. Compared to earlier events that focused on on-road driving and humanoid robots in pre-defined disaster relief scenarios, the focus of SubT is the exploration of unknown and extreme underground environments. Even though SubT is about exploring such environments on Earth, we can use the competition as an analog to help us learn how to explore unknown environments on other planetary bodies. 

From the beginning, the JPL team forged partnerships with four other institutions offering complementary capabilities to collectively address a daunting list of technical challenges across multiple domains in this competition. In addition to JPL’s experience in deploying robust and resilient autonomous systems in extreme and uncertain environments, the team also included Caltech, with its specialization in mobility, MIT, with its expertise in large-scale mapping, and KAIST (South Korea) and LTU (Sweden), experts in fast drones in underground environments. The more far-flung partnerships were the result of existing research collaborations, a typical pattern in robotics research. We also partnered with a range of companies who supported us with robot platforms and sensors. The shared philosophy of building Collaborative SubTerranean Autonomous Robots led to the birth of Team CoSTAR.

Our approach to the SubT Challenge

The SubT Challenge is designed to encourage progress in four distinct robotics domains: mobility (how to get around), perception (how to make sense of the world), networking (how to get the data back to the server by the end of the mission), and autonomy (how to make decisions). The competition rules and structure reflect meaningful real-world scenarios in underground environments including tunnels, urban areas, and caves. 

To be successful in the SubT Challenge requires a holistic solution that balances coverage of each domain and a recognition of how each is intertwined with the others. For example, the robots need to be small enough to travel through narrow passages, but large enough to carry the sensors and computers necessary to make autonomous decisions while navigating perceptually degraded parts of the course, meaning dark, dusty, or smoke-filled passages. There’s also the challenge of power and energy: The robots need to be quick and energy-efficient to meet the endurance requirements and traverse multiple kilometers per hour in extreme environments. At the same time, autonomous onboard decision making and large-scale mapping are among the biggest power demands. Such challenges are amplified on flying vehicles, which require more dramatic trade-offs between flight time, size, and autonomous capabilities.

Our answer to this call for versatility is to present a team of AI-powered robots, comprising multiple heterogeneous platforms, to handle the various challenges of each course. To enable modularity, all our robots are equipped with the same modular autonomy software, called NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is specifically designed to address stochasticity and uncertainty in various elements of the mission, including sensing, environment, motion, system health, and communication, among others. With a mix of wheeled, legged, tracked, and flying vehicles, our team relies on a decision-making process that translates mission specifications, risk, and time into strategies that adaptively prescribe which robot should be dispatched to which part of the course and when.

Image: Team CoSTAR A team of robots with heterogeneous capabilities handles various challenges of an unknown extreme environment. Let the exploration begin!

The hallmark of Team CoSTAR’s first year leading up to the SubT Tunnel Circuit was a series of fast iterations through potential robot configurations. Every few weeks, we would make a major adjustment to our overall solution architecture based on what we learned in the previous iteration. These changes could potentially be as major as changing our overall concept of operations, e.g., how many robots in what formation should be part of the solution. This required high levels of adaptivity and agility in our solution development process and team culture.

Testing in representative environments offered us a crucial advantage in the competition. Our “local” test site (a four-hour drive from home) was an abandoned gold mine open to tourists called Eagle Mine. Its narrow passageways and dusty interior compelled us to invest in techniques for precise motion planning, dust mitigation, and flying in perceptually-degraded environments. For smaller-scale integration testing, we used what resources we had on the JPL campus. That meant setting up a series of inflatable tunnels in the Mars Yard, a dusty, rocky field used for rehearsing mobility sequences for Mars rovers. By joining multiple tunnels together, we could make test courses of varying lengths and widths, allowing us to make rapid progress, especially on our drones’ performance in dusty environments.

Photos: Team CoSTAR Team CoSTAR created a series of inflatable tunnels in the JPL Mars Yard to test certain specific autonomy capabilities without needing to travel to mines or caves.

Hybrid aerial-ground vehicles, platforms that roll or fly depending on obstacles in the local vicinity, were a major focus in the lead-up to our first test-run, the Systems Test and Integration eXercise (STIX), held by DARPA in Idaho Springs, Colorado, in April 2019. The robot that we developed, called Rollocopter, offers the potential for greater coverage of a given area, as it only flies when it needs to, such as to hop over a rubble pile. On flatter terrain, Rollocopter can travel in an energy-efficient ground-rolling mode. Rollocopter made its debut alongside a wheeled Husky robot, from Clearpath Robotics, at the STIX event, flying and driving in a sensing-degraded environment with high levels of dust.

Photos: Team CoSTAR The Rollocopter and Husky on their debut outing at the DARPA competition dry-run event called STIX.

Three months before the first scored SubT event, the Tunnel Circuit, DARPA revealed that the competition was to be held at a research coal mine in Pittsburgh, Pa. This mine appeared to have less dust, fewer obstacles, and wider passages than our test environments, but it was also more demanding due to its wet and muddy terrain and large, complex layout. This was a big surprise for the team, and we had to shift all kinds of things around as fast as we could. Fortunately, the muscle memory from our rapid development cycles prepared us to make a dramatic adjustment to our approach. Given the level of mud in a typical coal mine and the challenges it imposes on rolling, we decided (with heavy hearts) to shelve the Rollocopters and focus on wheeled platforms and traditional quadcopters. Even though our robots are just machines, we do build up a sort of relationship with them, coming to know their quirks as we coax them to life. In light of this, the emotional pain of shelving a project can be quite acute. Nevertheless, we recognize that decisions like this are in service of the team’s broader goals, and our hope is that we’ll be able to bring these hybrid aerial-ground vehicles back in a different environment.

We not only had to rework our robot fleet before the SubT Tunnel Circuit, but also had to reassess our test plan: With no coal mines on the West Coast, we instead began scouting for coal mines in West Virginia, which lies in the same geological tract as the competition site. On the advice of one of our interns studying at West Virginia University, we contacted a small tourist mine in Beckley, W.V., called the Beckley Exhibition Coal Mine. We cold-called the mine, explaining (through mild disbelief) that we were from NASA and wanted to cross the country to test our robots in their mine. To our surprise, the town had a longstanding association with NASA. During our reconnaissance visit, the manager of the mine told us the story of local figure Homer Hickam, whose book about becoming a NASA engineer from this humble coal mining town went on to inspire the film October Sky. We were heartily welcomed.

In the month before the Tunnel event, we shipped all our robots to Beckley, where we kept a bruising cadence of day and night testing. By day, we went to locations such as Arch Mine, an active coal mine whose tunnels were 900 feet underground, and the Mine Safety and Health Administration (MSHA) facility, which had indoor mock-ups of mine environments complete with smoke simulators to train rescue personnel. By night, we ran tests in the Beckley tourist mine after the day’s tours were complete. We were working long hours, which demanded both mental and physical endurance: Every excursion involved a ritualistic loading and unloading of dozens of equipment boxes and robots, allowing us to set up shop anywhere with a power outlet. The discipline of practicing our pit crew roles in these settings paid off as the Tunnel Circuit began.

Photos: Team CoSTAR Team CoSTAR and our coal miner partners 900 feet underground at Arch Mine, an active coal mine in West Virginia. The team was testing robots in extreme environments representative of what we expected to find in the Tunnel Circuit event.

As the Tunnel Circuit event began, we noticed on the DARPA live stream that Team Explorer (a partnership between Carnegie Mellon and Oregon State University) was using some kind of device on a tripod at the mine entrance. Googling it, we learned that this was called a total station, a precision instrument normally used for surveying. Impressed by this team’s innovative application of such a tool to an unusual task, we decided to entertain even the most outlandish proposals for improving our performance and began trying to find a total station of our own before our next scored run, which was only two days away. This was a great way to maximize localization accuracy along the first ~80 meters of featureless mine entry tunnel, while the robot is still visible from the starting location. There were no total station units to be found with a fast enough shipping time online, so we worked the phones to see if there was one we could borrow. Within the next two days, we managed to borrow a device, watch lots of YouTube videos to teach ourselves how to use it, and write and test code to integrate it into our operations workflow and localization algorithms. This was one of the fastest, most fun, and most last-minute efforts our team has undertaken in the last two years. Our performance at the Tunnel Circuit led to a second-place finish among some of the best robotics teams in the world.

Preparing for the Urban Circuit

Photo: Team CoSTAR CoSTAR member and total station operator, Ed Terry, in a candid moment on the DARPA live stream at the first outing of the total station, following two intense days of on-the-fly integration of this system.

The Tunnel Circuit had shown us how important it was to test in realistic environments, and fortunately, finding test locations for Urban Circuit-like environments was much easier. With a fully-integrated system and team structure in place, we entered the second year of the SubT Challenge with momentum, which was essential with only five months to adapt to yet another type of environment. We framed our preparation around monthly capability milestone demonstrations, a gated process which allowed us to triage the technologies we should focus on. We took the opportunity to improve the rigor of our techniques for Simultaneous Localization and Mapping (SLAM) and planning under uncertainty, and to upgrade our computing power.

One of the major additions for the Urban Circuit was the introduction of multi-level courses, where the ability to traverse stairways was a prerequisite for accessing large portions of the course. To handle this, we added tracked robots to our fleet. Thanks to the modularity of the NeBula software framework and highly transferable hardware, we were able to go up and down stairs with our tracked robot in four months.

A mere eight weeks before the competition, we struck a partnership with Boston Dynamics to use their Spot legged robot, which arrived at our lab just before Christmas. It seemed too daunting a task to integrate Spot into our team in such a short time. However, for the team members who volunteered to work on it over the Christmas break, the chance to be given the keys to such an advanced robot was a sort of Christmas present! To become part of the robot family, we first needed proof that Spot could integrate with the rest of our concept of operations, NeBula autonomy software, and NeBula autonomy hardware payload. Verifying these in the first two weeks, we were convinced that it was fit for the task. The team systematically added NeBula’s autonomy, perception, and communications modules over a matter of weeks. Boasting a payload capacity of up to 12 kg, Spot could carry the hardware for the high levels of autonomy and situational awareness that allowed us to fully add it to our robot fleet only two weeks prior to the competition.

Photos: Team CoSTAR Spots equipped with the NeBula autonomy and perception payload.

As we pushed Spot to traverse extreme terrains, we attached it to an elaborate rope system devised to save our precious robot when it fell. This was a precautionary measure to help us learn Spot’s limits with its unique payload configuration. After several weeks of refining our procedures for reliable stair climbing and building up confidence in the robot’s autonomy performance, we did away with the tether just one week before the competition.

Photo: Team CoSTAR A tethered Spot preparing for stair climbing trials.

Our robots go to school

Shortly after the Urban Circuit competition location was revealed to be in the small town of Elma, Washington, we emailed Elma High School asking if they were open to NASA testing its robots in their buildings. In a follow-up phone call, a teacher reported that they thought this original email was a scam! After providing some more context for our request, they enthusiastically agreed to host us. In this way, we were able to not only test multi-level autonomy in complex building layouts but also to give the high-school students an inside look at a NASA JPL test campaign.
 
Each evening, after the students had left, we shifted our equipment and robots from our base in a hotel conference center to the school, and set up our command post in the cafeteria. The warm, clean, and well-lit school was a luxury compared to earlier field test settings in mines deep underground. Each night, we sought to cover more of the school’s complex layout: hallways, classrooms, and multiple sets of stairs. These mock runs taught us as much about the behavior of the robot team as they did about the human team, especially as everyone found ways of dealing with sustained fatigue. We typically kept practicing in the school until well after midnight, thanks to the flexibility and generosity of the staff. At one stage, we were concerned that tethering our legged robots to the stairs would chip the paintwork, but the staff said, “Don’t worry about it, we need to repaint it sometime anyway!” We would periodically have visitors from the school, our hotel, and even local restaurants, whose encouragement kept our spirits high despite the long hours.

Photo: Team CoSTAR A birthday celebration for one of the CoSTAR team members during the competition week at our testing site.

Our first SubT Urban Circuit run was scheduled for the second day of the competition, which gave us a chance to watch the first day of the DARPA live stream. We noticed a down staircase right next to the starting gate of the Alpha course. One team member mentioned offhandedly that evening that we should try throwing a communications node into the staircase as a low-risk way of expanding our communications range. Minutes later, we started making phone calls to our hosts at Elma High School. The following morning at 7 a.m., one of the Elma school teachers arrived with a box full of basketballs and volleyballs. With these raw materials, we set about making a protective shield for the communications node to help it survive bouncing down several flights of stairs. One group started chipping away at the foam volleyballs while another set about taping together basketballs into a tetrahedron.

By 9 a.m., we had produced a hollowed-out foam volleyball with a communications node embedded in it, wrapped with a rope tether. For the first (and last) time in our team’s history, we assigned a job based on athletic ability. We chose well, and our node-in-a-ball thrower stood outside of the course and launched the node cleanly over the stairway bannister, allowing us to then gently lower it down on the tether. In the end, we didn’t need the extra range provided by the node-in-a-ball as our robots were able to come back into the communication range at the bottom of the staircase without any help. 

Image: Team CoSTAR Our node-in-a-ball in action: To expand our robot’s communications range, we threw a communications node embedded in a hollowed-out foam volleyball down a staircase.

Over a 60-minute scored run, only one human supervisor stationed outside the course can see information from within the course, and only if and when a communication link is established. In addition, a pit crew of up to nine people may assist in running checklists and deploying robots prior to the start of the mission. As soon as the robots enter the course itself, the team must trust that the hardware and autonomy software is sound while remaining ready to respond to inevitable anomalies. In this respect, the group starts to resemble an elite sports team, running a to-the-minute routine.

With holes in the floor, rubble piles, and water slicks, the Urban course put our robots through their paces. As the robots moved deeper into the course and out of communications range with the human supervisor, all we could do was rely on the robots’ autonomy. On the first day, the team was startled by repeated banging and crashing noises from within the course. With an unknown number of staircases, we feared the worst: That a wheeled rover had driven itself over the edge. To our relief, the sound was just from small wooden obstacles that the robot was casually driving over. 

Our days were structured around preparing for either test runs or scored runs, followed by a post-run debrief and then many hours poring over gigabytes of collected data and making bug fixes. We cycled through the pizza-subs-burgers trifecta multiple times, which spanned the culinary options available in Elma. Before beginning a run, we ran a “smoke test” of each robot in which we drove it 2 meters autonomously to verify that every part of the pipeline was still functional. We had checklists for everything, including a checklist item to pack the checklist itself and even to make sure the base station supervisor was in the car with us. These strict procedures helped guard against mistakes, which became more likely the longer we worked.

Every run revealed unexpected edge cases for mobility and autonomy that we had to rapidly address each night back at the hotel. We split the hotel conference center into a development zone and a testing zone. In the latter, we installed a test course configuration that would rotate on a daily basis, depending on what was the most pressing issue to solve. The terrain on the real course was extremely challenging, even for legged robots. In each of the first two scored runs, we lost one of our Spot robots to various negative obstacles such as holes in the ground. In a matter of hours after each run, the hardware team built reconfigurable barriers and a wooden stage with variable-size negative obstacles to test the resiliency of obstacle detection and avoidance strategies. After implementing these fixes, we transported the robots to the hotel to organize and prepare our fleet, which stoked the curiosity of fellow guests.

And the winner is…

Going into the final day of the competition, we were tied with Team Explorer. All of the parameter tuning, debugging, and exploration strategy refinements came together in time for the last round. Capping off a 1.5-year effort, we sent our robots into the SubT Urban Course for the final time. The wheeled Huskies led the way to build a communications backbone and explore the ground floor, with the legged Spots following behind to take the stairs to other levels. 

To score even a single point, a chain of events needs to happen flawlessly. Firstly, a robot needs to have covered enough space, traversing mobility-stressing and perceptually-degraded course elements, to reach an area that has an artifact. Multiple camera video streams as well as non-visual sensors are analyzed by the NeBula machine learning framework running on the robot to detect these artifacts. Once detected, an artifact’s location must be estimated to within 5 meters of the true location defined by DARPA with respect to a calibration target at the course entrance. Finally, the robot needs to bring itself back into communication range to report the artifact location within the 60-minute window of mission duration. A critical part of accomplishing the mission in this scenario is a decision-making module that can take into account the remaining mission time, predictive mission risk, as well as chances of losing an asset, re-establishing communication, and retrieving the data. It’s a delicate balance between spending time exploring to find as many artifacts as possible, and making sure that artifact locations can be returned to base before time runs out.
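
To make that balance concrete, here is a deliberately simplified, hypothetical sketch of an explore-versus-return decision rule. The rates and the point model are placeholders for illustration, not values from NeBula or Team CoSTAR.

    def should_return(remaining_min, est_return_min, unreported_artifacts,
                      new_artifacts_per_min=0.05, p_loss_per_min=0.01):
        """True if heading back to communication range now beats exploring longer.

        remaining_min        -- minutes left in the 60-minute mission
        est_return_min       -- estimated minutes needed to regain communication
        unreported_artifacts -- artifact reports the robot is currently carrying
        The two rates are illustrative placeholders only.
        """
        slack = remaining_min - est_return_min
        if slack <= 0:
            return True                                   # no margin left: head back now
        expected_gain = new_artifacts_per_min * slack     # points from exploring longer
        expected_risk = p_loss_per_min * slack * unreported_artifacts  # points at risk
        return expected_risk >= expected_gain

    # Example: 20 minutes left, 12 minutes to return, carrying 3 unreported artifacts.
    print(should_return(20, 12, 3))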

With only 40 report submissions allowed for 20 placed artifacts, our strategy was to collect as much information as possible before submitting artifact reports. This approach of maximizing the autonomous coverage of the space meant that a substantial amount of time could go by without hearing from a robot that may be out of communications range. This made for a tense dynamic as the clock ticked down. With only 15 minutes to go in the last run, we had scored just 2 points, which would have been our lowest score of the entire competition. It didn’t make sense: We had covered more ground than in all prior runs, but without the points to show for it. We were praying that the robots would prevail and come back into communication range before the clock ran out. Within the final 15 minutes, the robots started to show up one by one, delivering their locations of the artifacts they’d found. Submitting these incoming reports, our score increased rapidly to 9, turning the mood from despairing to jubilant, as we posted our best score yet.

Image: Team CoSTAR With only 15 minutes left in the mission, our autonomous robots returned to communication range to report the artifacts they had found, turning the mood from despairing to jubilant.

As the pit crew emerged from the course to meet the above-ground team, there was a flurry of breathless communication, and the confusion allowed for one small prank. One of the pit crew members took our team lead aside and successfully convinced him and the above-ground team, for a minute or two prior to the formal announcement, that we had only scored two points! At the same time, we were being ushered over to a pop-up TV studio where we gathered before the camera for the final scores to be revealed. The scores flashed up on the screen showing us with 9 points and in first place. The surprised faces of our pranked team members were priceless! For the entire team, the exhaustion, frustration, and dedication that we had given to the task dissolved in a moment of elation.

Image: Team CoSTAR The team reacts to the final scores being revealed.

While there is a healthy spirit of competition among the teams, we recognize that this challenge remains an unsolved problem and that we as a robotics community are collectively redefining the state of the art. In addition to the scored runs, we appreciated the opportunity to learn from the extraordinary variety of solutions on display from other teams. Both the formal knowledge exchange and the common experience of taking on the SubT Urban course enhanced the feeling of shared advancement.

Photos: Team CoSTAR Left: CoSTAR T-Rex, played by our field test lead, John Mayo, meets the team right after the final scored run; right: DARPA award ceremony.

Post-competition and COVID-19

After the Urban competition, the COVID-19 pandemic set in and JPL shifted part of its focus and resources toward pandemic-related research, producing the VITAL ventilator in 37 days. Our robot fleet served us faithfully during this competition and earned some time to recuperate (with proper PPE), but it will soon be pressed into service once more. We are in the process of equipping the robots with UV lights to sterilize hospitals and the JPL campus, which reinforces the growing role robots are playing in applications where no human should venture.

Photo: Team CoSTAR CoSTAR robots recuperating with proper PPE.

While the DARPA Cave Circuit in-person competition is another victim of COVID-19 restrictions, the team is continuing to prepare for this new environment. Supported by NASA’s Science Mission Directorate (SMD), the team is focusing on searching for biological signs and resources in Martian-analog lava tubes in Northern California. On a parallel track, our team is leveraging these capabilities to develop mission concepts and autonomy solutions for lunar exploration in support of the vision of NASA’s Artemis program. This will in turn help refine our traversability, navigation, and autonomy solutions for the tough environments to be found in the final round of the DARPA Subterranean Challenge in late 2021.

Image: Team CoSTAR A robot in Martian-analog extreme terrain and lava tubes. Tests conducted in Lava Beds National Monument, Tulelake, Calif.

Edward Terry is a robotics engineer and CoSTAR team member. He studied aeronautical engineering at the University of Sydney and completed a master of science in robotic systems development at Carnegie Mellon University. In Team CoSTAR, his focus is on object detection and localization under perceptually-degraded conditions.

Fadhil Ginting is a robotics visiting student researcher at NASA’s Jet Propulsion Laboratory. He completed his master’s in robotics, systems, and control at ETH Zurich. In Team CoSTAR, his focus is on learning and decision making for autonomous multi-robot systems.

Ali Agha is a principal investigator and research technologist at NASA’s Jet Propulsion Laboratory. His research centers on autonomy for robotic systems and spacecraft, with a dual focus on planetary exploration and terrestrial applications. At JPL, he leads Team CoSTAR. Previously, he was with Qualcomm Research, leading the perception efforts for autonomous drones and robots. Prior to that, Dr. Agha was a postdoctoral researcher at MIT. Dr. Agha was named a NASA NIAC fellow in 2018.

While we’re super bummed that COVID forced the cancellation of the Systems Track event of the DARPA Subterranean Challenge Cave Circuit, the good news is that the Virtual Track (being virtual) is 100 percent coronavirus-free, and the final event is taking place tomorrow, November 17, right on schedule. And honestly, it’s about time the Virtual Track gets the attention that it deserves—we’re as guilty as anyone of focusing more heavily on the Systems Track, being full of real robots that alternate between amazingly talented and amazingly klutzy, but the Virtual Track is just as compelling, in a very different way.

DARPA has scheduled the Cave Circuit Virtual Track live event for Tuesday starting at 2 p.m. ET, and we’ve got all the details.

If you’ve been mostly following the Systems Track up until this point, you should definitely check out the article that the Urban Circuit Virtual Track winning team, Michigan Tech’s Team BARCS, wrote for us last week. It’s a great way of getting up to speed on what makes the virtual SubT competition so important, and so exciting.

All the Virtual Track teams that submitted their code have absolutely no idea how well their virtual robots did, and they’ll be watching their runs at the same time as we are.

The really amazing thing about the Virtual Track is that unlike the Systems Track, where a human in the loop can send commands to any robot in communications range, the virtual teams of robots operate fully autonomously. In fact, Virtual Track teams sent their code in weeks ago, and DARPA has been running the competition itself in secret, but on Tuesday, everyone will find out how they did. Here’s the announcement:

On Tuesday, November 17 at 2PM EST, the Defense Advanced Research Projects Agency (DARPA) will webcast its Subterranean (SubT) Challenge Cave Circuit Virtual Competition. Viewers can follow virtual versions of real autonomous robots, driven by software and algorithms created by 16 competitors, as they search a variety of virtual cave environments for target artifacts. The SubT Challenge is helping DARPA develop new tools for time-sensitive combat operations or disaster response scenarios. The winners of this virtual showcase will be announced at the end of the webcast, and $500,000 worth of prizes is at stake.

What we’re really looking forward to on Tuesday is the expert commentary. During past Systems Track events, live streaming video was available of the runs, but both the teams and the DARPA folks were far too busy running the actual competition to devote much time to commentating. Since the virtual competition itself has already been completed, we’ll be getting a sort of highlights show on Tuesday, with commentary from DARPA program manager Tim Chung, virtual competition lead Angela Maio, along with Camryn Irwin, who did a fantastic job hosting the Urban Circuit livestream earlier this year. We’ll be seeing competition run-throughs from a variety of teams, although not every run and not in real-time of course, since the event is only a couple hours long. But there will be a lot more detail than we’ve ever had before on technology and strategy directly from DARPA.

All the Virtual Track teams that submitted their code have absolutely no idea how well their virtual robots did, and they’ll be watching their runs at the same time as we are. I’ll be on Twitter for the entire event (@BotJunkie) to provide some vaguely informed and hopefully amusing commentary, and we’re hoping that some of the competing teams will be on Twitter as well to let us know how happy (or sad) they are with how their robots are performing. If you have questions, let me know, and we’ll do our best to get in touch with the teams directly, or go through DARPA during a post-event press briefing scheduled for Wednesday.

[ DARPA SubT Virtual Cave Circuit Livestream ]


Human intention detection is fundamental to the control of robotic devices that assist humans according to their needs. This paper presents a novel approach for detecting hand motion intention, i.e., rest, open, close, and grasp, and for estimating grasping force using force myography (FMG). The output is further used to control a soft hand exoskeleton called the SEM Glove. In this method, two sensor bands constructed from force sensing resistor (FSR) sensors are used to detect hand motion states and muscle activities. With both bands placed on an arm, the sensors measure the normal forces caused by muscle contraction/relaxation. The sensor data is then processed, and hand motions are identified through a threshold-based classification method. The developed method has been tested on human subjects in object-grasping tasks. The results show that the method can detect hand motions accurately and provide assistance according to the task requirements.
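
For a concrete picture of what a threshold-based FMG classifier can look like, here is a minimal sketch; the channel counts, thresholds, and force scaling below are illustrative assumptions, not the paper's calibrated values.

    import numpy as np

    REST_T, OPEN_T, GRASP_T = 0.05, 0.20, 0.45   # assumed normalized activation thresholds

    def classify_hand_state(fsr_band_a, fsr_band_b):
        """Map two FSR band readings (arrays of normalized forces, 0..1) to a hand state."""
        activation = 0.5 * (np.mean(fsr_band_a) + np.mean(fsr_band_b))
        if activation < REST_T:
            return "rest", 0.0
        if activation < OPEN_T:
            return "open", 0.0
        if activation < GRASP_T:
            return "close", 0.0
        # Above the grasp threshold, treat the excess activation as a crude force estimate.
        grasp_force = 10.0 * (activation - GRASP_T)       # arbitrary scaling to newtons
        return "grasp", grasp_force

    state, force = classify_hand_state(np.array([0.60, 0.55, 0.70]), np.array([0.50, 0.65]))
    print(state, round(force, 2))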

Electro-ribbon actuators are lightweight, flexible, high-performance actuators for next generation soft robotics. When electrically charged, electrostatic forces cause the electrode ribbons to progressively zip together through a process called dielectrophoretic liquid zipping (DLZ), delivering contractions of more than 99% of their length. Electro-ribbon actuators exhibit pull-in instability, and this phenomenon makes them challenging to control: below the pull-in voltage threshold, actuator contraction is small, while above this threshold, increasing electrostatic forces cause the actuator to completely contract, providing a narrow contraction range for feedforward control. We show that application of a time-varying voltage profile that starts above pull-in threshold, but subsequently reduces, allows access to intermediate steady-states not accessible using traditional feed-forward control. A modified proportional-integral closed-loop controller is proposed (Boost-PI), which incorporates a variable boost voltage to temporarily elevate actuation close to, but not exceeding, the pull-in voltage threshold. This primes the actuator for zipping and drastically reduces rise time compared with a traditional PI controller. A multi-objective parameter-space approach was implemented to choose appropriate controller gains by assessing the metrics of rise time, overshoot, steady-state error, and settle time. This proposed control method addresses a key limitation of the electro-ribbon actuators, allowing the actuator to perform staircase and oscillatory control tasks. This significantly increases the range of applications which can exploit this new DLZ actuation technology.
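
The sketch below illustrates the Boost-PI idea in a few lines: a standard PI command plus a decaying boost term, with the total clamped just below the pull-in voltage so the actuator is primed without being driven past the threshold. The gains, boost schedule, and toy actuator model are assumptions for illustration, not the controller reported in the paper.

    def boost_pi_step(error, integral, boost, dt,
                      kp=50.0, ki=20.0, v_pullin=5000.0, boost_decay=0.9, margin=0.98):
        """One control update; returns (voltage command, new integral, new boost)."""
        integral += error * dt
        v_cmd = kp * error + ki * integral + boost
        v_cmd = min(max(v_cmd, 0.0), margin * v_pullin)   # stay below the pull-in threshold
        boost *= boost_decay                              # boost fades as the ribbons zip
        return v_cmd, integral, boost

    # Usage: a step command with a large initial boost to cut the rise time.
    integral, boost = 0.0, 4000.0
    setpoint, position = 0.5, 0.0                         # normalized contraction
    for _ in range(5):
        v, integral, boost = boost_pi_step(setpoint - position, integral, boost, dt=0.01)
        position += 1e-5 * v                              # crude stand-in for the actuator
        print(round(v, 1), round(position, 3))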

Extracting significant information from geometrically distorted or transformed images is a mainstream procedure in image processing. It becomes difficult to retrieve the relevant region when an image is distorted by some geometric deformation. Hu's moments are helpful in extracting information from such distorted images due to their unique invariance property. This work focuses on early detection and gradation of knee osteoarthritis, utilizing Hu's invariant moments to understand the geometric transformation of the cartilage region in knee X-ray images. The seven invariant moments are computed for rotated versions of the test image. The results are competitive and promising, and have been validated by orthopedic surgeons and rheumatologists.
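
As a short sketch of the invariance property the work relies on, the snippet below computes the seven Hu moments (via OpenCV) for an image and for a rotated copy; the two sets of values should come out nearly identical. The input file name and the Otsu binarization step are placeholders, not the authors' preprocessing pipeline.

    import cv2
    import numpy as np

    def hu_moments(gray):
        """Log-scaled Hu moments of a grayscale image, binarized for numerical stability."""
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        hu = cv2.HuMoments(cv2.moments(binary)).flatten()
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # conventional log scaling

    image = cv2.imread("knee_xray_roi.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
    center = (image.shape[1] // 2, image.shape[0] // 2)
    rotation = cv2.getRotationMatrix2D(center, 30, 1.0)             # rotate by 30 degrees
    rotated = cv2.warpAffine(image, rotation, (image.shape[1], image.shape[0]))

    print("original:", hu_moments(image))
    print("rotated: ", hu_moments(rotated))   # values should be nearly identical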

Several lower-limb exoskeletons enable wheelchair users to overcome obstacles that would otherwise impair their daily activities, such as going upstairs. Still, as most currently commercialized exoskeletons require the use of crutches, they prevent the user from interacting efficiently with the environment. In a previous study, a bio-inspired controller was developed to allow dynamic standing balance for such exoskeletons. It was, however, only tested on the device without any user. This work describes and evaluates a new controller that extends the previous one with online model compensation and with the contribution of the hip joint against strong perturbations. In addition, both controllers are tested with the exoskeleton TWIICE One, worn by a pilot with a complete spinal cord injury. Their performances are compared by means of three tasks: standing quietly, resisting external perturbations, and lifting barbells of increasing weight. The new controller exhibits similar performance for quiet standing, a longer recovery time after dynamic perturbations, but a better ability to sustain prolonged perturbations and a higher weightlifting capability.
