Feed aggregator

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online Conference]
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today's videos.

What a lovely Christmas video from Norlab.

[ Norlab ]

Thanks Francois!

MIT Mini-Cheetahs are looking for a new home. Our new cheetah cubs, born at NAVER LABS, are for the MIT Mini-Cheetah workshop. MIT professor Sangbae Kim and his research team are supporting joint research by distributing Mini-Cheetahs to researchers all around the world.

[ NAVER LABS ]

For several years, NVIDIA’s research teams have been working to leverage GPU technology to accelerate reinforcement learning (RL). As a result of this promising research, NVIDIA is pleased to announce a preview release of Isaac Gym – NVIDIA’s physics simulation environment for reinforcement learning research. RL-based training is now more accessible as tasks that once required thousands of CPU cores can now instead be trained using a single GPU.
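
The core idea is that many environments are simulated in parallel as batched tensor operations on the GPU instead of one process per CPU core. Below is a minimal illustrative sketch of that pattern using PyTorch and a toy point-mass environment; it is not the Isaac Gym API, and all names and dynamics are assumptions for illustration.

```python
# Illustrative sketch only (not the Isaac Gym API): thousands of toy
# point-mass environments stepped in parallel as batched GPU tensor ops.
import torch

num_envs, dt = 4096, 0.01
device = "cuda" if torch.cuda.is_available() else "cpu"

pos = torch.zeros(num_envs, 2, device=device)   # per-environment state
vel = torch.zeros(num_envs, 2, device=device)

def step(action):
    """Advance all environments one timestep with a single batched update."""
    global pos, vel
    vel = vel + action * dt                      # integrate acceleration
    pos = pos + vel * dt                         # integrate velocity
    reward = -pos.norm(dim=1)                    # reward staying near the origin
    return pos.clone(), reward

for _ in range(100):                             # random-policy rollout
    obs, reward = step(torch.randn(num_envs, 2, device=device))
print(reward.mean().item())
```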

[ NVIDIA ]

At SINTEF in Norway, they're working on ways of using robots to keep tabs on giant floating cages of tasty fish:

One of the tricky things about operating robots in an environment like this is localization, so SINTEF is working on a solution that uses beacons:

While that video shows a lot of simulation (because otherwise there are tons of fish in the way), we're told that the autonomous navigation has been successfully demonstrated with an ROV in "a full scale fish farm with up to 200.000 salmon swimming around the robot."

[ SINTEF ]

Thanks Eleni!

We’ve been getting ready for the snow in the most BG way possible. Wishing all of you a happy and healthy holiday season.

[ Berkshire Grey ]

ANYbotics doesn’t care what time of the year it is, so Happy Easter!

And here's a little bit about why ANYmal C looks the way it does.

[ ANYbotics ]

Robert "Buz" Chmielewski is using two modular prosthetic limbs developed by APL to feed himself dessert. Smart software puts his utensils in roughly the right spot, and then Buz uses his brain signals to cut the food with knife and fork. Once he is done cutting, the software then brings the food near his mouth, where he again uses brain signals to bring the food the last several inches to his mouth so that he can eat it.

[ JHUAPL ]

Introducing VESPER: a new military-grade small drone that is designed, sourced and built in the United States. Vesper offers a 50-minute flight time, with speeds up to 45 mph (72 kph) and a total flight range of 25 miles (45 km). The magnetic snap-together architecture enables extremely fast transitions: the battery, props and rotor set can each be swapped in <5 seconds.

[ Vantage Robotics ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale Faboratory ]

Get a preview of the cave environments that are being used to inspire the Final Event competition course of the DARPA Subterranean Challenge. In the Final Event, teams will deploy their robots to rapidly map, navigate, and search in competition courses that combine elements of man-made tunnel systems, urban underground, and natural cave networks!

The reason to pay attention to this particular video is that it gives us some idea of what DARPA means when they say "cave."

[ SubT ]

MQ-25 takes another step toward unmanned aerial refueling for the U.S. Navy. The MQ-25 test asset has flown for the first time with an aerial refueling pod containing the hose and basket that will make it an aerial refueler.

[ Boeing ]

We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains.
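
For readers who want a concrete picture of the interface being described, here is a hypothetical sketch of a policy that maps proprioceptive and exteroceptive features plus a desired base velocity into per-leg footstep targets. The network sizes, feature dimensions, and names are assumptions, not the authors' actual architecture.

```python
# Hypothetical sketch of the described interface: an RL policy maps
# proprioceptive + exteroceptive features and a desired base velocity
# command into footstep targets for four legs. Shapes are assumptions.
import torch
import torch.nn as nn

class FootstepPolicy(nn.Module):
    def __init__(self, proprio_dim=48, extero_dim=52, cmd_dim=3, n_legs=4):
        super().__init__()
        self.n_legs = n_legs
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + extero_dim + cmd_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, n_legs * 3),          # (x, y, z) target per leg
        )

    def forward(self, proprio, extero, base_vel_cmd):
        x = torch.cat([proprio, extero, base_vel_cmd], dim=-1)
        return self.net(x).view(-1, self.n_legs, 3)   # footstep plan per leg

policy = FootstepPolicy()
plan = policy(torch.randn(1, 48), torch.randn(1, 52),
              torch.tensor([[0.5, 0.0, 0.0]]))        # walk forward at 0.5 m/s
print(plan.shape)  # torch.Size([1, 4, 3])
```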

[ DRS ]

The video shows the results of the German research project RoPHa. Within the project, the partners developed technologies for two application scenarios with the service robot Care-O-bot 4 in order to support people in need of help when eating.

[ RoPHa Project ]

Thanks Jenny!

This looks like it would be fun, if you are a crazy person.

[ Team BlackSheep ]

Robot accuracy is the limiting factor in many industrial applications. Manufacturers often only specify the pose repeatability values of their robotic systems. Fraunhofer IPA has set up a testing environment for automated measuring of accuracy performance criteria of industrial robots. Following the procedures defined in norm ISO 9283 allows generating reliable and repeatable results. They can be the basis for targeted measures increasing the robotic system’s accuracy.
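
As a rough illustration of the kind of statistic involved, here is a hedged sketch of the pose-repeatability measure commonly associated with ISO 9283: the mean distance of repeatedly attained positions from their barycenter plus three standard deviations. The data below is synthetic, and this is not Fraunhofer IPA's actual measurement code.

```python
# Hedged sketch of the ISO 9283-style pose repeatability statistic:
# RP = mean distance from the barycenter + 3 standard deviations.
import numpy as np

def pose_repeatability(positions):
    """positions: (n, 3) array of attained TCP positions for one commanded pose."""
    barycenter = positions.mean(axis=0)
    l = np.linalg.norm(positions - barycenter, axis=1)   # per-cycle deviations
    return l.mean() + 3.0 * l.std(ddof=1)                # RP = l_bar + 3 * S_l

measured = np.array([[500.02,  0.01, 299.98],            # mm, synthetic data
                     [499.97, -0.02, 300.03],
                     [500.00,  0.03, 300.01]])
print(f"RP = {pose_repeatability(measured):.3f} mm")
```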

[ Fraunhofer ]

Thanks Jenny!

The IEEE Women in Engineering - Robotics and Automation Society (WIE-RAS) hosted an online panel on best practices for teaching robotics. The diverse panel boasts experts in robotics education from a variety of disciplines, institutions, and areas of expertise.

[ IEEE RAS ]

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as "smart" microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.

[ Northwestern ]

Tech United Eindhoven's soccer robots now have eight wheels instead of four wheels, making them twelve times better, if my math is right.

[ TU Eindhoven ]

This morning just after 3 a.m. ET, Boston Dynamics sent out a media release confirming that Hyundai Motor Group has acquired a controlling interest in the company, in a deal that values Boston Dynamics at US $1.1 billion:

Under the agreement, Hyundai Motor Group will hold an approximately 80 percent stake in Boston Dynamics and SoftBank, through one of its affiliates, will retain an approximately 20 percent stake in Boston Dynamics after the closing of the transaction.

The release is very long, but does have some interesting bits—we’ll go through them, and talk about what this might mean for both Boston Dynamics and Hyundai.

We’ve asked Boston Dynamics for comment, but they’ve been unusually quiet for the last few days (I wonder why!). So at this point just keep in mind that the only things we know for sure are the ones in the release. If (when?) we hear anything from either Boston Dynamics or Hyundai, we’ll update this post.

The first thing to be clear on is that the acquisition is split between Hyundai Motor Group’s affiliates, including Hyundai Motor, Hyundai Mobis, and Hyundai Glovis. Hyundai Motor makes cars, Hyundai Mobis makes car parts and seems to be doing some autonomous stuff as well, and Hyundai Glovis does logistics. There are many other groups that share the Hyundai name, but they’re separate entities, at least on paper. For example, there’s a Hyundai Robotics, but that’s part of Hyundai Heavy Industries, a different company than Hyundai Motor Group. But for this article, when we say “Hyundai,” we’re talking about Hyundai Motor Group.

What’s in it for Hyundai?

Let’s get into the press release, which is filled with press release-y terms like “synergies” and “working together”—you can view the whole thing here—but still has some parts that convey useful info.

By establishing a leading presence in the field of robotics, the acquisition will mark another major step for Hyundai Motor Group toward its strategic transformation into a Smart Mobility Solution Provider. To propel this transformation, Hyundai Motor Group has invested substantially in development of future technologies, including in fields such as autonomous driving technology, connectivity, eco-friendly vehicles, smart factories, advanced materials, artificial intelligence (AI), and robots.

If Hyundai wants to be a “Smart Mobility Solution Provider” with a focus on vehicles, it really seems like there’s a whole bunch of other ways they could have spent most of a billion dollars that would get them there quicker. Will Boston Dynamics’ expertise help them develop autonomous driving technology? Sure, I guess, but why not just buy an autonomous car startup instead? Boston Dynamics is more about “robots,” which happens to be dead last on the list above.

There was some speculation a couple of weeks ago that Hyundai was going to try and leverage Boston Dynamics to make a real version of this hybrid wheeled/legged concept car, so if that’s what Hyundai means by “Smart Mobility Solution Provider,” then I suppose the Boston Dynamics acquisition makes more sense. Still, I think that’s unlikely, because it’s just a concept car, after all.

In addition to “smart mobility,” which seems like a longer-term goal for Hyundai, the company also mentions other, more immediate benefits from the acquisition: 

Advanced robotics offer opportunities for rapid growth with the potential to positively impact society in multiple ways. Boston Dynamics is the established leader in developing agile, mobile robots that have been successfully integrated into various business operations. The deal is also expected to allow Hyundai Motor Group and Boston Dynamics to leverage each other’s respective strengths in manufacturing, logistics, construction and automation.

“Successfully integrated” might be a little optimistic here. They’re talking about Spot, of course, but I think the best you could say at this point is that Spot is in the middle of some promising pilot projects. Whether it’ll be successfully integrated in the sense that it’ll have long-term commercial usefulness and value remains to be seen. I’m optimistic about this as well, but Spot is definitely not there yet.

What does probably hold a lot of value for Hyundai is getting Spot, Pick, and perhaps even Handle into that “manufacturing, logistics, construction” stuff. This is the bread and butter for robots right now, and Boston Dynamics has plenty of valuable technology to offer in those spaces.

Photo: Bob O’Connor
Boston Dynamics is selling Spot for $74,500, shipping included.

Betting on Spot and Pick

With Boston Dynamics founder Marc Raibert’s transition to Chairman of the company, the CEO position is now occupied by Robert Playter, the long-time VP of engineering and more recently COO at Boston Dynamics. Here’s his statement from the release:

“Boston Dynamics’ commercial business has grown rapidly as we’ve brought to market the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility. We and Hyundai share a view of the transformational power of mobility and look forward to working together to accelerate our plans to enable the world with cutting edge automation, and to continue to solve the world’s hardest robotics challenges for our customers.”

Whether Spot is in fact “the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility” on the market is perhaps something that could be argued against, although I won’t. Whether or not it was the first robot that can do these kinds of things, it’s definitely not the only robot that does these kinds of things, and going forward, it’s going to be increasingly challenging for Spot to maintain its uniqueness.

For a long time, Boston Dynamics totally owned the quadruped space. Now, they’re one company among many—ANYbotics and Unitree are just two examples of other quadrupeds that are being successfully commercialized. Spot is certainly very capable and easy to use, and we shouldn’t underestimate the effort required to create a robot as complex as Spot that can be commercially used and supported. But it’s not clear how long they’ll maintain that advantage, with much more affordable platforms coming out of Asia, and other companies offering some unique new capabilities.

Photo: Boston Dynamics
Boston Dynamics’ Handle is an all-electric robot featuring a leg-wheel hybrid mobility system, a manipulator arm with a vacuum gripper, and a counterbalancing tail.

Boston Dynamics’ picking system, which stemmed from their 2019 acquisition of Kinema Systems, faces the same kinds of challenges—it’s very good, but it’s not totally unique.

Boston Dynamics produces highly capable mobile robots with advanced mobility, dexterity and intelligence, enabling automation in difficult, dangerous, or unstructured environments. The company launched sales of its first commercial robot, Spot in June of 2020 and has since sold hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining. Boston Dynamics plans to expand the Spot product line early next year with an enterprise version of the robot with greater levels of autonomy and remote inspection capabilities, and the release of a robotic arm, which will be a breakthrough in mobile manipulation.

Boston Dynamics is also entering the logistics automation market with the industry leading Pick, a computer vision-based depalletizing solution, and will introduce a mobile robot for warehouses in 2021.

Huh. We’ll be trying to figure out what “greater levels of autonomy” means, as well as whether the “mobile robot for warehouses” is Handle, or something more like an autonomous mobile robot (AMR) platform. I’d honestly be surprised if Handle was ready for work outside of Boston Dynamics next year, and it’s hard to imagine how Boston Dynamics could leverage their expertise into the AMR space with something that wouldn’t just seem… dull, compared to what they usually do. I hope to be surprised, though!

A new deep-pocketed benefactor

Hyundai Motor Group’s decision to acquire Boston Dynamics is based on its growth potential and wide range of capabilities.

“Wide range of capabilities” we get, but that other phrase, “growth potential,” has a heck of a lot wrapped up in it. At the moment, Boston Dynamics is nowhere near profitable, as far as we know. SoftBank acquired Boston Dynamics in 2017 for between one hundred and two hundred million, and over the last three years they’ve poured hundreds of millions more into Boston Dynamics.

Hyundai’s 80 percent stake just means that they’ll need to take over the majority of that support, and perhaps even increase it if Boston Dynamics’ growth is one of their primary goals. Hyundai can’t have a reasonable expectation that Boston Dynamics will be profitable any time soon; they’re selling Spots now, but it’s an open question whether Spot will manage to find a scalable niche in which it’ll be useful in the sort of volume that will make it a sustainable commercial success. And even if it does become a success, it seems unlikely that Spot by itself will make a significant dent in Boston Dynamics’ burn rate anytime soon. Boston Dynamics will have more products of course, but it’s going to take a while, and Hyundai will need to support them in the interim.

Depending on whether Hyundai views Boston Dynamics as a company that does research or a company that makes robots that are useful and profitable, it may be difficult for Boston Dynamics to justify the cost to develop the next Atlas, when the current one still seems so far from commercialization.

It’s become clear that to sustain itself, Boston Dynamics needs a benefactor with very deep pockets and a long time horizon. Initially, Boston Dynamics’ business model (or whatever you want to call it) was to do bespoke projects for defense-ish folks like DARPA, but from what we understand Boston Dynamics stopped that sort of work after Google acquired them back in 2013. From one perspective, that government funding did exactly what it was supposed to do, which was to fund the development of legged robots through low TRLs (technology readiness levels) to the point where they could start to explore commercialization.

The question now, though, is whether Hyundai is willing to let Boston Dynamics undertake the kinds of low-TRL, high-risk projects that led from BigDog to LS3 to Spot, and from PETMAN to DRC Atlas to the current Atlas. So will Hyundai be cool about the whole thing and be the sort of benefactor that’s willing to give Boston Dynamics the resources that they need to keep doing what they’re doing, without having to answer too many awkward questions about things like practicality and profitability? Hyundai can certainly afford to do this, but so could SoftBank, and Google—the question is whether Hyundai will want to, over the length of time that’s required for the development of the kind of ultra-sophisticated robotics hardware that Boston Dynamics specializes in.

To put it another way: Depending on whether Hyundai’s perspective on Boston Dynamics is that of a company that does research or a company that makes robots that are useful and profitable, it may be difficult for Boston Dynamics to justify the cost to develop the next Atlas, when the current one still seems so far from commercialization.

Google, SoftBank, now Hyundai

Boston Dynamics possesses multiple key technologies for high-performance robots equipped with perception, navigation, and intelligence. 

Hyundai Motor Group’s AI and Human Robot Interaction (HRI) expertise is highly synergistic with Boston Dynamics’s 3D vision, manipulation, and bipedal/quadruped expertise.

As it turns out, Hyundai Motors does have its own robotics lab, called Hyundai Motors Robotics Lab. Their website is not all that great, but here’s a video from last year:

I’m not entirely clear on what Hyundai means by “synergistic” when they talk about their robotics lab and Boston Dynamics, but it’s a little bit concerning. Usually, when a big company buys a little company that specializes in something that the big company is interested in, the idea is that the little company, to some extent, will be absorbed into the big company to give them some expertise in that area. Historically, however, Boston Dynamics has been highly resistant to this, maintaining its post-acquisition independence and appearing to be very reluctant to do anything besides what it wants to do, at whatever pace it wants to do it, and as much by itself as possible.

From what we understand, Boston Dynamics didn’t integrate particularly well with Google’s robotics push in 2013, and we haven’t seen much evidence that SoftBank’s experience was much different. The most direct benefit to SoftBank (or at least the most visible one) was the addition of a fleet of Spot robots to the SoftBank Hawks baseball team cheerleading squad, along with a single (that we know about) choreographed gymnastics routine from an Atlas robot that was only shown on video.

And honestly, if you were a big manufacturing company with a bunch of money and you wanted to build up your own robotics program quickly, you’d probably have much better luck picking up some smaller robotics companies who were a bit less individualistic and would probably be more amenable to integration and would cost way less than a billion dollars-ish. And if integration is ultimately Hyundai’s goal, we’ll be very sad, because it’ll likely signal the end of Boston Dynamics doing the unfettered crazy stuff that we’ve grown to love.

Photo: Bob O’Connor
Possibly the most agile humanoid robot ever built, Atlas can run, climb, jump over obstacles, and even get up after a fall.

Boston Dynamics contemplates its future

The release ends by saying that the transaction is “subject to regulatory approvals and other customary closing conditions” and “is expected to close by June of 2021.” Again, you can read the whole thing here.

My initial reaction is that, despite the “synergies” described by Hyundai, it’s certainly not immediately obvious why the company wants to own 80 percent of Boston Dynamics. I’d also like a better understanding of how they arrived at the $1.1 billion valuation. I’m not saying this because I don’t believe in what Boston Dynamics is doing or in the inherent value of the company, because I absolutely do, albeit perhaps in a slightly less tangible sense. But when you start tossing around numbers like these, a big pile of expectations inevitably comes along with them. I hope that Boston Dynamics is unique enough that the kinds of rules that normally apply to robotics companies (or companies in general) can be set aside, at least somewhat, but I also worry that what made Boston Dynamics great was the explicit funding for the kinds of radical ideas that eventually resulted in robots like Atlas and Spot.

Can Hyundai continue giving Boston Dynamics the support and freedom that they need to keep doing the kinds of things that have made them legendary? I certainly hope so.

As much as we like to go on about bio-inspired robots (and we do go on about them), there are some things that nature hasn’t quite figured out yet. Wheels are almost one of those things—while some animals do roll, and have inspired robots based on that rolling, true wheeled motion isn’t found in nature above the microscopic level. When humans figured out how useful wheels were, we (among other things) strapped them to our feet to make our motion more efficient under certain conditions, which really showed nature who was boss. Our smug wheeled superiority hasn’t lasted very long, though, because robots are rapidly becoming more skilled with wheels than we can ever hope to be.

The key difference between a human on roller skates and a robot on actuated wheels is that the robot, if it’s engineered properly, can exert control over its wheels with a nuance that we’ll never be able to match. We’ve seen this in action with Boston Dynamics’ Handle, although so far, Handle hasn’t seemed to take full advantage of the fact that it’s got legs, too. To understand why wheels and legs together are such a game-changer for robotic mobility, we can take a look at ANYmal, which seamlessly blends four legs and four wheels together with every movement it makes.

The really cool thing here is that ANYmal is dynamically choosing an optimal hybrid gait that’s a fusion of powered rolling and legged stepping. It’s doing this “blind,” without any camera or lidar inputs, just based on the feel of the terrain underneath its wheels. You can see how it transitions seamlessly between rolling and stepping, even mid-stride, based on how much utility the wheeled motion has on a per-leg basis—if a wheel stops being efficient, the controller switches that leg to a stepping motion instead, while maintaining coordination with the other legs. Overall, this makes ANYmal move more quickly without reducing its ability to handle challenging terrain, and reduces its cost of transport since rolling is much more efficient than walking.
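
To make the per-leg decision concrete, here is a toy sketch of the switching logic the paragraph describes. It is my own illustrative simplification, not the published controller: leg names, the utility estimate, and the threshold are all assumptions.

```python
# Toy sketch (not the published controller) of the per-leg decision described
# above: if a wheel's estimated utility drops, that leg switches from rolling
# to stepping while the others keep coordinating.
ROLL, STEP = "roll", "step"

def choose_leg_modes(wheel_utilities, threshold=0.5):
    """wheel_utilities: dict leg_name -> estimated usefulness of rolling (0..1),
    inferred here from proprioception only (no cameras or lidar)."""
    return {leg: (ROLL if u >= threshold else STEP)
            for leg, u in wheel_utilities.items()}

print(choose_leg_modes({"LF": 0.9, "RF": 0.8, "LH": 0.3, "RH": 0.7}))
# {'LF': 'roll', 'RF': 'roll', 'LH': 'step', 'RH': 'roll'}
```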

For more details, we spoke with Marko Bjelonic from ETH Zurich.

IEEE Spectrum: Are there certain kinds of terrain that make ANYmal’s gait transitions particularly challenging?

Marko Bjelonic: Aperiodic gait sequences are automatically found through kinematic leg utilities without the need for predefined gait timings. Based on the robot's current situation, each leg can reason on its own when it is a good time to lift off the ground. Our approach works quite well in rough terrain, but more considerable obstacles, e.g., stairs, are challenging.

How much of a difference do you think incorporating sensors to identify terrain would make to ANYmal’s capability?

Our submitted publication is only based on proprioceptive signals, i.e., no terrain perception is used to make gait transitions based on the perceived environment. We are surprised how well this framework already works on flat and uneven terrain. However, we are currently working on an extension that considers the terrain upfront for the robot to plan the stepping sequences. This terrain-responsive extension is capable of handling also larger obstacles like stairs.

“My experience shows me that the current version of ANYmal with actuated wheels improves mobility drastically. And I believe that these kinds of robots will outperform nature first. There is no animal or human being that can exploit such a concept.” —Marko Bjelonic, ETH Zurich

How many degrees of freedom do you think are optimal for a hybrid robot like ANYmal? For example, if the wheels could be steerable, would that be beneficial? 

It is a nice challenge to have no steerable wheels, because then the robot is forced to explore hybrid roller-walking motions. From an application perspective, it would be beneficial to have the possibility of steering the wheels. We already analyzed the leg configuration and the amount of actuation per leg and found that no additional degrees of freedom are necessary to achieve this. We can rotate the first actuator, the hip adduction/abduction, and without increasing the complexity, we increase the robot's mobility and add the possibility of steering the wheels.

What are the disadvantages of hybrid mobility? Why shouldn’t every legged robot also have wheels?

Every legged robot should have wheels! I think it’s going to be more common in the future. There are currently only a few hybrid mobility concepts out there, e.g., the roller-walking ANYmal, the CENTAURO robot, and Handle from Boston Dynamics. The additional degrees of freedom and missing counterparts in nature make designing locomotion capabilities for wheeled-legged robots more challenging. This is one reason why we do not see more of these creatures. But I am sure that more concepts will follow with the current advancements in this field.

What are you working on next?

We are working on an artistic framework enabling the robot to perform more complex motions on the ground and over challenging obstacles. The challenge here is how to find optimal maneuvers for such high-dimensional problems and how to execute these motions on the real robot robustly.

“Whole-Body MPC and Online Gait Sequence Generation for Wheeled-Legged Robots,” by Marko Bjelonic, Ruben Grandia, Oliver Harley, Cla Galliard, Samuel Zimmermann, and Marco Hutter from ETH Zürich, is available on arXiv.

Sonar, which measures the time it takes for sound waves to bounce off objects and travel back to a receiver, is the best way to visualize underwater terrain or inspect marine-based structures. Sonar systems, though, have to be deployed on ships or buoys, making them slow and limiting the area they can cover.
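
The basic relationship is simple: range is half the round-trip travel time multiplied by the speed of sound in water. A one-line worked example, using a typical seawater sound speed:

```python
# Worked example of the sonar relationship described above: range is half the
# round-trip echo time multiplied by the speed of sound in water.
SPEED_OF_SOUND_WATER = 1500.0   # m/s, typical value for seawater

def sonar_range(round_trip_time_s):
    return SPEED_OF_SOUND_WATER * round_trip_time_s / 2.0

print(sonar_range(0.04))  # a 40 ms echo corresponds to ~30 m to the target
```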

However, engineers at Stanford University have developed a new hybrid technique combining light and sound. Aircraft, they suggest, could use this combined laser/sonar technology to sweep the ocean surface for high-resolution images of submerged objects. The proof-of-concept airborne sonar system, presented recently in the journal IEEE Access, could make it easier and faster to find sunken wrecks, investigate marine habitats, and spot enemy submarines.

“Our system could be on a drone, airplane or helicopter,” says Amin Arbabian, an electrical engineering professor at Stanford University. “It could be deployed rapidly…and cover larger areas.”

Airborne radar and lidar are used to map the Earth’s surface at high resolution. Both can penetrate clouds and forest cover, making them especially useful in the air and on the ground. But peering into water from the air is a different challenge. Sound, radio, and light waves all quickly lose their energy when traveling from air into water and back. This attenuation is even worse in turbid water, Arbabian says.

So he and his students combined the two modalities—laser and sonar. Their system relies on the well-known photoacoustic effect, which turns pulses of light into sound. “When you shine a pulse of light on an object it heats up and expands and that leads to a sound wave because it moves molecules of air around the object,” he says.

The group’s new photoacoustic sonar system begins by shooting laser pulses at the water surface. Water absorbs most of the energy, creating ultrasound waves that move through it much like conventional sonar. These waves bounce off objects, and some of the reflected waves go back out from the water into the air.

At this point, the acoustic echoes lose a tremendous amount of energy as they cross that water-air barrier and then travel through the air. Here is where another critical part of the team’s design comes in.

Image: Aidan Fitzpatrick

To detect the weak acoustic waves in air, the team uses an ultra-sensitive microelectromechanical device with the mouthful name of an air-coupled capacitive micromachined ultrasonic transducer (CMUT). These devices are simple capacitors with a thin plate that vibrates when hit by ultrasound waves, causing a detectable change in capacitance. They are known to be efficient at detecting sound waves in air, and Arbabian has been investigating the use of CMUT sensors for remote ultrasound imaging. Special software processes the detected ultrasound signals to reconstruct a high-resolution 3D image of the underwater object.
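
To give a feel for the sensing principle, here is a back-of-the-envelope sketch treating a CMUT cell as an ideal parallel-plate capacitor; the plate size, gap, and deflection below are illustrative assumptions, not the device parameters used in the Stanford work.

```python
# Illustrative numbers only: a CMUT cell approximated as a parallel-plate
# capacitor, showing how a small plate deflection changes the capacitance
# that the readout electronics detect.
EPS0 = 8.854e-12                          # F/m, vacuum permittivity

def capacitance(area_m2, gap_m):
    return EPS0 * area_m2 / gap_m

area, gap = (50e-6) ** 2, 100e-9          # 50 um square plate, 100 nm gap (assumed)
c0 = capacitance(area, gap)
c1 = capacitance(area, gap - 5e-9)        # plate deflects 5 nm under the echo
print(f"dC = {(c1 - c0) * 1e15:.2f} fF")  # capacitance change to be detected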

Gif: Aidan Fitzpatrick
An animation showing the 3D image of the submerged object recreated using reflected ultrasound waves.

The researchers tested the system by imaging metal bars of different heights and diameters placed in a large, 25-centimeter-deep fish tank filled with clear water. The CMUT detector was 10 centimeters above the water surface.

The system should work in murky water, Arbabian says, although they haven’t tested that yet. Next up, they plan to image objects placed in a swimming pool, for which they will have to use more powerful laser sources that work for deeper water. They also want to improve the system so it works with waves, which distort signals and make the detection and image reconstruction much harder. “This proof of concept is to show that you can see through the air-water interface,” Arbabian says. “That’s the hardest part of this problem. Once we can prove it works, it can scale up to greater depths and larger objects.”

This paper tackles the problem of formation reconstruction for a team of vehicles based on knowledge of the range between agents for a subset of the participants. One main peculiarity of the proposed approach is that the relative velocity between agents, which is fundamental to solving the problem, is not assumed to be known in advance nor directly communicated. To estimate this quantity, a collaborative control protocol is designed that embeds the velocity data in the motion of each vehicle as a parameter, so that it can be inferred from the motion of the neighboring agents. Moreover, suitable geometrical constraints related to the agents' relative positions are constructed and explicitly taken into account in the estimation framework, providing a more accurate estimate. The issue of delays in the transmitted signals is also studied, and two possible solutions are provided explaining how a reasonable range-data exchange can be arranged to obtain the solution in both a centralized and a decentralized fashion. Numerical examples are presented that corroborate the validity of the proposed approach.

Occupational back-support exoskeletons are becoming a more and more common solution to mitigate work-related lower-back pain associated with lifting activities. In addition to lifting, there are many other tasks performed by workers, such as carrying, pushing, and pulling, that might benefit from the use of an exoskeleton. In this work, the impact that carrying has on lower-back loading compared to lifting and the need to select different assistive strategies based on the performed task are presented. This latter need is studied by using a control strategy that commands constant torques. The results of the experimental campaign conducted on 9 subjects suggest that such a control strategy is beneficial for the back muscles (up to 12% reduction in overall lumbar activity), but constrains the legs (around 10% reduction in hip and knee ranges of motion). Task recognition and the design of specific controllers can be exploited by active and, partially, passive exoskeletons to enhance their versatility, i.e., the ability to adapt to different requirements.

Background: Gait analysis studies during robot-assisted walking have been predominantly focused on lower limb biomechanics. During robot-assisted walking, the users' interaction with the robot and their adaptations translate into altered gait mechanics. Hence, robust and objective metrics for quantifying walking performance during robot-assisted gait are especially relevant as it relates to dynamic stability. In this study, we assessed bi-planar dynamic stability margins for healthy adults during robot-assisted walking using EksoGT™, ReWalk™, and Indego® compared to independent overground walking at slow, self-selected, and fast speeds. Further, we examined the use of forearm crutches and its influence on dynamic gait stability margins.

Methods: Kinematic data were collected at 60 Hz under several walking conditions with and without the robotic exoskeleton for six healthy controls. Outcome measures included (i) whole-body center of mass (CoM) and extrapolated CoM (XCoM), (ii) base of support (BoS), (iii) margin of stability (MoS) with respect to both feet and bilateral crutches.

Results: Stability outcomes during exoskeleton-assisted walking at self-selected, comfortable walking speeds were significantly (p < 0.05) different compared to overground walking at self-selected speeds. Unlike overground walking, the control mechanisms for stability using these exoskeletons were not related to walking speed. MoSs were lower during the single support phase of gait, especially in the medial–lateral direction for all devices. MoSs relative to feet were significantly (p < 0.05) lower than those relative to crutches. The spatial location of crutches during exoskeleton-assisted walking pushed the whole-body CoM, during single support, beyond the lateral boundary of the lead foot, increasing the risk for falls if crutch slippage were to occur.

Conclusion: Careful consideration of crutch placement is critical to ensuring that the margins of stability are always within the limits of the BoS to control stability and decrease fall risk.
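
For readers unfamiliar with the metrics named in the Methods above, here is a minimal sketch assuming Hof's standard definitions (extrapolated CoM = CoM position plus CoM velocity divided by the pendulum frequency, and margin of stability as the distance from the XCoM to the base-of-support boundary). This is a generic illustration, not the study's actual processing pipeline, and the numbers are made up.

```python
# Minimal sketch of the stability metrics named above, assuming Hof's standard
# definitions (not the study's pipeline): XCoM = CoM + v / omega0, with
# omega0 = sqrt(g / leg_length), and MoS = BoS boundary - XCoM.
import math

def extrapolated_com(com_pos, com_vel, leg_length, g=9.81):
    omega0 = math.sqrt(g / leg_length)
    return com_pos + com_vel / omega0

def margin_of_stability(bos_boundary, com_pos, com_vel, leg_length):
    return bos_boundary - extrapolated_com(com_pos, com_vel, leg_length)

# Lateral example: CoM 5 cm inside the lead-foot boundary, moving outward at 0.2 m/s
print(margin_of_stability(bos_boundary=0.10, com_pos=0.05,
                          com_vel=0.20, leg_length=0.9))  # ~ -0.011 m -> unstable
```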

Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy. 

AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.

Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—such as how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.

Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute force calculation. AlphaFold can do it in a few days.
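
The arithmetic behind estimates of this kind is straightforward combinatorics. The parameters below (10 plausible conformations per residue for a 300-residue chain, and a generous sampling rate) are illustrative assumptions chosen to reproduce the 10^300 figure, not Levinthal's exact ones.

```python
# Back-of-the-envelope arithmetic behind Levinthal-style estimates. The
# parameters are illustrative assumptions, not Levinthal's exact numbers.
conformations_per_residue = 10
residues = 300
total_conformations = conformations_per_residue ** residues   # 10**300

sampling_rate = 1e13          # conformations evaluated per second (assumed)
seconds_per_year = 3.15e7
years_needed = total_conformations / sampling_rate / seconds_per_year
print(f"~1e{len(str(total_conformations)) - 1} conformations; "
      f"brute force would take ~{years_needed:.1e} years")
```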

As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.

How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things. 

“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”

DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”

AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of its AI at CASP13, achieving the highest accuracy among all participants. The team had trained its neural network to model target shapes from scratch, without using previously solved proteins as templates.

For 2020 they deployed new deep learning architectures into the AI, using an attention-based model that was trained end-to-end. Attention in a deep learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as between the input elements themselves. 
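As a rough illustration of what "attention" means mechanically, the sketch below implements generic scaled dot-product self-attention in NumPy: each element of a sequence is re-expressed as a weighted mixture of all elements, with the weights measuring pairwise interdependence. This is the textbook operation, not AlphaFold's actual architecture.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Generic attention: each output row is a weighted mix of the rows of V,
    with weights given by how strongly each query attends to each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise interdependence
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V, weights

# Toy example: 4 sequence positions (e.g. residues), 8-dimensional features.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, attn = scaled_dot_product_attention(x, x, x)    # self-attention
print(attn.round(2))  # each row sums to 1: how much a position attends to the others
```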

The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures. 

“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”

Like any AI system, AlphaFold may need to contend with biases in the training data. For instance, Brownell says, AlphaFold is using available information about protein structures that have been measured in other ways. However, there are also many proteins with as yet unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward the kinds of proteins for which we have more structural data.

Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.

“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”

Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”

This work presents a novel five-fingered soft hand prototype actuated by Shape Memory Alloy (SMA) wires. The use of thin (100 μm diameter) SMA wire actuators, in conjunction with an entirely 3D-printed hand skeleton, yields an overall lightweight and flexible structure capable of silent motion. To enable high forces with sufficiently high actuation speed at each fingertip, bundles of welded SMA actuator wires are used. To increase the compliance of each finger, flexible joints made from superelastic SMA wires are inserted between the phalanxes. The resulting system is a versatile hand prototype with intrinsically elastic fingers that is capable of grasping several types of objects with considerable force. The paper starts with a description of the finger and hand design, along with practical considerations for the optimal placement of the superelastic SMA in the soft joints. The maximum achievable displacement of each finger phalanx is measured, together with the phalanxes' dynamic responsiveness under different power stimuli. Several force measurements are also performed at each finger phalanx. The versatility of the prototype is finally demonstrated by presenting several possible hand configurations while handling objects of different sizes and shapes.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today’s videos.

Another BIG step for Japan’s Gundam project.

[ Gundam Factory ]

We present an interactive design system that allows users to create sculpting styles and fabricate clay models using a standard 6-axis robot arm. Given a general mesh as input, the user iteratively selects sub-areas of the mesh through decomposition and embeds the design expression into an initial set of toolpaths by modifying key parameters that affect the visual appearance of the sculpted surface finish. We demonstrate the versatility of our approach by designing and fabricating different sculpting styles over a wide range of clay models.

[ Disney Research ]

China’s Chang’e-5 completed the drilling, sampling and sealing of lunar soil at 04:53 BJT on Wednesday, marking the first automatic sampling on the Moon, the China National Space Administration (CNSA) announced Wednesday.

[ CCTV ]

Red Hat’s been putting together an excellent documentary on Willow Garage and ROS, and all five parts have just been released. We posted Part 1 a little while ago, so here’s Part 2 and Part 3.

Parts 4 and 5 are at the link below!

[ Red Hat ]

Congratulations to ANYbotics on a well-deserved raise!

ANYbotics has origins in the Robotic Systems Lab at ETH Zurich, and ANYmal’s heritage can be traced back at least as far as StarlETH, which we first met at ICRA 2013.

[ ANYbotics ]

Most conventional robots work with 0.05-0.1 mm accuracy. Such accuracy requires high-end components like low-backlash gears, high-resolution encoders, complicated CNC parts, powerful motor drives, etc. In combination, these add up to an expensive solution that is either unaffordable or unnecessary for many applications. That is why we founded Apicoo Robotics: to provide our customers with solutions at much lower cost and higher stability.

[ Apicoo Robotics ]

The Skydio 2 is an incredible drone that can take incredible footage fully autonomously, but it definitely helps if you do incredible things in incredible places.

[ Skydio ]

Jueying is the first domestically developed sensitive quadruped robot for industrial applications and scenarios. It can work alongside (or in place of) humans to reach any place a person can reach. It has superior environmental adaptability, excellent dynamic balance, and precise environmental perception. By carrying functional modules for different application scenarios within its safe payload, the mobility of the quadruped robot can be combined with commercial functional modules, providing solutions for smart factories, smart parks, scene display, and public safety.

[ DeepRobotics ]

We have developed a semi-autonomous quadruped robot, called LASER-D (Legged-Agile-Smart-Efficient Robot for Disinfection), for performing disinfection in cluttered environments. The robot is equipped with a spray-based disinfection system and leverages its body motion to control the spray action without the need for an extra stabilization mechanism. The system includes an image processing capability to verify disinfected regions with high accuracy. This allows the robot to carry out effective disinfection tasks while safely traversing cluttered environments, climbing stairs and slopes, and navigating slippery surfaces.

[ USC Viterbi ]

We propose the “multi-vision hand,” in which a number of small high-speed cameras are mounted on the hand of a common 7-degrees-of-freedom robot. We also propose visual-servoing control using a multi-vision system that combines the multi-vision hand with external fixed high-speed cameras. The target task was a ball-catching motion, which requires high-speed operation. In the proposed catching control, the catch position of the ball, estimated by the external fixed high-speed cameras, is corrected by the multi-vision hand in real time.

More details available through IROS on-demand.

[ Namiki Laboratory ]

Shunichi Kurumaya wrote in to share his work on PneuFinger, a pneumatically actuated compliant robotic gripping system.

[ Nakamura Lab ]

Thanks Shunichi!

Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent, e.g., “Go to the large green bowl.” The training process then interrelates the different modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at run time on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity.
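As a rough sketch of what "conditioning a visuomotor policy on language" can look like structurally, the toy model below fuses a precomputed image embedding with a precomputed instruction embedding and maps the result to a low-level action. The encoders, dimensions, and fusion scheme are placeholders for illustration, not the architecture described above.

```python
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    """Toy language-conditioned visuomotor policy (placeholder architecture):
    image and instruction embeddings are concatenated and mapped to an action."""
    def __init__(self, img_dim=512, lang_dim=128, action_dim=7):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + lang_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, img_feat, lang_feat):
        return self.fuse(torch.cat([img_feat, lang_feat], dim=-1))

# At run time, the same trained weights are conditioned on a new instruction
# simply by swapping in a different language embedding.
policy = LanguageConditionedPolicy()
action = policy(torch.randn(1, 512), torch.randn(1, 128))
print(action.shape)  # torch.Size([1, 7])
```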

[ ASU ]

Thanks Heni!

Gita is on sale for the holidays for only $2,000.

[ Gita ]

This video introduces a computational approach for routing thin artificial muscle actuators through hyperelastic soft robots in order to achieve a desired deformation behavior. Provided with a robot design and a set of example deformations, we continuously co-optimize the routing of actuators and their actuation to approximate the example deformations as closely as possible.

[ Disney Research ]

Researchers and mountain rescuers in Switzerland are making huge progress in the field of autonomous drones as the technology becomes more in-demand for global search-and-rescue operations.

[ SWI ]

This short clip of the Ghost Robotics V60 features an interesting, if awkward looking, righting behavior at the end.

[ Ghost Robotics ]

Europe’s Rosalind Franklin ExoMars rover has a younger ‘sibling’, ExoMy. The blueprints and software for this mini-version of the full-size Mars explorer are available for free so that anyone can 3D print, assemble and program their own ExoMy.

[ ESA ]

The holiday season is here, and with the added impact of Covid-19 consumer demand is at an all-time high. Berkshire Grey is the partner that today’s leading organizations turn to when it comes to fulfillment automation.

[ Berkshire Grey ]

Until very recently, the vast majority of studies and reports on the use of cargo drones for public health were almost exclusively focused on the technology. The driving interest was in how far these drones could travel, how much they could carry, and how they worked. Little to no attention was paid to the human side of these projects. Community perception, community engagement, consent, and stakeholder feedback were rarely if ever addressed. This webinar presents the findings from a very recent study that finally sheds some light on the human side of drone delivery projects.

[ WeRobotics ]

Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields but their ability to profitably assist high precision activities in the peripersonal space is still sub-optimal due to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact, on the parallax-related AR misregistration, of the use of optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the fixation point of the user in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and we validate it on an OST display with short focal distance. The results of the tests demonstrate that with our strategy the parallax-related registration error is submillimetric provided that the scene under observation stays within a suitable view volume that falls in a ±10 cm depth range around the focal plane of the display. This finding will pave the way to the development of new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
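The core geometric intuition can be captured with a simple pinhole parallax model; this is a simplification with assumed numbers, not the paper's photogrammetry-based calibration. If the virtual image is collimated at the display's focal plane and the viewpoint shifts laterally without compensation, the overlay error at a real object's depth grows with the object's distance from that focal plane.

```python
def parallax_misregistration(eye_shift_m, focal_dist_m, object_dist_m):
    """Lateral AR overlay error at the real object's depth, for a virtual
    image collimated at the display's focal plane and an uncompensated
    lateral viewpoint shift (simplified pinhole model, illustrative only)."""
    return abs(eye_shift_m) * abs(object_dist_m - focal_dist_m) / focal_dist_m

# Assumed numbers: 3 mm uncorrected eye shift, display focal plane at 40 cm.
for d in (0.30, 0.40, 0.50):  # real object depth (m)
    err_mm = 1e3 * parallax_misregistration(0.003, 0.40, d)
    print(f"object at {d:.2f} m -> ~{err_mm:.2f} mm misregistration")
```

With these assumed values the error stays well below a millimeter within roughly a ±10 cm band around the focal plane, which is qualitatively consistent with the submillimetric result reported above.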

Research on human-robot interactions has been driven by the increasing employment of robotic manipulators in manufacturing and production. Toward developing more effective human-robot collaboration during shared tasks, this paper proposes an interaction scheme by employing machine learning algorithms to interpret biosignals acquired from the human user and accordingly planning the robot reaction. More specifically, a force myography (FMG) band was wrapped around the user's forearm and was used to collect information about muscle contractions during a set of collaborative tasks between the user and an industrial robot. A recurrent neural network model was trained to estimate the user's hand movement pattern based on the collected FMG data to determine whether the performed motion was random or intended as part of the predefined collaborative tasks. Experimental evaluation during two practical collaboration scenarios demonstrated that the trained model could successfully estimate the category of hand motion, i.e., intended or random, such that the robot either assisted with performing the task or changed its course of action to avoid collision. Furthermore, proximity sensors were mounted on the robotic arm to investigate if monitoring the distance between the user and the robot had an effect on the outcome of the collaborative effort. While further investigation is required to rigorously establish the safety of the human worker, this study demonstrates the potential of FMG-based wearable technologies to enhance human-robot collaboration in industrial settings.
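For readers curious what such an FMG-based intent classifier might look like structurally, the sketch below feeds a window of multi-channel band readings through a recurrent layer and outputs intended-vs-random logits. The use of an LSTM, the layer sizes, and the channel count are assumptions for illustration, not the authors' exact model.

```python
import torch
import torch.nn as nn

class FMGIntentClassifier(nn.Module):
    """Toy recurrent classifier for FMG sequences (assumed architecture):
    a window of band readings in, a binary intended-vs-random score out."""
    def __init__(self, n_sensors=16, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # logits: [random, intended]

    def forward(self, x):                  # x: (batch, time, n_sensors)
        _, (h, _) = self.rnn(x)
        return self.head(h[-1])

model = FMGIntentClassifier()
window = torch.randn(1, 50, 16)            # e.g. a 50-sample FMG window
print(model(window).softmax(dim=-1))       # class probabilities
```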

Snake robotics is an important research topic with a wide range of applications, including inspection in confined spaces, search-and-rescue, and disaster response. Snake robots are well-suited to these applications because of their versatility and adaptability to unstructured and constrained environments. In this paper, we introduce a soft pneumatic robotic snake that can imitate the capabilities of biological snakes; its soft body provides flexibility and adaptability to the environment. This paper combines soft mobile robot modeling, proprioceptive feedback control, and motion planning to pave the way for functional soft robotic snake autonomy. We propose a pressure-operated soft robotic snake with a high degree of modularity that makes use of customized embedded flexible curvature sensing. On this platform, we introduce the use of iterative learning control using feedback from the on-board curvature sensors to enable the snake to automatically correct its gait for superior locomotion. We also present a motion planning and trajectory tracking algorithm using an adaptive bounding box, which allows for efficient motion planning that still takes into account the kinematic state of the soft robotic snake. We test this algorithm experimentally and demonstrate its performance in obstacle avoidance scenarios.
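The iterative learning control idea mentioned above can be illustrated with the standard P-type ILC update: each gait cycle reuses the previous cycle's actuation profile plus a correction proportional to the previous cycle's curvature tracking error. The sketch below is a generic toy version with an artificial linear "plant" standing in for a snake segment; the gains and signals are illustrative, not the authors' controller.

```python
import numpy as np

def ilc_update(u_prev, curvature_ref, curvature_meas, learning_gain=0.3):
    """One iteration of a basic P-type iterative learning control update:
    carry over last cycle's actuation profile and add a correction
    proportional to last cycle's curvature tracking error."""
    error = curvature_ref - curvature_meas
    return u_prev + learning_gain * error

# Toy gait cycle: track a sinusoidal curvature profile over 100 samples.
t = np.linspace(0, 2 * np.pi, 100)
ref = 0.5 * np.sin(t)                 # desired curvature profile
u = np.zeros_like(ref)                # actuation profile, refined each cycle
plant_gain = 0.8                      # crude linear stand-in for the segment
for cycle in range(20):
    meas = plant_gain * u             # "measured" curvature this cycle
    u = ilc_update(u, ref, meas)
print(np.max(np.abs(ref - plant_gain * u)))  # residual error shrinks each cycle
```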

Although the in-person Systems Track event of the DARPA SubT Challenge was cancelled because of the global pandemic, the Systems Track teams still have to prepare for the Final Event in 2021, which will include a cave component. Systems Track teams have been on their own to find cave environments to test in, and many of them are running their own DARPA-style competitions to test their software and hardware.

We’ll be posting a series of interviews exploring where and how the teams are making this happen, and today we’re featuring Team CSIRO Data 61, based in Brisbane, Australia.

This interview features the following roboticists from Team CSIRO Data 61:

  • Katrina Lo Surdo—Electrical and Computer Engineer, Advanced Mechatronics Systems

  • Nicolas Hudson—Senior Principal Research Scientist, Group Leader 

  • Navinda Kottege—Principal Research Scientist, Dynamic Platforms Team Leader

  • Fletcher Talbot—Software Engineer, Dynamic Platforms and Primary Robot Operator

IEEE Spectrum: Tell me about your cave! How’d you find your cave, and what kind of cave was it?

Katrina Lo Surdo: We basically just sent a bunch of emails around to different caving clubs all across Australia asking if they knew where we could test our robots, and most of them said no. But this particular caving club in Chillagoe (a 20-hour drive north of Brisbane) said they knew of a good cave. The caves in Chillagoe used to be coral reefs—they were formed about 400 million years ago, and then over time the reefs turned into limestone and then that limestone eroded into caves. In the particular cave that we went to, although a lot of the formations and the actual sort of caverns themselves are formed by limestone, there’s a lot of sediment that has been deposited inside the caves so the floor is reasonably flat. And it’s got that red dirt feel that you think of when you think of Australia.

I do think this cave had a good mix of a lot of the elements that most caves would have. It did have some verticality, some massive caverns, and some really small constrained passageways. And it was really sprawling as well, so I think it was a good representation of a lot of different types of caves.

Were you looking for any cave you could find, or a cave that was particularly robot friendly?

Lo Surdo: We wanted to be able to succeed as much as possible, but the cave needed to provide enough of a challenge that it would be useful for us to go. So if it was going to be completely flat with no obstacles, I don’t think it would have been good. And another thing that would be looked at was whether the cave itself is fragile or anything, because obviously we’re rolling our robots around and we don’t want to be damaging it.

The terrain itself was quite extreme, although a human could walk through a large portion of it without difficulty.

Nicolas Hudson: We should add that Katrina is an experienced caver and an expert climber, so when she says it’s easily traversable by a human, she means that cavers find it easy. There were others on the team who were not comfortable at all in the cave.

What do you feel like the biggest new challenge was, going from an urban environment to a cave environment?

Hudson: My take going from the Urban Circuit to this cave was that at Urban, it was essentially set up so that a human, legged, or tracked system could traverse the entire thing. For example, at Urban, we flew a drone through a hole in the floor, but there was a staircase right next to it. In the cave, there were parts that were only drone-accessible.

Another good example is that our drone actually flew way beyond the course we expected at one point, because we don’t have any artificial constraints—it’s just the cave system. And it was flying through an area that we weren’t comfortable going as people. So, I think the cave system was really a place where the mobility of drones shines in certain areas even more so than urban environments. That was the most important difference from my perspective.

How did your team of robots change between Urban and Cave?

Hudson: Our robots didn’t change a lot. We kept the large Titan robots because they’re by far our most capable ground platform. In my opinion, they’re actually more capable than legs on slippery intense slopes because of the amount of grip that they have. There are things I wouldn’t walk up that the Titan can drive up. So that stayed as our primary platform.

While the larger platforms could cover a lot of ground and were very stable, the smaller tracked platforms, SuperDroid robots which are about a meter long, didn’t even function in the cave. Like, they went a meter and then the traction just wasn’t enough, because they were too small. We’ve started working on a beefed-up small tracked platform that has a lot more grip. We decided not to push for legs in the cave. We have a Ghost Vision 60. And we thought about, do we go legged in this environment, and we decided not to because of how unstructured it was, and just because of the difficulty of a human traversing it. 

I really think the big difference was the drone played a much larger role. Where in Urban the drone had this targeted investigation role where it would be sitting on the back of the Titans and it would take off and you’d send it up through a hole or something like that, in the cave, what we found ourselves doing was really using it to sort of scout because the ground was just so challenging. The cost to go 20 meters in a cave with a ground robot can be absurdly difficult. And so getting better situational awareness quickly with the drone was probably where the concept of operations changed more than the robots did.

Photo: Team CSIRO Data 61

With such extreme mobility challenges, why use ground robots at all? Why not just stick with drones?

Hudson: We found that perception was significantly better on the ground robots. The ground robots have four cameras, and so they’re running 360 vision the whole time for object detection. The drone was great as a scout, but it was really difficult for it to find objects because there are so many crevices that to look through every area with a drone is very time consuming and they run out of battery. And so it’s really the endurance of the ground robot and the better perception where they played their part. 

We used the drones to figure out the topological layout of the cave. We didn’t let the operators see the cave beforehand, and it’s sort of hard to comprehend—in Urban, the drone did quite well because there were these very geometric rooms and so you could sort of cover things with a gimbal camera. But in the cave there’s just so many strange structures, and you have very poor camera coverage with a single camera. 

When you’re using the drones and the ground robots together, how are the robots able to decide where it’s safe to go with that terrain variability?

Hudson: I’ll answer that with respect to our first couple mock-competition runs, where the robot operators didn’t have any prior knowledge of the cave. What happened is that once the drones did a scouting mission, the operator gets a reasonably good idea if there’s any constrictions or any large elevation changes. And then we spread out the ground robots to different areas and tried things. 

Our autonomy system went up some things we didn’t expect it to—we just thought it would say “don’t go there.” And in other cases there was a little ledge or a series of rocks that the autonomy system said “I don’t want to do that” but it looked traversable in the map. We have a sort of backup teleoperation mode where you can just command the velocity of the tracks. One time, that was beneficial, in that it actually went through something that the autonomy system didn’t. But the other two times, it ended up flipping the robot, and one of those times, it actually flipped the robot and crushed the drone.

So a real lesson learned is that it was incredibly hard for human operators to perceive what was traversable and what was not, even with 3D point clouds and cameras. My overwhelming impression was it was unbelievably difficult to predict, as a person, what was traversable by a robot. 

Lo Surdo: And the autonomy did a much better job at choosing a path.

So the autonomy was doing a better job than the human teleoperators, even in this complex environment?

Hudson: It’s a difficult question to answer. Half of the time, that’s absolutely correct: The robot was more capable than the human thought it would be. There were other times that I think a human with a teleoperation system standing right next to the robot could better understand things like crazy terrain formations or dust, and the robot just didn’t have that context. I think if I had to rank it, a person with a remote control right next to the robot is probably the gold standard. We never really had issues with that. But the autonomy was definitely better than someone at the base station with a little bit of latency.

And that’s much different than your experience with Tunnel or Urban, right? Where a human teleoperator could be both more efficient and safer than a fully autonomous robot?

Hudson: That’s right. 

What were some challenges that were unique to the cave?

Hudson: The cave terrain was a big mix of things. There was a dry river bed in parts of it, and then other parts of it had these rocks that look almost like coral. There were formations that drop from the ceiling, things that have grown up from the ground, and it was just this completely random distribution of obstacles that’s hard for a human to make up, if that makes sense. And we definitely saw the robots getting trapped once or twice by those kinds of things.

Every run that we had, we ended up with our large ground robots flipped over at least once, and that almost always occurred because they slipped off a two-meter drop when the terrain deformed underneath the robot. Because the Titans are so sturdily built, the perception pack was protected, and the entire setup could be turned back over and they kept working.

Lo Surdo: There was also quite a natural flow to the terrain, because that’s where people had traversed through, and I think in a lot of cases the autonomy did a pretty good job of picking its way through those obstacles, and following the path that the humans had taken to get to different places. That was impressive to me. 

Navinda Kottege: I think the randomness also may be related to the relatively poor performance of the operators, because in the other SubT circuits, the level of situational awareness they got from the sensors would be augmented by their prior experience. Even in Urban, if it’s a room, it’s a geometric shape, and the human operator can kind of fill in the blanks because they have some prior experience. In caves, since they haven’t experienced that kind of environment, with the patchy situational awareness they get from the sensors it’s very challenging to make assumptions about what the environment around the robot is like.

What kind of experience did you have as a robot operator during your mock Cave Circuit competition?

Fletcher Talbot: It was extremely difficult. We made some big assumptions which turned out to be very wrong about the terrain, because myself and the other operator were completely unaware of what the cave looked like—we didn’t see any photos or anything before we actually visited. And my internal idea of what it would look like was wrong initially, misinformed somewhat by some of the feedback we got back from point clouds and meshes, and then rudely awakened by going on a tour through the cave after our mock competition ended. 

“During our runs we saw some slopes that looked completely traversable and so we tried to send robots up those slopes—if I had known what those slopes actually looked like, I never would have done that. But the robots themselves were beasts, and just did stuff that we would never have thought possible.” —Fletcher Talbot, Team CSIRO Data 61

For example, during our runs we saw some slopes that looked completely traversable and so we tried to send robots up those slopes—if I had known what those slopes actually looked like, I never would have done that. But the robots themselves were beasts, and just did stuff that we would never have thought possible.

We definitely learned that operators can hamper the progress of the robots, because we don’t really know what we’re doing sometimes. My approach going through the different runs was to just let the robots be more autonomous, and just give them very high-level commands rather than trying to do any kind of finessing into gaps and stuff. That was my takeaway: trust the autonomy. 

So the cave circuit has gotten you to trust your robots more?

Talbot: Yeah, some of the other stuff the robots did was insane. As operators we never would have expected them to be able to do it, or commanded them to do it in the first place.

Photo: Team CSIRO Data 61

What were the results of the competition that you held?

Lo Surdo: We made our best guess as to what DARPA would do in an environment like this, and hid artifacts around the cave in the way that we’ve seen them hide artifacts before. 

Hudson: We set up the staging area with a team of 14 people; we took a lot of people because it was only a 20-hour drive away [as opposed to a flight across the world]. The operators came in and only saw the staging area.

Talbot: We were blind to what the course was going to be like, or where the objects were. We only knew what objects were brought.

Kottege: We did four runs overall, dividing the cave into two courses, doing two runs each. 

Talbot: The performance was reasonably consistent, I think, throughout all the runs. It was always four or five objects detected, about half the ones that were placed on the course.

How are you feeling about the combined circuit for the SubT Final?

Kottege: I think we have some pretty good ideas of what we need to focus on, but there’s also this big question of how DARPA will set up the combined event. So, once that announcement is made, there will be some more tweaking of our approach.

This is probably true for other teams as well, but after we performed at Urban, we felt like if we got a chance to do Tunnel again, we’d be able to really ace it, because we’d improved that much. Similarly, once we did our cave testing, we’ve had a similar sentiment—that if we got a chance to do Urban again, we’d probably do far better. I think that’s a really good place to be at, but I’m sure DARPA has some interesting challenges in mind for the final.

Lo Surdo: I do think that us going to the cave gives us a bit of an advantage, because there’s some terrain that you can’t really make or simulate, and some of the stuff we learned was really valuable and I think it will really serve us in the next competition. One thing in particular was the way that our robots assessed risk—we went up some really crazy terrain, which was amazing, but in some instances there was a really easy pathway right next to it. So assessing risk is something that we’re going to be looking at improving in the future.

Talbot: With the cave, it’s hard to gauge the difficulty level compared to what DARPA might have given us—whether we met that difficulty level, went way above it, or maybe even undershot it, we don’t really know. But it’ll be very interesting to see what DARPA throws at us, or if they give us some indication of what they would have given us for the cave, so we can sort of compare and figure out whether we hit the mark.

Photo: Team CSIRO Data 61

Now that you’ve been through Tunnel and Urban and your own version of cave, do you feel like you’re approaching a generalizable solution for underground environments?

Kottege: I’m fairly confident that we are approaching a generalizable state where our robots can perform quite well in a given underground environment. 

Talbot: Yeah, I think we are getting there. I think it needs some refinement, but I think the key components are there. One of the benefits of doing these field trips, and hopefully we do more in the future, is that we don’t really know what we can’t do until we come across that obstacle in real life. And then we go, “oh crap, we’re not prepared for that!” But from all the test environments that we’ve been in, I think we have a somewhat generalizable solution.

Read more DARPA SubT coverage from IEEE Spectrum
