Feed aggregator

Cerebras Systems, which makes a specialized AI computer based on the largest chip ever made, is breaking out of its original role as a neural-network training powerhouse and turning its talents toward more traditional scientific computing. In a simulation with 500 million variables, the CS-1 trounced the 69th-most powerful supercomputer in the world.

It also solved the problem—combustion in a coal-fired power plant—faster than the real-world flame it simulates. To top it off, Cerebras and its partners at the U.S. National Energy Technology Laboratory (NETL) claim, the CS-1 performed the feat faster than any present-day CPU- or GPU-based supercomputer could.

The research, which was presented this week at the supercomputing conference SC20, shows that Cerebras’ AI architecture “is not a one trick pony,” says Cerebras CEO Andrew Feldman.

Weather forecasting, design of airplane wings, predicting temperatures in a nuclear power plant, and many other complex problems are solved by simulating “the movement of fluids in space over time,” he says. The simulation divides the world up into a set of cubes, models the movement of fluid in those cubes, and determines the interactions between the cubes. There can be 1 million or more of these cubes and it can take 500,000 variables to describe what’s happening.

According to Feldman, solving that takes a computer system with lots of processor cores, tons of memory very close to the cores, oodles of bandwidth connecting the cores and the memory, and loads of bandwidth connecting the cores to one another. Conveniently, that’s what a neural-network training computer needs, too. The CS-1 contains a single piece of silicon with 400,000 cores, 18 gigabytes of memory, 9 petabytes per second of memory bandwidth, and 100 petabits per second of core-to-core bandwidth.
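
To make that computational pattern concrete, here is a minimal sketch (in NumPy, and emphatically not NETL's actual combustion code) of the structured-grid stencil update such fluid simulations repeat at every time step: each cell is updated from its six face neighbors, which is why memory bandwidth and neighbor-to-neighbor communication dominate the cost.

```python
# Minimal sketch of a structured-grid update (illustrative only; not NETL's
# combustion model): one explicit step of a 7-point diffusion stencil.
import numpy as np

def stencil_step(u, alpha=0.1):
    """Return u after one explicit update of interior cells from their six neighbors."""
    new = u.copy()
    new[1:-1, 1:-1, 1:-1] = u[1:-1, 1:-1, 1:-1] + alpha * (
        u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
        u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
        u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
        6.0 * u[1:-1, 1:-1, 1:-1]
    )
    return new

# A 370^3 grid holds roughly 50 million cells, and a real combustion model
# carries many variables per cell; the sketch uses a small grid so it runs quickly.
u = np.random.rand(64, 64, 64)
for _ in range(10):
    u = stencil_step(u)
```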

Scientists at NETL simulated combustion in a power plant using both a Cerebras CS-1 and the Joule supercomputer, which has 84,000 CPU cores and consumes 450 kilowatts. By comparison, the CS-1 runs on about 20 kilowatts. Joule completed the calculation in 2.1 milliseconds. The CS-1 was more than 200 times faster, finishing in 6 microseconds.

This speed has two implications, according to Feldman. One is that there is no combination of CPUs or even of GPUs today that could beat the CS-1 on this problem. He backs this up by pointing to the nature of the simulation—it does not scale well. Just as you can have too many cooks in the kitchen, throwing too many cores at a problem can actually slow the calculation down. Joule’s speed peaked when using 16,384 of its 84,000 cores.

The limitation comes from connectivity between the cores and between cores and memory. Imagine the volume to be simulated as a 370 x 370 x 370 stack of cubes (136,900 vertical stacks, each 370 layers deep). Cerebras maps the problem to the wafer-scale chip by assigning the array of vertical stacks to a corresponding array of processor cores. Because of that arrangement, communicating the effects of one cube on another is done by transferring data between neighboring cores, which is as fast as it gets. And while each layer of the stack is computed, the data representing the other layers resides inside the core’s memory, where it can be quickly accessed.
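
As a rough illustration of that mapping (our sketch, not Cerebras' software), think of each (x, y) vertical stack of cells as owned by one core: the stack's layers stay in that core's local memory, while each update only needs one column of data from each of the four neighboring cores.

```python
# Illustrative sketch of the geometric mapping described above (not Cerebras'
# actual software): one core owns one (x, y) vertical stack of cells; vertical
# coupling stays in local memory, horizontal coupling only touches the four
# neighboring cores.
import numpy as np

NX, NY, NZ = 8, 8, 370              # toy core grid; the article describes 370 x 370 stacks

columns = np.zeros((NX, NY, NZ))    # columns[i, j] is the stack held by core (i, j)
columns[NX // 2, NY // 2, NZ // 2] = 1.0   # a single hot cell to diffuse

def step(cols, alpha=0.1):
    new = cols.copy()
    for i in range(1, NX - 1):
        for j in range(1, NY - 1):
            col = cols[i, j]
            # Data from the four neighboring cores (one column each).
            lateral = cols[i - 1, j] + cols[i + 1, j] + cols[i, j - 1] + cols[i, j + 1]
            # Vertical neighbors are already in this core's local memory.
            vertical = np.zeros(NZ)
            vertical[1:] += col[:-1]
            vertical[:-1] += col[1:]
            new[i, j] = col + alpha * (lateral + vertical - 6.0 * col)
    return new

for _ in range(5):
    columns = step(columns)
```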

(Cerebras takes advantage of a similar kind of geometric mapping when training neural networks. [See sidebar “The Software Side of Cerebras,” January 2020.])

And because the simulation completed faster than the real-world combustion event being simulated, the CS-1 could now have a new job on its hands—playing a role in control systems for complex machines.

Feldman reports that the CS-1 has made inroads in the purpose for which it was originally built, as well. Drugmaker GlaxoSmithKline is a known customer, and the CS-1 is doing AI work at Argonne National Laboratory, Lawrence Livermore National Laboratory, and the Pittsburgh Supercomputing Center. He says there are several customers he cannot name in the military, intelligence, and heavy-manufacturing industries.

A next-generation CS-1 is in the works, he says. The first generation used TSMC’s 16-nanometer process, but Cerebras already has a 7-nanometer version in hand with more than double the memory—40 GB—and more than double the number of AI processor cores—850,000.

We consider the problem of learning generalized first-order representations of concepts from a small number of examples. We augment an inductive logic programming learner with two novel contributions. First, we define a distance measure between candidate concept representations that improves the efficiency of the search for the target concept and its generalization. Second, we leverage richer human inputs in the form of advice to improve the sample efficiency of learning. We prove that the proposed distance measure is semantically valid and use that to derive a PAC bound. Our experiments on diverse learning tasks demonstrate both the effectiveness and efficiency of our approach.

In 2017, our team at NASA’s Jet Propulsion Laboratory in Pasadena, Calif., was prototyping small autonomous robots capable of exploring caves and subsurface voids on the Moon, Mars, and Titan, Saturn’s largest moon. Our goal was the development of new technologies to help us answer one of humanity’s most significant questions: is there, or has there been, life beyond Earth?

The more we study the surfaces of planetary bodies in our solar system, the more we are compelled to voyage underground to seek answers to this question. Planetary subsurface voids are not only among the most likely places to find signs of life, past and present, but also, thanks to the shelter they provide, among the main candidates for future human habitation. While we were working on various technologies for cave exploration at JPL, DARPA launched the latest in its series of Grand Challenges, the Subterranean Challenge, or SubT. Where earlier events focused on on-road driving and on humanoid robots in pre-defined disaster relief scenarios, SubT focuses on the exploration of unknown and extreme underground environments. Even though SubT is about exploring such environments on Earth, we can use the competition as an analog to help us learn how to explore unknown environments on other planetary bodies.

From the beginning, the JPL team forged partnerships with four other institutions offering complementary capabilities to collectively address a daunting list of technical challenges across multiple domains in this competition. In addition to JPL’s experience in deploying robust and resilient autonomous systems in extreme and uncertain environments, the team also included Caltech, with its specialization in mobility, MIT, with its expertise in large-scale mapping, and KAIST (South Korea) and LTU (Sweden), experts in fast drones in underground environments. The more far-flung partnerships were the result of existing research collaborations, a typical pattern in robotics research. We also partnered with a range of companies who supported us with robot platforms and sensors. The shared philosophy of building Collaborative SubTerranean Autonomous Robots led to the birth of Team CoSTAR.

Our approach to the SubT Challenge

The SubT Challenge is designed to encourage progress in four distinct robotics domains: mobility (how to get around), perception (how to make sense of the world), networking (how to get the data back to the server by the end of the mission), and autonomy (how to make decisions). The competition rules and structure reflect meaningful real-world scenarios in underground environments including tunnels, urban areas, and caves. 

To be successful in the SubT Challenge requires a holistic solution that balances coverage of each domain with a recognition of how each is intertwined with the others. For example, the robots need to be small enough to travel through narrow passages, but large enough to carry the sensors and computers necessary to make autonomous decisions while navigating perceptually degraded parts of the course (dark, dusty, or smoke-filled). There’s also the challenge of power and energy: The robots need to be quick and energy-efficient to meet the endurance requirements and traverse multiple kilometers per hour in extreme environments. At the same time, onboard autonomous decision making and large-scale mapping are the biggest power demands. Such challenges are amplified on flying vehicles, which require even more dramatic trade-offs between flight time, size, and autonomous capabilities.

Our answer to this call for versatility is to present a team of AI-powered robots, comprising multiple heterogeneous platforms, to handle the various challenges of each course. To enable modularity, all our robots are equipped with the same modular autonomy software, called NeBula (Networked Belief-aware Perceptual Autonomy). NeBula is specifically designed to address stochasticity and uncertainty in various elements of the mission, including sensing, environment, motion, system health, and communication, among others. With a mix of wheeled, legged, tracked, and flying vehicles, our team relies on a decision-making process that translates mission specifications, risk, and time into strategies that adaptively prescribe which robot should be dispatched to which part of the course and when.
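
The article doesn't spell out how NeBula's dispatcher works internally, but the idea of trading mission time, risk, and expected reward against one another can be sketched loosely. The sketch below is purely hypothetical (the robot and region attributes and the scoring rule are our assumptions), and only illustrates the kind of decision being made.

```python
# Purely hypothetical sketch of a dispatch decision (not NeBula's actual
# planner): score (robot, region) pairs by expected artifacts found, discounted
# by the risk of losing the asset, and reject assignments that cannot fit in
# the remaining mission time.
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    speed_mps: float        # nominal traverse speed
    loss_risk: float        # assumed probability of losing the asset en route
    can_climb_stairs: bool

@dataclass
class Region:
    name: str
    distance_m: float       # one-way distance from the gate
    expected_artifacts: float
    has_stairs: bool

def dispatch(robots, regions, time_left_s):
    """Return the highest-scoring (score, robot, region) assignment, or None."""
    best = None
    for robot in robots:
        for region in regions:
            if region.has_stairs and not robot.can_climb_stairs:
                continue
            travel_s = region.distance_m / robot.speed_mps
            if 2 * travel_s > time_left_s:      # must also return to comms range
                continue
            score = region.expected_artifacts * (1.0 - robot.loss_risk)
            if best is None or score > best[0]:
                best = (score, robot.name, region.name)
    return best

robots = [Robot("husky1", 1.0, 0.05, False), Robot("spot1", 0.8, 0.15, True)]
regions = [Region("ground floor", 200.0, 2.0, False), Region("basement", 120.0, 3.0, True)]
print(dispatch(robots, regions, time_left_s=1800))   # -> roughly (2.55, 'spot1', 'basement')
```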

Image: Team CoSTAR A team of robots with heterogeneous capabilities handles various challenges of an unknown extreme environment. Let the exploration begin!

The hallmark of Team CoSTAR’s first year leading up to the SubT Tunnel Circuit was a series of fast iterations through potential robot configurations. Every few weeks, we would make a major adjustment to our overall solution architecture based on what we learned in the previous iteration. These changes could potentially be as major as changing our overall concept of operations, e.g., how many robots in what formation should be part of the solution. This required high-levels of adaptivity and agility in our solution development process and team culture.

Testing in representative environments offered us a crucial advantage in the competition. Our “local” test site (a four-hour drive from home) was an abandoned gold mine open to tourists called Eagle Mine. Its narrow passageways and dusty interior compelled us to invest in techniques for precise motion planning, dust mitigation, and flying in perceptually-degraded environments. For smaller-scale integration testing, we used what resources we had on the JPL campus. That meant setting up a series of inflatable tunnels in the Mars Yard, a dusty, rocky field used for rehearsing mobility sequences for Mars rovers. By joining multiple tunnels together, we could make test courses of varying lengths and widths, allowing us to make rapid progress, especially on our drones’ performance in dusty environments.

Photos: Team CoSTAR Team CoSTAR created a series of inflatable tunnels in the JPL Mars Yard to test certain specific autonomy capabilities without needing to travel to mines or caves.

Hybrid aerial-ground vehicles, platforms that roll or fly depending on obstacles in the local vicinity, were a major focus in the lead-up to our first test-run, the Systems Test and Integration eXercise (STIX), held by DARPA in Idaho Springs, Colorado, in April 2019. The robot that we developed, called Rollocopter, offers the potential for greater coverage of a given area, as it only flies when it needs to, such as to hop over a rubble pile. On flatter terrain, Rollocopter can travel in an energy-efficient ground-rolling mode. Rollocopter made its debut alongside a wheeled Husky robot, from Clearpath Robotics, at the STIX event, flying and driving in a sensing-degraded environment with high levels of dust.

Photos: Team CoSTAR The Rollocopter and Husky on their debut outing at the DARPA competition dry-run event called STIX.

Three months before the first scored SubT event, the Tunnel Circuit, DARPA revealed that the competition was to be held at a research coal mine in Pittsburgh, Pa. This mine appeared to have less dust, fewer obstacles, and wider passages than our test environments, but it was also more demanding due to its wet and muddy terrain and large, complex layout. This was a big surprise for the team, and we had to shift all kinds of things around as fast as we could. Fortunately, the muscle memory from our rapid development cycles prepared us to make a dramatic adjustment to our approach. Given the level of mud in a typical coal mine and challenges it imposes on rolling, we decided (with heavy hearts) to shelve the Rollocopters and focus on wheeled platforms and traditional quadcopters. Even though our robots are just machines, we do build up a sort of relationship with them, coming to know their quirks as we coax them to life. In light of this, the emotional pain of shelving a project can be quite acute. Nevertheless, we recognize that decisions like this are in service of the team’s broader goals, and our hope is that we’ll be able to bring these hybrid aerial-ground vehicles back in a different environment.

We not only had to rework our robot fleet before the SubT Tunnel Circuit, but also had to reassess our test plan: With no coal mines on the West Coast, we instead began scouting for coal mines in West Virginia, which lies in the same geological tract as the competition site. On the advice of one of our interns studying at West Virginia University, we contacted a small tourist mine in Beckley, W.V., called the Beckley Exhibition Coal Mine. We cold-called the mine, explaining (through mild disbelief) that we were from NASA and wanted to cross the country to test our robots in their mine. To our surprise, the town had a longstanding association with NASA. During our reconnaissance visit, the manager of the mine told us the story of local figure Homer Hickam, whose book about becoming a NASA engineer from this humble coal mining town went on to inspire the film October Sky. We were heartily welcomed.

In the month before the Tunnel event, we shipped all our robots to Beckley, where we kept a bruising cadence of day and night testing. By day, we went to locations such as Arch Mine, an active coal mine whose tunnels were 900 feet underground, and the Mine Safety and Health Administration (MSHA) facility, which had indoor mock-ups of mine environments complete with smoke simulators to train rescue personnel. By night, we ran tests in the Beckley tourist mine after the day’s tours were complete. We were working long hours, which demanded both mental and physical endurance: Every excursion involved a ritualistic loading and unloading of dozens of equipment boxes and robots, allowing us to set up shop anywhere with a power outlet. The discipline of practicing our pit crew roles in these settings paid off as the Tunnel Circuit began.

Photos: Team CoSTAR Team CoSTAR and our coal miner partners 900 feet underground at Arch Mine, an active coal mine in West Virginia. The team was testing robots in extreme environments representative of what we expected to find in the Tunnel Circuit event.

As the Tunnel Circuit event began, we noticed on the DARPA live stream that Team Explorer (a partnership between Carnegie Mellon and Oregon State University) was using some kind of device on a tripod at the mine entrance. Googling it, we learned that this was called a total station, a precision instrument normally used for surveying. Impressed by this team’s innovative application of such a tool to an unusual task, we decided to entertain even the most outlandish proposals for improving our performance and began trying to find a total station of our own before our next scored run, which was only two days away. This was a great way to maximize localization accuracy along the first ~80 meters of featureless mine entry tunnels, while the robot is still visible from the starting location. There were no total station units to be found with a fast enough shipping time online, so we worked the phones to see if there was one we could borrow. Within the next two days, we managed to borrow a device, watch lots of YouTube videos to teach ourselves how to use it, and write and test code to integrate it into our operations workflow and localization algorithms. This was one of the fastest, most fun, and most last-minute efforts our team has made in the last two years. Our performance at the Tunnel Circuit led to a second-place finish among some of the best robotics teams in the world.

Preparing for the Urban Circuit

Photo: Team CoSTAR CoSTAR member and total station operator, Ed Terry, in a candid moment on the DARPA live stream at the first outing of the total station, following two intense days of on-the-fly integration of this system.

The Tunnel Circuit had shown us how important it was to test in realistic environments, and fortunately, finding test locations for the Urban Circuit-like environments was much easier. With a fully-integrated system and team structure in place, we entered the second year of the SubT Challenge with momentum, which was essential with only five months to adapt to yet another type of environment. We framed our preparation around monthly capability milestone demonstrations, a gated process which allowed us to triage the technologies we should focus on. We took the opportunity to improve the rigor of our techniques for Simultaneous Localization and Mapping (SLAM) and planning under uncertainty, and to upgrade our computing power.

One of the major additions for the Urban Circuit was the introduction of multi-level courses, where the ability to traverse stairways was a prerequisite for accessing large portions of the course. To handle this, we added tracked robots to our fleet. Thanks to the modularity of the NeBula software framework and highly transferable hardware, we were able to go up and down stairs with our tracked robot in four months.

A mere eight weeks before the competition, we struck a partnership with Boston Dynamics to use their Spot legged robot, which arrived at our lab just before Christmas. It seemed too daunting a task to integrate Spot into our team in such a short time. However, for the team members who volunteered to work on it over the Christmas break, the chance to be given the keys to such an advanced robot was a sort of Christmas present! To become part of the robot family, Spot first had to prove it could integrate with the rest of our concept of operations, the NeBula autonomy software, and the NeBula autonomy hardware payload. Verifying these in the first two weeks convinced us that it was fit for the task. The team systematically added NeBula’s autonomy, perception, and communications modules over a matter of weeks. With a payload capacity of up to 12 kg, Spot could be equipped with the high levels of autonomy and situational awareness that allowed us to fully add it to our robot fleet only two weeks prior to the competition.

Photos: Team CoSTAR Spots equipped with the NeBula autonomy and perception payload.

As we pushed Spot to traverse extreme terrains, we attached it to an elaborate rope system devised to save our precious robot when it fell. This was a precautionary measure to help us learn Spot’s limits with its unique payload configuration. After several weeks of refining our procedures for reliable stair climbing and building up confidence in the robot’s autonomy performance, we did away with the tether just one week before the competition.

Photo: Team CoSTAR A tethered Spot preparing for stair climbing trials.

Our robots go to school

Shortly after the Urban Circuit competition location was revealed to be in the small town of Elma, Washington, we emailed Elma High School asking if they were open to NASA testing its robots in their buildings. In a follow-up phone call, a teacher reported that they thought this original email was a scam! After providing some more context for our request, they enthusiastically agreed to host us. In this way, we were able to not only test multi-level autonomy in complex building layouts but also to give the high-school students an inside look at a NASA JPL test campaign.
 
Each evening, after the students had left, we shifted our equipment and robots from our base in a hotel conference center to the school, and set up our command post in the cafeteria. The warm, clean, and well-lit school was a luxury compared to earlier field test settings in mines deep underground. Each night, we sought to cover more of the school’s complex layout: hallways, classrooms, and multiple sets of stairs. These mock runs taught us as much about the behavior of the robot team as they did about the human team, especially as everyone found ways of dealing with sustained fatigue. We typically kept practicing in the school until well after midnight, thanks to the flexibility and generosity of the staff. At one stage, we were concerned that tethering our legged robots up the stairs would chip the paintwork, but the staff said, “Don't worry about it, we need to repaint it sometime anyway!" We would periodically have visitors from the school, our hotel, and even local restaurants, whose encouragement kept our spirits high despite the long hours.

Photo: Team CoSTAR Celebrating the birthday of one of the CoSTAR team members during competition week at our testing site.

Our first SubT Urban Circuit run was scheduled for the second day of the competition, which gave us a chance to watch the first day of the DARPA live stream. We noticed a down staircase right next to the starting gate of the Alpha course. One team member mentioned offhandedly that evening that we should try throwing a communications node into the staircase as a low-risk way of expanding our communications range. Minutes later, we started making phone calls to our hosts at Elma High School. The following morning at 7 a.m., one of the Elma school teachers arrived with a box full of basketballs and volleyballs. With these raw materials, we set about making a protective shield for the communications node to help it survive bouncing down several flights of stairs. One group started chipping away at the foam volleyballs while another set about taping together basketballs into a tetrahedron.

By 9 a.m., we had produced a hollowed-out foam volleyball with a communications node embedded in it, wrapped with a rope tether. For the first (and last) time in our team’s history, we assigned a job based on athletic ability. We chose well, and our node-in-a-ball thrower stood outside of the course and launched the node cleanly over the stairway bannister, allowing us to then gently lower it down on the tether. In the end, we didn’t need the extra range provided by the node-in-a-ball as our robots were able to come back into the communication range at the bottom of the staircase without any help. 

Image: Team CoSTAR Our node-in-a-ball in action: To expand our robot’s communications range, we threw a communications node embedded in a hollowed-out foam volleyball down a staircase.

Over a 60-minute scored run, only one human supervisor stationed outside the course can see information from within the course, and only if and when a communication link is established. In addition, a pit crew of up to nine people may assist in running checklists and deploying robots prior to the start of the mission. As soon as the robots enter the course itself, the team must trust that the hardware and autonomy software is sound while remaining ready to respond to inevitable anomalies. In this respect, the group starts to resemble an elite sports team, running a to-the-minute routine.

With holes in the floor, rubble piles, and water slicks, the Urban course put our robots through their paces. As the robots moved deeper into the course and out of communications range with the human supervisor, all we could do was rely on the robots’ autonomy. On the first day, the team was startled by repeated banging and crashing noises from within the course. With an unknown number of staircases, we feared the worst: That a wheeled rover had driven itself over the edge. To our relief, the sound was just from small wooden obstacles that the robot was casually driving over. 

Our days were structured around preparing for either test runs or scored runs, followed by a post-run debrief and then many hours poring over gigabytes of collected data and making bug fixes. We cycled through the pizza-subs-burgers trifecta multiple times, which spanned the culinary options available in Elma. Before beginning a run, we ran a “smoke test” of each robot in which we drove it 2 meters autonomously to verify that every part of the pipeline was still functional. We had checklists for everything, including a checklist item to pack the checklist itself and even to make sure the base station supervisor was in the car with us. These strict procedures helped guard against mistakes, which became more likely the longer we worked.

Every run revealed unexpected edge cases for mobility and autonomy that we had to rapidly address each night back at the hotel. We split the hotel conference center into a development zone and a testing zone. In the latter, we installed a test course configuration that would rotate on a daily basis, depending on what was the most pressing issue to solve. The terrain on the real course was extremely challenging, even for legged robots. In each of the first two scored runs, we lost one of our Spot robots to various negative obstacles such as holes in the ground. In a matter of hours after each run, the hardware team built reconfigurable barriers and a wooden stage with variable-size negative obstacles to test the resiliency of obstacle detection and avoidance strategies. After implementing these fixes, we transported the robots to the hotel to organize and prepare our fleet, which stoked the curiosity of fellow guests.

And the winner is…

Going into the final day of the competition, we were tied with Team Explorer. All of the parameter tuning, debugging, and exploration strategy refinements came together in time for the last round. Capping off a 1.5-year effort, we sent our robots into the SubT Urban Course for the final time. The wheeled Huskies led the way to build a communications backbone and explore the ground floor, with the legged Spots following behind to take the stairs to other levels. 

To score even a single point, a chain of events needs to happen flawlessly. Firstly, a robot needs to have covered enough space, traversing mobility-stressing and perceptually-degraded course elements, to reach an area that has an artifact. Multiple camera video streams as well as non-visual sensors are analyzed by the NeBula machine learning framework running on the robot to detect these artifacts. Once detected, an artifact’s location must be estimated to within 5 meters of the true location defined by DARPA with respect to a calibration target at the course entrance. Finally, the robot needs to bring itself back into communication range to report the artifact location within the 60-minute window of mission duration. A critical part of accomplishing the mission in this scenario is a decision-making module that can take into account the remaining mission time, predictive mission risk, as well as chances of losing an asset, re-establishing communication, and retrieving the data. It’s a delicate balance between spending time exploring to find as many artifacts as possible, and making sure that artifact locations can be returned to base before time runs out.
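
As a hedged illustration of the scoring chain described above (our sketch, not DARPA's scoring code): a report only earns a point if the artifact was detected, its estimated position lies within 5 meters of DARPA's ground-truth location, and the report reaches the base station within the 60-minute mission window.

```python
# Hedged sketch of the artifact-scoring check described above (illustrative
# only, not DARPA's scoring software).
import math

SCORING_RADIUS_M = 5.0
MISSION_DURATION_S = 60 * 60

def would_score(estimated_xyz, true_xyz, report_time_s):
    """True if the reported artifact position would earn a point."""
    error_m = math.dist(estimated_xyz, true_xyz)   # Euclidean distance, Python 3.8+
    return error_m <= SCORING_RADIUS_M and report_time_s <= MISSION_DURATION_S

# Example: a ~3.2 m localization error reported with 15 minutes to spare scores.
print(would_score((10.0, 4.0, -1.0), (12.5, 6.0, -1.5), report_time_s=45 * 60))  # True
```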

With only 40 report submissions allowed for 20 placed artifacts, our strategy was to collect as much information as possible before submitting artifact reports. This approach of maximizing the autonomous coverage of the space meant that a substantial amount of time could go by without hearing from a robot that may be out of communications range. This made for a tense dynamic as the clock ticked down. With only 15 minutes to go in the last run, we had scored just 2 points, which would have been our lowest score of the entire competition. It didn’t make sense: We had covered more ground than in all prior runs, but without the points to show for it. We were praying that the robots would prevail and come back into communication range before the clock ran out. Within the final 15 minutes, the robots started to show up one by one, delivering their locations of the artifacts they’d found. Submitting these incoming reports, our score increased rapidly to 9, turning the mood from despairing to jubilant, as we posted our best score yet.

Image: Team CoSTAR With only 15 minutes left in the mission, our autonomous robots returned to communication range to deliver the scored artifacts, turning the mood from despairing to jubilant.

As the pit crew emerged from the course to meet the above-ground team, there was a flurry of breathless communication, and the confusion allowed for one small prank. One of the pit crew members took our team lead aside and successfully convinced him and the above-ground team, for a minute or two prior to the formal announcement, that we had only scored two points! At the same time, we were being ushered over to a pop-up TV studio, where we gathered before the camera for the final scores to be revealed. The scores flashed up on the screen, showing us with 9 points and in first place. The surprised faces of our pranked team members were priceless! For the entire team, the exhaustion, frustration, and dedication that we had given to the task dissolved in a moment of elation.

Image: Team CoSTAR The team reacts to the final scores being revealed.

While there is a healthy spirit of competition among the teams, we recognize that this challenge remains an unsolved problem and that we as a robotics community are collectively redefining the state of the art. In addition to the scored runs, we appreciated the opportunity to learn from the extraordinary variety of solutions on display from other teams. Both the formal knowledge exchange and the common experience of taking on the SubT Urban course enhanced the feeling of shared advancement.

Photos: Team CoSTAR Left: CoSTAR T-Rex, played by our field test lead, John Mayo, meets the team right after the final scored run; right: DARPA award ceremony.

Post-competition and COVID-19

After the Urban competition, the COVID-19 pandemic set in, and JPL shifted part of its focus and resources toward pandemic-related research, producing the VITAL ventilator in 37 days. Our robot fleet served us faithfully during the competition, so the robots have earned some time to recuperate (with proper PPE), but they will soon be pressed into service once more. We are in the process of equipping them with UV lights to sterilize hospitals and the JPL campus, which reinforces the growing role robots are playing in applications where no human should venture.

Photo: Team CoSTAR CoSTAR robots recuperating with proper PPE.

While the DARPA Cave Circuit in-person competition is another victim of COVID-19 restrictions, the team is continuing to prepare for this new environment. Supported by NASA’s Science Mission Directorate (SMD), the team is focusing on searching for biological signs and resources in Martian-analog lava tubes in Northern California. On a parallel track, our team is leveraging these capabilities to develop mission concepts and autonomy solutions for lunar exploration in support of NASA’s Artemis program. This will in turn help refine our traversability, navigation, and autonomy solutions for the tough environments to be found in the final round of the DARPA Subterranean Challenge in late 2021.

Image: Team CoSTAR A robot in Martian-analog extreme terrain and lava tubes. Tests conducted at Lava Beds National Monument, Tulelake, Calif.

Edward Terry is a robotics engineer and CoSTAR team member. He studied aeronautical engineering at the University of Sydney and completed the master of science in robotic systems development at Carnegie Mellon University. In Team CoSTAR, his focus is on object detection and localization under perceptually-degraded conditions.

Fadhil Ginting is a visiting student researcher in robotics at NASA’s Jet Propulsion Laboratory. He completed his master’s in robotics, systems, and control at ETH Zurich. In Team CoSTAR, his focus is on learning and decision making for autonomous multi-robot systems.

Ali Agha is a principal investigator and research technologist at NASA’s Jet Propulsion Laboratory. His research centers on autonomy for robotic systems and spacecraft, with a dual focus on planetary exploration and terrestrial applications. At JPL, he leads Team CoSTAR. Previously, he was with Qualcomm Research, leading the perception efforts for autonomous drones and robots. Prior to that, Dr. Agha was a postdoctoral researcher at MIT. He was named a NASA NIAC fellow in 2018.

While we’re super bummed that COVID forced the cancellation of the Systems Track event of the DARPA Subterranean Challenge Cave Circuit, the good news is that the Virtual Track (being virtual) is 100 percent coronavirus-free, and the final event is taking place tomorrow, November 17, right on schedule. And honestly, it’s about time the Virtual Track gets the attention that it deserves—we’re as guilty as anyone of focusing more heavily on the Systems Track, being full of real robots that alternate between amazingly talented and amazingly klutzy, but the Virtual Track is just as compelling, in a very different way.

DARPA has scheduled the Cave Circuit Virtual Track live event for Tuesday starting at 2 p.m. ET, and we’ve got all the details.

If you’ve been mostly following the Systems Track up until this point, you should definitely check out the article that the Urban Circuit Virtual Track winning team, Michigan Tech’s Team BARCS, wrote for us last week. It’s a great way of getting up to speed on what makes the virtual SubT competition so important, and so exciting.

The really amazing thing about the Virtual Track is that unlike the Systems Track, where a human in the loop can send commands to any robot in communications range, the virtual teams of robots operate fully autonomously. In fact, Virtual Track teams sent their code in weeks ago, and DARPA has been running the competition itself in secret, but on Tuesday, everyone will find out how they did. Here’s the announcement:

On Tuesday, November 17 at 2PM EST, the Defense Advanced Research Projects Agency (DARPA) will webcast its Subterranean (SubT) Challenge Cave Circuit Virtual Competition. Viewers can follow virtual versions of real autonomous robots, driven by software and algorithms created by 16 competitors, as they search a variety of virtual cave environments for target artifacts. The SubT Challenge is helping DARPA develop new tools for time-sensitive combat operations or disaster response scenarios. The winners of this virtual showcase will be announced at the end of the webcast, and $500,000 worth of prizes is at stake.

What we’re really looking forward to on Tuesday is the expert commentary. During past Systems Track events, live streaming video was available of the runs, but both the teams and the DARPA folks were far too busy running the actual competition to devote much time to commentating. Since the virtual competition itself has already been completed, we’ll be getting a sort of highlights show on Tuesday, with commentary from DARPA program manager Tim Chung, virtual competition lead Angela Maio, along with Camryn Irwin, who did a fantastic job hosting the Urban Circuit livestream earlier this year. We’ll be seeing competition run-throughs from a variety of teams, although not every run and not in real-time of course, since the event is only a couple hours long. But there will be a lot more detail than we’ve ever had before on technology and strategy directly from DARPA.

All the Virtual Track teams that submitted their code have absolutely no idea how well their virtual robots did, and they’ll be watching their runs at the same time as we are. I’ll be on Twitter for the entire event (@BotJunkie) to provide some vaguely informed and hopefully amusing commentary, and we’re hoping that some of the competing teams will be on Twitter as well to let us know how happy (or sad) they are with how their robots are performing. If you have questions, let me know, and we’ll do our best to get in touch with the teams directly, or go through DARPA during a post-event press briefing scheduled for Wednesday.

[ DARPA SubT Virtual Cave Circuit Livestream ]

Human intention detection is fundamental to the control of robotic devices that assist humans according to their needs. This paper presents a novel approach for detecting hand motion intention, i.e., rest, open, close, and grasp, and for estimating grasping force using force myography (FMG). The output is further used to control a soft hand exoskeleton called the SEM Glove. In this method, two sensor bands constructed from force sensing resistor (FSR) sensors are used to detect hand motion states and muscle activity. When both bands are placed on an arm, the sensors measure the normal forces caused by muscle contraction and relaxation. The sensor data is then processed, and hand motions are identified through a threshold-based classification method. The developed method has been tested on human subjects in object-grasping tasks. The results show that it can detect hand motions accurately and provide assistance according to the task requirements.
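
The abstract does not give the actual features or thresholds, so the sketch below is only a guess at what a threshold-based FMG classifier of this kind could look like; the normalized threshold values and the mean-force feature are our assumptions, not the paper's.

```python
# Hedged sketch of a threshold-based FMG classifier. The paper's actual
# features and thresholds are not given in the abstract; the values below
# are placeholders for illustration only.
import numpy as np

THRESHOLDS = {"rest": 0.05, "open": 0.20, "close": 0.45}  # assumed, normalized units

def classify_hand_state(fsr_band_1, fsr_band_2):
    """Map mean normal-force readings from two FSR bands to a hand-motion state."""
    activation = 0.5 * (np.mean(fsr_band_1) + np.mean(fsr_band_2))
    if activation < THRESHOLDS["rest"]:
        return "rest", 0.0
    if activation < THRESHOLDS["open"]:
        return "open", 0.0
    if activation < THRESHOLDS["close"]:
        return "close", 0.0
    # Above the highest threshold: treat the excess activation as a grasp-force estimate.
    return "grasp", float(activation - THRESHOLDS["close"])

state, force = classify_hand_state(np.array([0.6, 0.7, 0.65]), np.array([0.55, 0.6, 0.7]))
print(state, force)   # e.g. ('grasp', ~0.18)
```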

Electro-ribbon actuators are lightweight, flexible, high-performance actuators for next generation soft robotics. When electrically charged, electrostatic forces cause the electrode ribbons to progressively zip together through a process called dielectrophoretic liquid zipping (DLZ), delivering contractions of more than 99% of their length. Electro-ribbon actuators exhibit pull-in instability, and this phenomenon makes them challenging to control: below the pull-in voltage threshold, actuator contraction is small, while above this threshold, increasing electrostatic forces cause the actuator to completely contract, providing a narrow contraction range for feedforward control. We show that application of a time-varying voltage profile that starts above pull-in threshold, but subsequently reduces, allows access to intermediate steady-states not accessible using traditional feed-forward control. A modified proportional-integral closed-loop controller is proposed (Boost-PI), which incorporates a variable boost voltage to temporarily elevate actuation close to, but not exceeding, the pull-in voltage threshold. This primes the actuator for zipping and drastically reduces rise time compared with a traditional PI controller. A multi-objective parameter-space approach was implemented to choose appropriate controller gains by assessing the metrics of rise time, overshoot, steady-state error, and settle time. This proposed control method addresses a key limitation of the electro-ribbon actuators, allowing the actuator to perform staircase and oscillatory control tasks. This significantly increases the range of applications which can exploit this new DLZ actuation technology.
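
As a rough, assumption-laden sketch of the Boost-PI idea described here (a PI command plus a temporary boost voltage, with the total clamped below the pull-in threshold), driving a toy first-order actuator model; the gains, boost schedule, plant, and pull-in voltage are all illustrative placeholders, not the paper's values.

```python
# Rough sketch of the Boost-PI concept from the abstract: a PI command plus a
# temporary boost voltage that primes zipping, clamped just below the pull-in
# threshold. All numbers (gains, boost, plant, pull-in voltage) are assumed.
V_PULL_IN = 1000.0          # assumed pull-in voltage threshold [V]
KP, KI = 400.0, 120.0       # assumed PI gains
DT = 0.001                  # control period [s]

def boost_pi(setpoint, measurement, integral, t, boost_v=150.0, boost_time=0.2):
    """One Boost-PI update: returns (voltage command, updated integral term)."""
    error = setpoint - measurement
    integral += error * DT
    v = KP * error + KI * integral
    if t < boost_time:                       # temporary boost to prime zipping
        v += boost_v
    v = min(max(v, 0.0), 0.98 * V_PULL_IN)   # never exceed the pull-in threshold
    return v, integral

# Toy closed loop against an assumed first-order contraction response.
contraction, integral = 0.0, 0.0
for k in range(2000):
    v, integral = boost_pi(0.5, contraction, integral, t=k * DT)
    contraction += DT * (0.002 * v - 0.5 * contraction)
print(round(contraction, 3))   # contraction after 2 s of simulated control
```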

Extracting significant information from images that are geometrically distorted or transformed is a mainstream procedure in image processing. It becomes difficult to retrieve the relevant region when images are distorted by some geometric deformation. Hu's moments are helpful in extracting information from such distorted images due to their unique invariance property. This work focuses on early detection and grading of knee osteoarthritis, using Hu's invariant moments to understand the geometric transformation of the cartilage region in knee X-ray images. The seven invariant moments are computed for the rotated version of the test image. The results are competitive and promising, and were validated by orthopedic surgeons and rheumatologists.
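
Hu's seven moments can be computed directly from an image's central moments; here is a minimal sketch (using OpenCV, on a synthetic patch rather than an actual knee X-ray ROI, which the paper segments first) showing that they stay nearly constant when the region is rotated.

```python
# Minimal sketch: compute Hu's seven invariant moments for an image region and
# for a rotated copy, then compare. A synthetic ellipse stands in for the
# segmented cartilage region used in the paper.
import cv2
import numpy as np

def hu_moments(gray):
    """Log-scaled Hu moments of a single-channel image."""
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # compress the dynamic range

roi = np.zeros((128, 128), dtype=np.uint8)
cv2.ellipse(roi, (64, 64), (40, 20), 15, 0, 360, 255, -1)   # stand-in for a cartilage ROI

rot = cv2.warpAffine(roi, cv2.getRotationMatrix2D((64, 64), 30, 1.0), (128, 128))

print(np.round(hu_moments(roi), 3))
print(np.round(hu_moments(rot), 3))   # nearly identical despite the 30-degree rotation
```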

Several lower-limb exoskeletons enable wheelchair users to overcome obstacles that would otherwise impair daily activities, such as going upstairs. Still, as most of the currently commercialized exoskeletons require the use of crutches, they prevent the user from interacting efficiently with the environment. In a previous study, a bio-inspired controller was developed to allow dynamic standing balance for such exoskeletons. It was, however, only tested on the device without any user. This work describes and evaluates a new controller that extends the previous one with online model compensation and with the contribution of the hip joint against strong perturbations. In addition, both controllers are tested with the exoskeleton TWIICE One, worn by a pilot with a complete spinal cord injury. Their performances are compared by means of three tasks: standing quietly, resisting external perturbations, and lifting barbells of increasing weight. The new controller exhibits similar performance for quiet standing, a longer recovery time for dynamic perturbations, but a better ability to sustain prolonged perturbations and a higher weightlifting capability.

Robot-assisted gait training (RAGT) devices are used in rehabilitation to improve patients' walking function. While there are some reports on the adverse events (AEs) and associated risks in overground exoskeletons, the risks of stationary gait trainers cannot be accurately assessed. We therefore aimed to collect information on AEs occurring during the use of stationary gait robots and identify associated risks, as well as gaps and needs, for safe use of these devices. We searched both bibliographic and full-text literature databases for peer-reviewed articles describing the outcomes of stationary RAGT and specifically mentioning AEs. We then compiled information on the occurrence and types of AEs and on the quality of AE reporting. Based on this, we analyzed the risks of RAGT in stationary gait robots. We included 50 studies involving 985 subjects and found reports of AEs in 18 of those studies. Many of the AE reports were incomplete or did not include sufficient detail on different aspects, such as severity or patient characteristics, which hinders the precise counts of AE-related information. Over 169 device-related AEs experienced by between 79 and 124 patients were reported. Soft tissue-related AEs occurred most frequently and were mostly reported in end-effector-type devices. Musculoskeletal AEs had the second highest prevalence and occurred mainly in exoskeleton-type devices. We further identified physiological AEs including blood pressure changes that occurred in both exoskeleton-type and end-effector-type devices. Training in stationary gait robots can cause injuries or discomfort to the skin, underlying tissue, and musculoskeletal system, as well as unwanted blood pressure changes. The underlying risks for the most prevalent injury types include excessive pressure and shear at the interface between robot and human (cuffs/harness), as well as increased moments and forces applied to the musculoskeletal system likely caused by misalignments (between joint axes of robot and human). There is a need for more structured and complete recording and dissemination of AEs related to robotic gait training to increase knowledge on risks. With this information, appropriate mitigation strategies can and should be developed and implemented in RAGT devices to increase their safety.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand “BLT Gripper” that can change to various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI's enormous new agricultural drone is that it's got a spinning obstacle detecting sensor that's a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he's been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech -- "The Road to Vehicle Automation, a Toyota Guardian Approach" -- to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, summarizes TRI's Guardian approach -- which amplifies human drivers, rather than replacing them -- and summarizes TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-25, 2020 – [Online] ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA Bay Area Robotics Symposium – November 20, 2020 – [Online] ACRA 2020 – December 8-10, 2020 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand “BLT Gripper” that can change to various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI's enormous new agricultural drone is that it's got a spinning obstacle detecting sensor that's a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he's been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech -- "The Road to Vehicle Automation, a Toyota Guardian Approach" -- to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, describes TRI's Guardian approach -- which amplifies human drivers, rather than replacing them -- and summarizes TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]

The development of AI that can socially engage with humans is exciting to imagine, but such advanced algorithms might prove harmful if people are no longer able to detect when they are interacting with non-humans in online environments. Because we cannot fully predict how socially intelligent AI will be applied, it is important to conduct research into how sensitive humans are to behaviors of humans compared to those produced by AI. This paper presents results from a behavioral Turing Test, in which participants interacted with a human, or a simple or “social” AI within a complex videogame environment. Participants (66 total) played an open world, interactive videogame with one of these co-players and were instructed that they could interact non-verbally however they desired for 30 min, after which time they would indicate their beliefs about the agent, including three Likert measures of how much participants trusted and liked the co-player, the extent to which they perceived them as a “real person,” and an interview about the overall perception and what cues participants used to determine humanness. T-tests, Analysis of Variance and Tukey's HSD were used to analyze quantitative data, and Cohen's Kappa and χ2 were used to analyze interview data. Our results suggest that it was difficult for participants to distinguish between humans and the social AI on the basis of behavior. An analysis of in-game behaviors, survey data and qualitative responses suggests that participants associated engagement in social interactions with humanness within the game.

Wearable robots (WRs) are increasingly moving out of the labs toward real-world applications. In order for WRs to be effectively and widely adopted by end-users, a common benchmarking framework needs to be established. In this article, we outline the perspectives that in our opinion are the main determinants of this endeavor, and organize the complex landscape into three areas. The first perspective is related to quantifying the technical performance of the device and the physical impact of the device on the user. The second one refers to the understanding of the user's perceptual, emotional, and cognitive experience of (and with) the technology. The third one proposes a strategic path for a global benchmarking methodology, composed of reproducible experimental procedures representing real-life conditions. We hope that this paper can enable developers, researchers, clinicians and end-users to efficiently identify the most promising directions for validating their technology and drive future research efforts in the short and medium term.

Contemporary research in human-machine symbiosis has mainly concentrated on enhancing relevant sensory, perceptual, and motor capacities, assuming short-term and nearly momentary interaction sessions. Still, human-machine confluence encompasses an inherent temporal dimension that is typically overlooked. The present work shifts the focus to the temporal and long-lasting aspects of symbiotic human-robot interaction (sHRI). We explore the integration of three time-aware modules, each one focusing on a diverse part of the sHRI timeline. Specifically, the Episodic Memory considers past experiences, the Generative Time Models estimate the progress of ongoing activities, and the Daisy Planner devises plans for the timely accomplishment of goals. The integrated system is employed to coordinate the activities of a multi-agent team. Accordingly, the proposed system (i) predicts human preferences based on past experience, (ii) estimates performance profile and task completion time, by monitoring human activity, and (iii) dynamically adapts multi-agent activity plans to changes in expectation and Human-Robot Interaction (HRI) performance. The system is deployed and extensively assessed in real-world and simulated environments. The obtained results suggest that building upon the unfolding and the temporal properties of team tasks can significantly enhance the fluency of sHRI.

In this paper, a new scheme for multi-lateral remote rehabilitation is proposed. In this scheme, one therapist, one patient, and several trainees participate in the process of telerehabilitation (TR). This kind of strategy helps the therapist facilitate neurorehabilitation remotely. Thus, patients can stay in their homes, making treatment safer and less expensive. Meanwhile, several trainees in medical education centers can be trained by participating partially in the rehabilitation process. The trainees participate in a “hands-on” manner, so they feel as if they are rehabilitating the patient directly. To implement such a scheme, a novel theoretical method is proposed that brings the power of multi-agent systems (MAS) theory into multi-lateral teleoperation, building on the self-intelligence of the MAS. In previous related works, changing the number of participants in multi-lateral teleoperation tasks required redesigning the controllers; in this paper, using both decentralized control and the self-intelligence of the MAS avoids the need to redesign the controller in the proposed structure. Moreover, in this research, uncertainties in the operators' dynamics, as well as time-varying delays in the communication channels, are taken into account. It is shown that the proposed structure has two tuning matrices (L and D) that can be used for different scenarios of multi-lateral teleoperation. By choosing proper tuning matrices, many related works on the multi-lateral teleoperation/telerehabilitation process can be implemented. In the final section of the paper, several scenarios are introduced to achieve “Simultaneous Training and Therapy” in TR and are implemented with the proposed structure. The results confirm the stability and performance of the proposed framework.

It is well-established in the literature that biases (e.g., related to body size, ethnicity, race, etc.) can occur during the employment interview and that applicants' fairness perceptions related to selection procedures can influence attitudes, intentions, and behaviors toward the recruiting organization. This study explores how social robotics may affect this situation. Using an online, video vignette-based experimental survey (n = 235), the study examines applicant fairness perceptions of two types of job interviews: a face-to-face and a robot-mediated interview. To reduce the risk of socially desirable responses, desensitize the topic, and detect any inconsistencies in the respondents' reactions to vignette scenarios, the study employs a first-person and a third-person perspective. In the robot-mediated interview, two teleoperated robots are used as fair proxies for the applicant and the interviewer, thus providing symmetrical visual anonymity unlike prior research that relied on asymmetrical anonymity, in which only one party was anonymized. This design is intended to eliminate visual cues that typically cause implicit biases and discrimination of applicants, but also to prevent biasing the interviewer's assessment through impression management tactics typically used by applicants. We hypothesize that fairness perception (i.e., procedural fairness and interactional fairness) and behavioral intentions (i.e., intentions of job acceptance, reapplication intentions, and recommendation intentions) will be higher in a robot-mediated job interview than in a face-to-face job interview, and that this effect will be stronger for introvert applicants. The study shows, contrary to our expectations, that the face-to-face interview is perceived as fairer, and that the applicant's personality (introvert vs. extravert) does not affect this perception. We discuss this finding and its implications, and address avenues for future research.

Traditionally, the robotic end-effectors that are employed in unstructured and dynamic environments are rigid and their operation requires sophisticated sensing elements and complicated control algorithms in order to handle and manipulate delicate and fragile objects. Over the last decade, considerable research effort has been put into the development of adaptive, under-actuated, soft robots that facilitate robust interactions with dynamic environments. In this paper, we present soft, retractable, pneumatically actuated, telescopic actuators that facilitate the efficient execution of stable grasps involving a plethora of everyday objects. The efficiency of the proposed actuators is validated by employing them in two different soft and hybrid robotic grippers. The hybrid gripper uses three rigid fingers to accomplish the execution of all the tasks required by a traditional robotic gripper, while three inflatable, telescopic fingers provide soft interaction with objects. This synergistic combination of soft and rigid structures allows the gripper to cage/trap and firmly hold heavy and irregular objects. The second, simpler and highly affordable robotic gripper employs just the telescopic actuators, exhibiting an adaptive behavior during the execution of stable grasps of fragile and delicate objects. The experiments demonstrate that both grippers can successfully and stably grasp a wide range of objects, being able to exert significantly high contact forces.

This is a guest post. The views expressed here are those of the authors and do not necessarily represent positions of IEEE or its organizational units.​

“Do you smell smoke?” It was three days before the qualification deadline for the Virtual Tunnel Circuit of the DARPA Subterranean Challenge Virtual Track, and our team was barrelling through last-minute updates to our robot controllers in a small conference room at the Michigan Tech Research Institute (MTRI) offices in Ann Arbor, Mich. That’s when we noticed the smell. We’d assumed that one of the benefits of entering a virtual disaster competition was that we wouldn’t be exposed to any actual disasters, but equipment in the basement of the building MTRI shares had started to smoke. We evacuated. The fire department showed up. And as soon as we could, the team went back into the building, hunkered down, and tried to make up for the unexpected loss of several critical hours.

Team BARCS joins the SubT Virtual Track

The smoke incident happened more than a year after we first learned of the DARPA Subterranean Challenge. DARPA announced SubT early in 2018, and at that time, we were interested in building internal collaborations on multi-agent autonomy problems, and SubT seemed like the perfect opportunity. Though a few of us had backgrounds in robotics, the majority of our team was new to the field. We knew that submitting a proposal as a largely non-traditional robotics team from an organization not known for research in robotics was a risk. However, the Virtual Track gave us the opportunity to focus on autonomy and multi-agent teaming strategies, areas requiring skill in asynchronous computing and sensor data processing that are strengths of our Institute. The prevalence of open source code, small inexpensive platforms, and customizable sensors has provided the opportunity for experts in fields other than robotics to apply novel approaches to robotics problems. This is precisely what makes the Virtual Track of SubT appealing to us, and since starting SubT, autonomy has developed into a significant research thrust for our Institute. Plus, robots are fun!

After many hours of research, discussion, and collaboration, we submitted our proposal early in 2018. And several months later, we found out that we had won a contract and became a funded team (Team BARCS) in the SubT Virtual Track. Now we needed to actually make our strategy work for the first SubT Tunnel Circuit competition, taking place in August of 2019.

Building a team of virtual robots

A natural approach to robotics competitions like SubT is to start with the question of “what can X-type robot do” and then build a team and strategy around individual capabilities. A particular challenge for the SubT Virtual Track is that we can’t design our own systems; instead, we have to choose from a predefined set of simulated robots and sensors that DARPA provides, based on the real robots used by Systems Track teams. Our approach is to look at what a team of robots can do together, determining experimentally what the best team configuration is for each environment. By the final competition, ideally we will be demonstrating the value of combining platforms across multiple Systems Track teams into a single Virtual Track team. Each of the robot configurations in the competition has an associated cost, and team size is constrained by a total cost. This provides another impetus for limiting dependence on complex sensor packages, though our ranging preference is 3D lidar, which is the most expensive sensor!
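
To make the cost-capped team selection concrete, here is a minimal Python sketch of enumerating candidate teams under a budget. The platform names, costs, budget, and scoring stub are illustrative assumptions, not DARPA's actual configuration catalog or our evaluation pipeline; in practice the evaluate() stand-in would be a batch of simulation runs in the competition environments.

# Sketch: enumerate robot team compositions under a total cost cap.
# Platform names, costs, and the scoring stub are illustrative only.
from itertools import combinations_with_replacement

CATALOG = {            # hypothetical per-platform costs, in "credits"
    "ugv_lidar": 90,   # ground robot carrying 3D lidar (the pricey sensor)
    "ugv_camera": 50,  # cheaper ground robot, cameras only
    "uav_small": 70,   # small quadrotor
}
BUDGET = 300
MAX_TEAM_SIZE = 5

def evaluate(team):
    """Stand-in for scoring a team with batches of simulation runs."""
    return len(team)   # placeholder: real scoring comes from sim trials

best_team, best_score = None, float("-inf")
for size in range(1, MAX_TEAM_SIZE + 1):
    for team in combinations_with_replacement(CATALOG, size):
        if sum(CATALOG[r] for r in team) > BUDGET:
            continue                    # over the cost cap, skip
        score = evaluate(team)
        if score > best_score:
            best_team, best_score = team, score

print("best team under budget:", best_team)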

Image: Michigan Tech Research Institute The teams can rely on realistic physics and sensors but they start off with no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for their simulated robots.

One of the frequent questions we receive about the Virtual Track is if it’s like a video game. While it may look similar on the surface, everything under the hood in a video game is designed to service the game narrative and play experience, not require novel research in AI and autonomy. The purpose of simulations, on the other hand, is to include full physics and sensor models (including noise and errors) to provide a testbed for prototyping and developing solutions to those real-world challenges. We are starting with realistic physics and sensors but no maps of any kind, so the focus is on developing autonomous exploratory behavior, navigation methods, and object recognition for our simulated robots.

Though the simulation is more like real life than a video game, it is not real life. Due to occasional software bugs, there are still non-physical events, like the robots falling through an invisible hole in the world or driving through a rock instead of over it or flipping head over heels when driving over a tiny lip between world tiles. These glitches, while sometimes frustrating, still allow the SubT Virtual platform to be realistic enough to support rapid prototyping of controller modules that will transition straightforwardly onto hardware, closing the loop between simulation and real-world robots.

Full autonomy for DARPA-hard scenarios

The Virtual Track requirement that the robotic agents be fully autonomous, rather than have a human supervisor, is a significant distinction between the Systems and Virtual Tracks of SubT. Our solutions must be hardened against software faults caused by things like missing and bad data since our robots can’t turn to us for help. In order for a team of robots to complete this objective reliably with no human-in-the-loop, all of the internal systems, from perception to navigation to control to actuation to communications, must be able to autonomously identify and manage faults and failures anywhere in the control chain. 
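
As one concrete illustration of that kind of hardening, the sketch below applies a defensive check to an incoming range scan so a stale or malformed message triggers a safe fallback rather than a crash. The function name, threshold, and data format are assumptions for illustration, not our actual fault-management code.

# Sketch: defensive handling of missing or bad sensor data (hypothetical names).
import math
import time

STALE_AFTER_S = 2.0    # assumed: data older than this is unusable

def validated_scan(scan, last_stamp, now=None):
    """Return the scan if it is fresh and finite, otherwise None so the
    caller can fall back to a safe behavior (e.g., stop and replan)."""
    now = time.monotonic() if now is None else now
    if scan is None or now - last_stamp > STALE_AFTER_S:
        return None                                 # missing or stale
    if any(not math.isfinite(r) or r < 0.0 for r in scan):
        return None                                 # malformed ranges
    return scan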

The communications limitations in subterranean environments (both real and virtual) mean that we need to keep the amount of information shared between robots low, while making the usability of that information for joint decision-making high. This goal has guided much of our design for autonomous navigation and joint search strategy for our team. For example, instead of sharing the full SLAM map of the environment, each agent only shares a simplified graphical representation of the space, along with data about frontiers it has not yet explored, and is able to merge its information with the graphs generated by other agents. The merged graph can then be used for planning and navigation without having full knowledge of the detailed 3D map.
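
To illustrate the kind of merge this enables, here is a small Python sketch that fuses two simplified topological graphs by snapping together nodes that lie within a small distance of each other. The data layout, tolerance, and function name are assumptions for illustration rather than our actual representation.

# Sketch: merge two simplified topological graphs by snapping nodes that
# fall within `tol` meters of each other (hypothetical data layout).
import math

def merge_graphs(graph_a, graph_b, tol=2.0):
    """Each graph is {"nodes": {id: (x, y, z)}, "edges": {(id, id), ...}}.
    Nodes of B near a node of A are treated as the same place."""
    nodes = dict(graph_a["nodes"])
    edges = set(graph_a["edges"])
    remap = {}
    for bid, bpos in graph_b["nodes"].items():
        match = next((aid for aid, apos in graph_a["nodes"].items()
                      if math.dist(apos, bpos) <= tol), None)
        remap[bid] = match if match is not None else f"b_{bid}"
        if match is None:
            nodes[remap[bid]] = bpos
    for u, v in graph_b["edges"]:
        edges.add((remap[u], remap[v]))
    return {"nodes": nodes, "edges": edges}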

Since the objective of the SubT program is to advance the state-of-the-art in rapid autonomous exploration and mapping of subterranean environments by robots, our first software design choices focused on the mapping task. The SubT virtual environments are sufficiently rich as to provide interesting problems in building so-called costmaps that accurately separate obstructions that are traversable (like ramps) from legitimately impassable obstructions. An extra complication we discovered in the first course, which took place in mining tunnels, was that the angle of the lowest beam of the lidar was parallel to the down ramps in the tunnel environment, so the robots could not “see” the ground (or sometimes even obstructions on the ramp) until they got close enough to the lip of the ramp to receive lidar reflections off the bottom of the ramp. In this case, we not only had to change the costmap to convince the robot that there was safe ground to reach over the lip of the ramp, but also had to change the path planner to get the robot to proceed with caution onto the top of the ramp in case there were previously unseen obstructions on the ramp. 
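
The costmap half of that fix can be sketched as follows: unknown cells that border known-free space are given a high but passable cost so the planner will creep toward the lip rather than refuse to approach it. The grid encoding, cost values, and function name are invented for illustration and are not our competition costmap code.

# Sketch: give unknown cells that border known-free space a high but passable
# cost so the planner will approach a ramp lip cautiously. Encoding is invented.
import numpy as np

UNKNOWN, FREE, CAUTION = -1, 0, 60   # illustrative cost values

def soften_unknown_near_free(grid):
    """grid: 2D integer costmap using the encoding above."""
    out = grid.copy()
    free = grid == FREE
    unknown = grid == UNKNOWN
    near_free = np.zeros_like(free)          # any 4-neighbor known free?
    near_free[1:, :] |= free[:-1, :]
    near_free[:-1, :] |= free[1:, :]
    near_free[:, 1:] |= free[:, :-1]
    near_free[:, :-1] |= free[:, 1:]
    out[unknown & near_free] = CAUTION       # plannable, but expensive
    return out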

In addition to navigation in the costmaps, the robot must be able to generate its own goals to navigate to. This is what produces exploratory behavior when there is no map to start with. SLAM is used to generate a detailed map of the environment explored by a single robot—the space it has probed with its sensors. From the sensor data, we are able to extract information about the interior space of the environment while looking for holes in the data, to determine things like whether the current tunnel continues or ends, or how many tunnels meet at an intersection. Once we have some understanding of the interior space, we can place navigation goals in that space. These goals naturally update as the robot traverses the tunnel, allowing the entire space to be explored.
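
The textbook version of this idea is frontier-based exploration: free cells that border unknown space are clustered into candidate goals. Below is a minimal sketch along those lines; the grid encoding, cluster threshold, and function name are illustrative assumptions rather than our exact implementation.

# Sketch: frontier detection -- free cells adjacent to unknown space are
# clustered into candidate exploration goals. Encoding is illustrative.
import numpy as np
from scipy import ndimage

UNKNOWN, FREE, OCCUPIED = -1, 0, 100

def frontier_goals(grid, min_cluster_cells=5):
    free = grid == FREE
    unknown = grid == UNKNOWN
    near_unknown = ndimage.binary_dilation(unknown)   # 4-connected dilation
    frontier = free & near_unknown
    labels, n = ndimage.label(frontier)
    goals = []
    for i in range(1, n + 1):
        cells = np.argwhere(labels == i)
        if len(cells) >= min_cluster_cells:           # ignore tiny frontiers
            goals.append(tuple(cells.mean(axis=0)))   # centroid (row, col)
    return goals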

Sending our robots into the virtual unknown

The solutions for the Virtual Track competitions are tested by DARPA in multiple sequestered runs across many environments for each Circuit in the month prior to the Systems Track competition. We must wait until the joint award ceremony at the conclusion of the Systems Track to find out the results, and we are completely in the dark about placings before the awards are announced. It’s nerve-wracking! The challenges of the worlds used in the Circuit events are also hand-designed, so features of the worlds we use for development could be combined in ways we have not anticipated—it’s always interesting to see what features were prioritized after the event. We test everything in our controllers well enough to feel confident that we at least are submitting something reasonably stable and broadly capable, and once the solution is in, we can’t really do anything other than “let go” and get back to work on the next phase of development. Maybe it’s somewhat like sending your kid to college: “we did our best to prepare you for this world, little bots. Go do good.” 

Image: Michigan Tech Research Institute  The first SubT competition was the Tunnel Circuit, featuring a labyrinthine environment that simulated human-engineered tunnels, including hazards such as vertical shafts and rubble. 

The first competition was the Tunnel Circuit, in October 2019. This environment models human-engineered tunnels. Two substantial challenges in this environment were vertical shafts and rubble. Our team accrued 21 points over 15 competition runs in five separate tunnel environments for a second place finish, behind Team Coordinated Robotics.

The next phase of the SubT virtual competition was the Urban Circuit. Much of the difference between our Tunnel and Urban Circuit results came down to thorough testing to identify failure modes and to implementing checks and data filtering for fault tolerance. For example, in the SLAM nodes run by a single robot, the coordinates of the most recent sensor data are changed multiple times during processing and integration into the current global 3D map of the “visited” environment stored by that robot. If there is lag in IMU or clock data, the observation may be temporarily registered at a default location that is very far from the actual position. Since most of our decision processes for exploration are downstream from SLAM, this can cause faulty or impossible goals to be generated, and the robots then spend inordinate amounts of time trying to drive through walls. We updated our method to check whether the new map position had jumped a large distance from the prior map position, and if so, we threw that data out. 
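
The check itself is simple. Here is a minimal sketch; the jump threshold and function name are assumptions for illustration, since the plausible bound depends on platform speed and SLAM update rate.

# Sketch: discard SLAM-registered observations whose map position jumps an
# implausible distance from the previous one. The threshold is an assumption.
import math

MAX_JUMP_M = 5.0   # assumed bound on motion between consecutive registrations

def accept_observation(prev_xyz, new_xyz, max_jump=MAX_JUMP_M):
    """Return True if the newly registered position is plausible."""
    if prev_xyz is None:                # first observation: nothing to compare
        return True
    return math.dist(prev_xyz, new_xyz) <= max_jump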

Image: Michigan Tech Research Institute In open spaces like the rooms in the Urban circuit, we adjusted our approach to exploration through graph generation to allow the robots to accurately identify viable routes while helping to prevent forays off platform edges.

Our approach to exploration through graph generation based on identification of interior spaces allowed us to thoroughly explore the centers of rooms, although we did have to make some changes from the Tunnel circuit to achieve that. In the Tunnel circuit, we used a simplified graph of the environment based on landmarks like intersections. The advantage of this approach is that it is straightforward for two robots to compare how the graphs of the space they explored individually overlap. In open spaces like the rooms in the Urban circuit, we chose to instead use a more complex, less directly comparable graph structure based on the individual robot’s trajectory. This allowed the robots to accurately identify viable routes between features like subway station platforms and subway tracks, as well as to build up the navigation space for room interiors, while helping to prevent forays off the platform edges. Frontier information is also integrated into the graph, providing a uniform data structure for both goal selection and route planning.
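
A rough sketch of such a trajectory-based graph with frontiers attached is shown below. The node spacing, data layout, and use of networkx for the shortest-path query are illustrative assumptions, not our actual data structure; the point is that one graph serves both goal selection and route planning.

# Sketch: one graph, built from the robot's own trajectory with frontier
# nodes attached, used for both goal selection and route planning.
import math
import networkx as nx

def build_trajectory_graph(trajectory, frontiers, spacing=1.5):
    """trajectory: list of (x, y) poses; frontiers: list of (x, y) points."""
    g = nx.Graph()
    nodes = [trajectory[0]]
    for p in trajectory[1:]:
        if math.dist(p, nodes[-1]) >= spacing:        # subsample the path
            g.add_edge(nodes[-1], p, weight=math.dist(p, nodes[-1]))
            nodes.append(p)
    for f in frontiers:                               # attach each frontier
        nearest = min(nodes, key=lambda n: math.dist(n, f))
        g.add_edge(nearest, f, weight=math.dist(nearest, f))
        g.nodes[f]["frontier"] = True
    return g

# The same structure answers both questions, e.g.:
# route = nx.shortest_path(g, source=current_pose, target=chosen_frontier,
#                          weight="weight")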

The results are in!

The award ceremony for the Urban Circuit was held concurrently with the Systems Track competition awards this past February in Washington State. We sent a team representative to participate in the Technical Interchange Meeting and present the approach for our team, and the rest of us followed along from our office space on the DARPAtv live stream. While we were confident in our solution, we had also been tracking the online leaderboard and knew our competitors were going to be submitting strong solutions. Since the competition environments are hand-designed, there are always novel challenges that could be presented in these environments as well. We knew we would put up a good fight, but it was very exciting to see BARCS appear in first place! 

Any time we implement a new module in our control system, there is a lot of parameter tuning that has to happen to produce reliably good autonomous behavior. In the Urban Circuit, we did not sufficiently test some parameter values in our exploration modules. The effect of this was that the robots only chose to go down small hallways after they explored everything else in their environment, which meant very often they ran out of time and missed a lot of small rooms. This may be the biggest source of lost points for us in the Urban Circuit. One of our major plans going forward from the Urban Circuit is to integrate more sophisticated node selection methods, which can help our robots more intelligently prioritize which frontier nodes to visit. By going through all three Circuit challenges, we will learn how to appropriately add weights to the frontiers based on features of the individual environments. For the Final Challenge, when all three Circuit environments will be combined into large systems, we plan to implement adaptive controllers that will identify their environments and use the appropriate optimized parameters for that environment. In this way, we expect our agents to be able to (for example) prioritize hallways and other small spaces in Urban environments, and perhaps prioritize large openings over small in the Cave environments, if the small openings end up being treacherous overall.
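
As a toy illustration of environment-dependent frontier weighting, the sketch below scores frontier nodes with per-environment weights so the same controller prefers narrow hallways in Urban worlds and large openings in Cave worlds. The feature names and weight values are invented for illustration.

# Sketch: per-environment weights on frontier features steer goal selection.
# Feature names and weight values are invented for illustration.
WEIGHTS = {
    "urban": {"opening_width": -1.0, "distance": -0.5},  # favor narrow halls
    "cave":  {"opening_width": +1.0, "distance": -0.5},  # favor big openings
}

def score_frontier(frontier, environment):
    w = WEIGHTS[environment]
    return sum(w[k] * frontier[k] for k in w)

def pick_goal(frontiers, environment):
    return max(frontiers, key=lambda f: score_frontier(f, environment))

# pick_goal([{"opening_width": 0.9, "distance": 12.0},
#            {"opening_width": 4.0, "distance": 30.0}], "cave")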

Next for our team: Cave Circuit

Coming up next for Team BARCS is the Virtual Cave Circuit. We are in the middle of testing our hypothesis that our controller will transition from UGVs to UAVs and developing strategies for refining our solution to handle Cave Circuit environmental hazards. The UAVs have a shorter battery life than the UGVs, so executing a joint exploration strategy will also be a high priority for this event, as will completing our work on graph sharing and merging, which will give our robot teams more sophisticated options for navigation and teamwork. We’re reaching a threshold in development where we can start increasing the “smarts” of the robots, which we anticipate will be critical for the final competition, where all of the challenges of SubT will be combined to push the limits of innovation. The Cave Circuit will also have new environmental challenges to tackle: dynamic features such as rock falls have been added, which will block previously accessible passages in the cave environment. We think our controllers are well-poised to handle this new challenge, and we’re eager to find out if that’s the case.

As of now, the biggest worries for us are time and team composition. The Cave Circuit deadline has been postponed to October 15 due to COVID-19 delays, with the award ceremony in mid-November, but there have also been several very compelling additions to the testbed that we would like to experiment with before submission, including droppable networking ‘breadcrumbs’ and new simulated platforms. There are design trade-offs when balancing general versus specialist approaches to the controllers for these robots—since we are adding UAVs to our team for the first time, there are new decisions that will have to be made. For example, the UAVs can ascend into vertical spaces, but only have a battery life of 20 minutes. The UGVs by contrast have 90 minute battery life. One of our strategies is to do an early return to base with one or more agents to buy down risk on making any artifact reports at all for the run, hedging against our other robots not making it back in time, a lesson learned from the Tunnel Circuit. Should  a UAV take on this role, or is it better to have them explore deeper into the environment and instead report their artifacts to a UGV or network node, which comes with its own risks? Testing and experimentation to determine the best options takes time, which is always a worry when preparing for a competition! We also anticipate new competitors and stiffer competition all around.
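
The "buy down risk" early-return decision reduces to a timing check: head home while the remaining battery still covers the trip with margin. The sketch below is illustrative; the margin and the numbers in the comment are assumptions, not measured platform values.

# Sketch: return to base while remaining battery still covers the trip, with
# margin. The margin and example numbers are assumptions.
SAFETY_MARGIN = 1.5

def should_return(remaining_battery_s, est_time_to_base_s,
                  margin=SAFETY_MARGIN):
    return remaining_battery_s <= margin * est_time_to_base_s

# A 20-minute UAV deep in a vertical shaft trips this check much sooner than a
# 90-minute UGV, which shapes which platform should carry the courier role.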

Image: Michigan Tech Research Institute Team BARCS now has a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. 

Going forward from the Cave Circuit, we will have a year to prepare for the final DARPA SubT Challenge event, expected to take place in late 2021. What we are most excited about is increasing the level of intelligence of the agents in their teamwork and joint exploration of the environment. Since we will have (hopefully) built up robust approaches to handling each of the specific types of environments in the Tunnel, Urban, and Cave circuits, we will be aiming to push the limits on collaboration and efficiency among the agents in our team. We view this as a central research contribution of the Virtual Track to the Subterranean Challenge because intelligent, adaptive, multi-robot collaboration is an upcoming stage of development for integration of robots into our lives. 

The Subterranean Challenge Virtual Track gives us a bridge for transitioning our more abstract research ideas and algorithms relevant to this degree of autonomy and collaboration onto physical systems, and exploring the tangible outcomes of implementing our work in the real world. And the next time there’s an incident in the basement of our building, the robots (and humans) of Team BARCS will be ready to respond. 

Richard Chase, Ph.D., P.E., is a research scientist at Michigan Tech Research Institute (MTRI) and has 20 years of experience developing robotics and cyber physical systems in areas from remote sensing to autonomous vehicles. At MTRI, he works on a variety of topics such as swarm autonomy, human-swarm teaming, and autonomous vehicles. His research interests are the intersection of design, robotics, and embedded systems.

Sarah Kitchen is a Ph.D. mathematician working as a research scientist and an AI/Robotics focus area leader at MTRI. Her research interests include intelligent autonomous agents and multi-agent collaborative teams, as well as applications of autonomous robots to sensing systems. 

This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001118C0124 and is released under Distribution Statement (Approved for Public Release, Distribution Unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
