
For the past eight months, Boston Dynamics has been trying to find ways in which their friendly yellow quadruped, Spot, can provide some kind of useful response to COVID-19. The company has been working with researchers from MIT and Brigham and Women’s Hospital in Massachusetts to use Spot as a telepresence-based extension for healthcare workers in suitable contexts, with the goal of minimizing exposure and preserving supplies of PPE.

For triaging sick patients, it’s necessary to collect a variety of vital data, including body temperature, respiration rate, pulse rate, and oxygen saturation. Boston Dynamics has helped to develop “a set of contactless monitoring systems for measuring vital signs and a tablet computer to enable face-to-face medical interviewing,” all of which fits neatly on Spot’s back. This system was recently tested in a medical tent for COVID-19 triage, which appeared to be a well-constrained and very flat environment that left us wondering whether a legged platform like Spot was really necessary in this particular application. What makes Spot unique (and relatively complex and expensive) is its ability to navigate around complex environments in an agile manner. But in a tent in a hospital parking lot, are you really getting your US $75k worth out of those legs, or would a wheeled platform do almost as well while being significantly simpler and more affordable?

As it turns out, we weren’t the only ones who wondered whether Spot is really the best platform for this application. “We had the same response when we started getting pitched these opportunities in Feb / March,” Michael Perry, Boston Dynamics’ VP of business development commented on Twitter. “As triage tents started popping up in late March, though, there wasn’t confidence wheeled robots would be able to handle arbitrary triage environments (parking lots, lawns, etc).”

To better understand Spot’s value in this role, we sent Boston Dynamics a few questions about their approach to healthcare robots.

This video shows Dr. Spot (their nickname, not ours) walking around Brigham and Women’s Hospital.

While the video is very focused on Spot itself, the researchers also released a paper about the effectiveness of Spot’s payload, and about how well it worked in the triage tent, which was outside of the hospital and looks like this:

Photo: MIT/Brigham and Women’s Hospital/Boston Dynamics

The COVID-19 triage area at Brigham and Women’s Hospital includes a medical tent outside of the emergency department (a), where the researchers deployed a Spot with IR camera for fever screening and respiratory rate detection (b).

To me, this seems like somewhere a wheeled robot would do just fine, although Boston Dynamics told us that the tent also had “concrete bumps and curbs that made mobility a challenge.” Spot later left the tent and moved around the hospital itself when the small group of hospital staff trained to operate the robot rotated to the emergency department. It turns out that there’s a second, separate paper in the works about the effectiveness of Spot for telemedicine that’s still under peer review, but it’ll more directly address how useful Spot itself is as a platform in a busy hospital.

But back to our question of how useful a legged robot like Spot is in a well-constrained and mostly flat environment like the triage tent—concrete bumps and curbs could certainly be a challenge, but it seems like minor alterations to the environment (say, adding some ramps or something) would be much more cost effective than picking a legged robot over a wheeled robot. Even if there are obstacles (like stairs) that are difficult for a wheeled robot, using two (or more?) wheeled robots instead of one legged robot could still potentially be a more efficient solution.

Photo: MIT/Brigham and Women’s Hospital/Boston Dynamics

The researchers mounted four cameras on Spot and showed that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters.

For that matter, why use a robot when you could just make your remote monitoring system stationary, instead? That was our first question for Boston Dynamics roboticist Marco da Silva and field applications lead Seth Davis.

IEEE Spectrum: From what I understand from the paper, the goal was to develop a system that can adapt its distance and angle of view to take more accurate readings of patients, rather than asking patients to adapt to a static system. Why make this a mobile robot at all, rather than (for example) something that sits on a table with a couple of actuated DoFs that make the necessary adjustments?

Marco da Silva: It’s possible that you could build an actuated device expressly for this purpose, but Spot already existed and was ready to be deployed. Further, the Brigham and Women’s team was expecting long lines of patients at intake or patients seated in the tent. The expectation was that Spot could efficiently move from patient to patient.

Your Boston Dynamics colleague Michael Perry mentioned that “there wasn’t confidence wheeled robots would be able to handle arbitrary triage environments (parking lots, lawns, etc.).” Can you elaborate on that?

Seth Davis: We initially questioned the need for legged robots or even a mobile platform. In this case, the Brigham and Women’s and MIT teams informed us that a wheeled robot with sufficient payload capacity was not readily available and not well suited to the initial concept which was to operate outside the hospital in temporary treatment areas. In addition to its mobility, our robots’ obstacle avoidance abilities and simple user interface also seemed appealing to the Brigham and Women’s team as they worked right out of the box and did not require additional development or significant training in order to get something their staff could use. 

“In addition to its mobility, our robots’ obstacle avoidance abilities and simple user interface also seemed appealing to the Brigham and Women’s team as they worked right out of the box and did not require additional development or significant training in order to get something their staff could use.” —Seth Davis, Boston Dynamics

With the experience that you have now, do you think that legged robots are worth the extra cost and complexity in these situations, relative to a (likely much less expensive) wheeled platform?

Davis: It depends on the environment, the requirements for speed of deployment, and how flexible the solution needs to be. It’s highly likely that the pandemic is going to have researchers looking at creating one robotic solution (fixed or mobile) to interact with patients. In this instance, the rapidly evolving pandemic situation necessitated a robot that could be deployed in a tent, a parking lot, a lawn, or in the Emergency Department, and rapidly adapt to the sensor and data collection needs of their team. 

Has this experience suggested any other healthcare applications where legged robots would be uniquely useful?

Da Silva: We have a few hospitals around the world that are interested in this specific configuration of Spot as a “just in case” solution if or when their triage facilities need to be set up in an unknown environment. Moving forward we have teams that are looking at delivering goods and doing rounds in convalescent facilities, and mobile disinfection in ad hoc or unstructured environments. One thing we learned as well is that elevator usage is often over capacity in hospitals causing long wait times, so we’ve been approached to see how Spot can carry physical items up and down the stairs to alleviate elevator congestion.

Illustration: Marysia Machulska

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.
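One common way to juggle competing objectives like these is to collapse them into a single score for ranking candidates. Here is a minimal sketch of that idea; the objective names, weights, and 0-to-1 normalization are hypothetical illustrations of the technique, not Iktos’s actual scoring model.

```python
# A minimal sketch of multi-objective candidate scoring. The objective names,
# weights, and 0-to-1 normalization are hypothetical, for illustration only;
# real discovery programs use many more objectives and far richer models.

OBJECTIVES = {
    # name: (weight, higher_is_better)
    "binding_affinity":    (0.4, True),   # bind tightly to the target protein
    "selectivity":         (0.3, True),   # avoid binding off-target proteins
    "metabolic_stability": (0.2, True),   # survive long enough in the body
    "toxicity_risk":       (0.1, False),  # lower is better
}

def candidate_score(predictions):
    """Collapse per-objective predictions (each normalized to 0..1) into one
    scalar. Inverting the 'lower is better' objectives is one simple way to
    handle goals that pull in opposite directions."""
    total = 0.0
    for name, (weight, higher_is_better) in OBJECTIVES.items():
        value = predictions[name]
        total += weight * (value if higher_is_better else 1.0 - value)
    return total

# Example: a compound that binds well but carries some toxicity risk.
print(candidate_score({
    "binding_affinity": 0.9,
    "selectivity": 0.7,
    "metabolic_stability": 0.6,
    "toxicity_risk": 0.4,
}))  # -> 0.75
```

The tension Gaston-Mathé describes shows up directly in a model like this: nudging a structure to raise one term often lowers another, so the search is over trade-offs rather than a single number that can simply be maximized term by term.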

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Putting Drug Development on Autopilot

Earlier this year, SRI Biosciences and Iktos began collaborating on a way to use artificial intelligence and automated chemistry to rapidly identify new drugs to target the COVID-19 virus. Within four months, they had designed and synthesized a first round of antiviral candidates. Here’s how they’re doing it.


STEP 1: Iktos’s AI platform uses deep-learning algorithms in an iterative process to come up with new molecular structures likely to bind to and disable a specific coronavirus protein.

Illustrations: Chris Philpot


STEP 2: SRI Biosciences’s SynFini system is a three-part automated chemistry suite for producing new compounds. Starting with a target compound from Iktos, SynRoute uses machine learning to analyze and optimize routes for creating that compound, with results in about 10 seconds. It prioritizes routes based on cost, likelihood of success, and ease of implementation.


STEP 3: SynJet, an automated inkjet printer platform, tests the routes by printing out tiny quantities of chemical ingredients to see how they react. If the right compound is produced, the platform tests it.


STEP 4: AutoSyn, an automated tabletop chemical plant, synthesizes milligrams to grams of the desired compound for further testing. Computer-selected “maps” dictate paths through the plant’s modular components.


STEP 5: The most promising compounds are tested against live virus samples.


Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.
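As a rough illustration of that loop, here is a minimal sketch. The random “generator” and one-dimensional “scorer” are toy stand-ins for Iktos’s proprietary deep-learning models; only the generate-score-reseed structure reflects the process described above.

```python
import random

# A minimal sketch of the generate-and-test loop described above: propose a
# batch of virtual compounds, score them in silico, and seed the next batch
# with the best results. The random "generator" and one-dimensional "scorer"
# are toy stand-ins for Iktos's proprietary deep-learning models.

def generate_batch(seeds, batch_size=100):
    # Stand-in generator: perturb the best candidates from the last round.
    return [s + random.gauss(0, 0.1) for s in random.choices(seeds, k=batch_size)]

def score(compound):
    # Stand-in for predictive models checking the design objectives;
    # here we just pretend the optimum lies at 1.0.
    return -abs(compound - 1.0)

seeds = [random.random() for _ in range(10)]
for _ in range(100):                       # 100 batches of 100 = 10,000 designs
    batch = generate_batch(seeds)
    seeds = sorted(batch, key=score, reverse=True)[:10]  # keep the top scorers

print(f"best candidate after search: {seeds[0]:.3f}")
```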

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10⁶³ possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.
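Sketched as code, that three-stage flow looks something like the following. Every function body here is a toy stand-in; the real SynRoute, SynJet, and AutoSyn systems are hardware plus machine-learning software, and none of these names or data shapes come from SRI’s actual API.

```python
# A toy sketch of the three-stage SynFini flow described above: plan synthesis
# routes, validate the best route at tiny scale, then scale up. The data and
# scoring are illustrative stand-ins, not SRI's actual software or API.

def plan_routes(target):
    # SynRoute stand-in: rank candidate routes by cost, predicted chance of
    # success, and ease of implementation (lower combined score is better).
    candidates = [
        {"route": "A", "cost": 3.0, "p_success": 0.8, "ease": 2.0},
        {"route": "B", "cost": 5.0, "p_success": 0.9, "ease": 1.0},
    ]
    return sorted(candidates, key=lambda r: r["cost"] / r["p_success"] + r["ease"])

def microscale_test(route):
    # SynJet stand-in: "print" tiny quantities of ingredients and check
    # whether the expected compound actually forms.
    return route["p_success"] > 0.5

def scale_up(route, grams):
    # AutoSyn stand-in: produce milligrams-to-grams for downstream testing.
    return f"{grams} g of target made via route {route['route']}"

for route in plan_routes("antiviral-candidate-1"):
    if microscale_test(route):
        print(scale_up(route, grams=1))
        break
```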

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Out of 10⁶³ possible druglike molecules, 99.9 percent have never been synthesized.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs are also donating time and resources to the COVID-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”

iRobot has released several new robots over the last few years, including the i7 and s9 vacuums. Both of these models are very fancy and very capable, packed with innovative and useful features that we’ve been impressed by. They’re both also quite expensive—with dirt docks included, you’re looking at US $800 for the i7+, and a whopping $1,100 for the s9+. You can knock a couple hundred bucks off of those prices if you don’t want the docks, but still, these vacuums are absolutely luxury items.

If you just want something that’ll do some vacuuming so that you don’t have to, iRobot has recently announced a new Roomba option. The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400. It’s not nearly as smart as the i7 or the s9, but it can navigate (sort of) and make maps (sort of) and do some basic smart home integration. If that sounds like all you need, the i3 could be the robot vacuum for you.

iRobot calls the i3 “stylish,” and it does look pretty neat with that fabric top. Underneath, you get dual rubber primary brushes plus a side brush. There’s limited compatibility with the iRobot Home app and IFTTT, along with Alexa and Google Home. The i3 is also compatible with iRobot’s Clean Base, but that’ll cost you an extra $200, and iRobot refers to this bundle as the i3+.

The reason that the i3 only offers limited compatibility with iRobot’s app is that the i3 is missing the top-mounted camera that you’ll find in more expensive models. Instead, it relies on a downward-looking optical sensor to help it navigate, and it builds up a map as it’s cleaning by keeping track of when it bumps into obstacles and paying attention to internal sensors like a gyro and wheel odometers. The i3 can localize directly on its charging station or Clean Base (which have beacons on them that the robot can see if it’s close enough), which allows it to resume cleaning after emptying its bin or recharging. You’ll get a map of the area that the i3 has cleaned once it’s finished, but that map won’t persist between cleaning sessions, meaning that you can’t do things like set keep-out zones or identify specific rooms for the robot to clean. Many of the more useful features that iRobot’s app offers are based on persistent maps, and this is probably the biggest gap in functionality between the i3 and its more expensive siblings.
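For the curious, the core of that navigation scheme is easy to sketch. Below is a minimal, textbook-style example of dead reckoning with a beacon fix; it is our illustration of the general technique, not iRobot’s code.

```python
import math

# A minimal, textbook-style sketch of dead reckoning plus beacon relocalization,
# the combination described above. Illustrative only, not iRobot's code.

class Pose:
    def __init__(self, x=0.0, y=0.0, heading=0.0):
        self.x, self.y, self.heading = x, y, heading

    def update(self, wheel_distance_m, gyro_delta_rad):
        # Integrate the gyro for heading, then project the odometry distance.
        # Small errors in both accumulate, which is why drift builds up.
        self.heading += gyro_delta_rad
        self.x += wheel_distance_m * math.cos(self.heading)
        self.y += wheel_distance_m * math.sin(self.heading)

    def relocalize(self, beacon_x, beacon_y, beacon_heading):
        # Sighting the dock beacon gives an absolute fix, zeroing out the
        # accumulated drift. Without such a fix (the "kidnapped" case below),
        # errors grow unchecked.
        self.x, self.y, self.heading = beacon_x, beacon_y, beacon_heading

pose = Pose()
pose.update(wheel_distance_m=0.5, gyro_delta_rad=0.0)          # straight ahead
pose.update(wheel_distance_m=0.3, gyro_delta_rad=math.pi / 2)  # after a left turn
print(round(pose.x, 2), round(pose.y, 2))                      # -> 0.5 0.3
```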

According to iRobot senior global product manager Sarah Wang, the kind of augmented dead-reckoning-based mapping that the i3 uses actually works really well: “Based on our internal and external testing, the performance is equivalent with our products that have cameras, like the Roomba 960,” she says. To get this level of performance, though, you do have to be careful, Wang adds. “If you kidnap i3, then it will be very confused, because it doesn’t have a reference to know where it is.” “Kidnapping” is a term that’s used often in robotics to refer to a situation in which an autonomous robot gets moved to an unmapped location, and in the context of a home robot, the best example of this is if you decide that you want your robot to vacuum a different room instead, so you pick it up and move it there.

iRobot used to make this easy by giving all of its robots carrying handles, but not anymore, because getting moved around makes things really difficult for any robot trying to keep track of where it is. While robots like the i7 can recover using their cameras to look for unique features that they recognize, the only permanent, unique landmark that the i3 can identify for sure is the beacon on its dock. What this means is that when it comes to the i3, even more than other Roomba models, the best strategy is to just “let it do its thing,” says iRobot senior principal system engineer Landon Unninayar.

Photo: iRobot

The Roomba i3 is iRobot’s new low to midrange vacuum, starting at $400.

If you’re looking to spend a bit less than the $400 starting price of the i3, there are other options to be aware of as well. The Roomba 614, for example, does a totally decent job and costs $250. Its scheduling isn’t very clever, it doesn’t make maps, and it won’t empty itself, but it will absolutely help keep your floors clean as long as you don’t mind being a little bit more hands-on. (And there’s also Neato’s D4, which offers basic persistent maps—and lasers!—for $330.)

The other thing to consider if you’re trying to decide between the i3 and a more expensive Roomba is that without the camera, the i3 likely won’t be able to take advantage of nearly as many of the future improvements that iRobot has said it’s working on. Spending more money on a robot with additional sensors isn’t just buying what it can do now, but also investing in what it may be able to do later on, with its more sophisticated localization and ability to recognize objects. iRobot has promised major app updates every six months, and our guess is that most of the cool new stuff is going to show up in the i7 and s9. So, if your top priority is just cleaner floors, the i3 is a solid choice. But if you want a part of what iRobot is working on next, the i3 might end up holding you back.


Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Rongzhong Li, who is responsible for the adorable robotic cat Nybble, has an updated and even more adorable quadruped that's more robust and agile but only costs around US $200 in kit form on Kickstarter.

Looks like the early bird options are sold out, but a full kit is a $225 pledge, for delivery in December.

[ Kickstarter ]

Thanks Rz!

I still maintain that Stickybot was one of the most elegantly designed robots ever.

[ Stanford ]

With the unpredictable health crisis of COVID-19 continuing to place high demands on hospitals, PAL Robotics have successfully completed testing of their delivery robots in Barcelona hospitals this summer. The TIAGo Delivery and TIAGo Conveyor robots were deployed in Hospital Municipal of Badalona and Hospital Clínic Barcelona following a winning proposal submitted to the European DIH-Hero project. Accerion sensors were integrated onto the TIAGo Delivery Robot and TIAGo Conveyor Robot for use in this project.

[ PAL Robotics ]

Energy Robotics, a leading developer of software solutions for mobile robots used in industrial applications, announced that its remote sensing and inspection solution for Boston Dynamics’s agile mobile robot Spot was successfully deployed at Merck’s thermal exhaust treatment plant at its headquarters in Darmstadt, Germany. Energy Robotics equipped Spot with sensor technology and remote supervision functions to support the inspection mission.

Combining Boston Dynamics’ intuitive controls, robotic intelligence and open interface with Energy Robotics’ control and autonomy software, user interface and encrypted cloud connection, Spot can be taught to autonomously perform a specific inspection round while being supervised remotely from anywhere with internet connectivity. Multiple cameras and industrial sensors enable the robot to find its way around while recording and transmitting information about the facility’s onsite equipment operations.

Spot reads the displays of gauges in its immediate vicinity and can also zoom in on distant objects using an externally-mounted optical zoom lens. In the thermal exhaust treatment facility, for instance, it monitors cooling water levels and notes whether condensation water has accumulated. Outside the facility, Spot monitors pipe bridges for anomalies.

Among the robot’s many abilities, it can detect defects of wires or the temperature of pump components using thermal imaging. The robot was put through its paces on a comprehensive course that tested its ability to handle special challenges such as climbing stairs, scaling embankments and walking over grating.

[ Energy Robotics ]

Thanks Stefan!

Boston Dynamics really should give Dr. Guero an Atlas just to see what he can do with it.

[ DrGuero ]

World's First Socially Distanced Birthday Party: Located in London, the robotic arm was piloted in real time to light the candles on the cake by the founder of Extend Robotics, Chang Liu, who was sat 50 miles away in Reading. Other team members in Manchester and Reading were also able to join in the celebration as the robot was used to accurately light the candles on the birthday cake.

[ Extend Robotics ]

The Robocon in-person competition was canceled this year, but check out Tokyo University's robots in action:

[ Robocon ]

Sphero has managed to pack an entire Sphero into a much smaller sphere.

[ Sphero ]

Squishy Robotics, a small business funded by the National Science Foundation (NSF), is developing mobile sensor robots for use in disaster rescue, remote monitoring, and space exploration. The shape-shifting, mobile sensor robots from UC-Berkeley spin-off Squishy Robotics can be dropped from airplanes or drones and can provide first responders with ground-based situational awareness during fires, hazardous materials (HazMat) release, and natural and man-made disasters.

[ Squishy Robotics ]

Meet Jasper, the small girl with big dreams to FLY. Created by UTS Animal Logic Academy in partnership with the Royal Australian Air Force to encourage girls to soar above the clouds. Jasper was created using a hybrid of traditional animation techniques and technology such as robotics and 3D printing. A KUKA QUANTEC robot is used during the film making to help the Royal Australian Air Force tell their story in a unique way. UTS adapted their high-accuracy robot to film consistent paths, creating a video with physical sets and digital characters.

[ AU AF ]

Impressive what the Ghost Robotics V60 can do without any vision sensors on it.

[ Ghost Robotics ]

Is your job moving tiny amounts of liquid around? Would you rather be doing something else? ABB’s YuMi got you.

[ YuMi ]

For his PhD work at the Media Lab, Biomechatronics researcher Roman Stolyarov developed a terrain-adaptive control system for robotic leg prostheses, as a way to help people with amputations feel as able-bodied and mobile as possible, by allowing them to walk seamlessly regardless of the ground terrain.

[ MIT ]

This robot collects data on each cow when she enters to be milked. Milk samples and 3D photos can be taken to monitor the cow’s health status. The Ontario Dairy Research Centre in Elora, Ontario, is leading dairy innovation through education and collaboration. It is a state-of-the-art 175,000 square foot facility for discovery, learning and outreach. This centre is a partnership between the Agricultural Research Institute of Ontario, OMAFRA, the University of Guelph and the Ontario dairy industry.

[ University of Guelph ]

Australia has one of these now, should the rest of us panic?

[ Boeing ]

Daimler and Torc are developing Level 4 automated trucks for the real world. Here is a glimpse into our closed-course testing, routes on public highways in Virginia, and self-driving capabilities development. Our year of collaborating on the future of transportation culminated in the announcement of our new truck testing center in New Mexico.

[ Torc Robotics ]

Soft grippers with soft and flexible materials have been widely researched to improve the functionality of grasping. Although grippers that can grasp various objects with different shapes are important, a large number of industrial applications require a gripper that is targeted for a specified object. In this paper, we propose a design methodology for soft grippers that are customized to grasp single dedicated objects. A customized soft gripper can safely and efficiently grasp a dedicated target object with lowered surface contact forces while maintaining a higher lifting force, compared to its non-customized counterpart. A simplified analytical model and a fabrication method that can rapidly customize and fabricate soft grippers are proposed. Stiffness patterns were implemented onto the constraint layers of pneumatic bending actuators to establish actuated postures with irregular bending curvatures in the longitudinal direction. Soft grippers with customized stiffness patterns yielded higher shape conformability to target objects than non-patterned regular soft grippers. The simplified analytical model represents the pneumatically actuated soft finger as a summation of interactions between its air chambers. Geometric approximations and pseudo-rigid-body modeling theory were employed to build the analytical model. The customized soft grippers were compared with non-patterned soft grippers by measuring their lifting forces and contact forces while they grasped objects. Under the identical actuating pressure, the conformable grasping postures enabled customized soft grippers to have almost three times the lifting force than that of non-patterned soft grippers, while the maximum contact force was reduced to two thirds.

Manipulation of deformable objects has given rise to an important set of open problems in the field of robotics. Application areas include robotic surgery, household robotics, manufacturing, logistics, and agriculture, to name a few. Related research problems span modeling and estimation of an object's shape, estimation of an object's material properties, such as elasticity and plasticity, object tracking and state estimation during manipulation, and manipulation planning and control. In this survey article, we start by providing a tutorial on foundational aspects of models of shape and shape dynamics. We then use this as the basis for a review of existing work on learning and estimation of these models and on motion planning and control to achieve desired deformations. We also discuss potential future lines of work.

The study of sustainability challenges requires the consideration of multiple coupled systems that are often complex and deeply uncertain. As a result, traditional analytical methods offer limited insights with respect to how to best address such challenges. By analyzing the case of global climate change mitigation, this paper shows that the combination of high-performance computing, mathematical modeling, and computational intelligence tools, such as optimization and clustering algorithms, leads to richer analytical insights. The paper concludes by proposing an analytical hierarchy of computational tools that can be applied to other sustainability challenges.

Automatic fingerprint identification systems (AFIS) make use of global fingerprint information like ridge flow, ridge frequency, and delta or core points for fingerprint alignment, before performing matching. In latent fingerprints, the ridges will be smudged and delta or core points may not be available. It becomes difficult to pre-align fingerprints with such partial fingerprint information. Further, global features are not robust against fingerprint deformations; rotation, scale, and fingerprint matching using global features pose more challenges. We have developed a local minutia-based convolution neural network (CNN) matching model called “Combination of Nearest Neighbor Arrangement Indexing (CNNAI).” This model makes use of a set of “n” local nearest minutiae neighbor features and generates rotation-scale invariant feature vectors. Our proposed system doesn’t depend upon any fingerprint alignment information. In large fingerprint databases, it becomes very difficult to query every fingerprint against every other fingerprint in the database. To address this issue, we make use of hash indexing to reduce the number of retrievals. We have used a residual learning-based CNN model to enhance and extract the minutiae features. Matching was done on FVC2004 and NIST SD27 latent fingerprint databases against 640 and 3,758 gallery fingerprint images, respectively. We obtained a Rank-1 identification rate of 80% for FVC2004 fingerprints and 84.5% for NIST SD27 latent fingerprint databases. The experimental results show improvement in the Rank-1 identification rate compared to the state-of-the-art algorithms, and the results reveal that the system is robust against rotation and scale.

This article presents a method for grasping novel objects by learning from experience. Successful attempts are remembered and then used to guide future grasps such that more reliable grasping is achieved over time. To transfer the learned experience to unseen objects, we introduce the dense geometric correspondence matching network (DGCM-Net). This applies metric learning to encode objects with similar geometry nearby in feature space. Retrieving relevant experience for an unseen object is thus a nearest neighbor search with the encoded feature maps. DGCM-Net also reconstructs 3D-3D correspondences using the view-dependent normalized object coordinate space to transform grasp configurations from retrieved samples to unseen objects. In comparison to baseline methods, our approach achieves an equivalent grasp success rate. However, the baselines are significantly improved when fusing the knowledge from experience with their grasp proposal strategy. Offline experiments with a grasping dataset highlight the capability to transfer grasps to new instances as well as to improve success rate over time from increasing experience. Lastly, by learning task-relevant grasps, our approach can prioritize grasp configurations that enable the functional use of objects.

We describe and evaluate a neural network-based architecture aimed to imitate and improve the performance of a fully autonomous soccer team in RoboCup Soccer 2D Simulation environment. The approach utilizes deep Q-network architecture for action determination and a deep neural network for parameter learning. The proposed solution is shown to be feasible for replacing a selected behavioral module in a well-established RoboCup base team, Gliders2d, in which behavioral modules have been evolved with human experts in the loop. Furthermore, we introduce an additional performance-correlated signal (a delayed reward signal), enabling a search for local maxima during a training phase. The extension is compared against a known benchmark. Finally, we investigate the extent to which preserving the structure of expert-designed behaviors affects the performance of a neural network-based solution.

Continuously measuring the engagement of users with a robot in a Human-Robot Interaction (HRI) setting paves the way toward in-situ reinforcement learning, improved metrics of interaction quality, and better-guided interaction design and behavior optimization. However, engagement is often considered very multi-faceted and difficult to capture in a workable and generic computational model that can serve as an overall measure of engagement. Building upon the intuitive way humans can successfully assess a situation for a degree of engagement when they see it, we propose a novel regression model (utilizing CNN and LSTM networks) enabling robots to compute a single scalar engagement value during interactions with humans from standard video streams, obtained from the point of view of an interacting robot. The model is based on a long-term dataset from an autonomous tour guide robot deployed in a public museum, with continuous annotation of a numeric engagement assessment by three independent coders. We show that this model not only can predict engagement very well in our own application domain but also transfers successfully to an entirely different dataset (with different tasks, environment, camera, robot, and people). The trained model and the software are available to the HRI community, at https://github.com/LCAS/engagement_detector, as a tool to measure engagement in a variety of settings.


We’ve been keeping a close watch on GITAI since early last year—what caught our interest initially is the history of the company, which includes a bunch of folks who started in the JSK Lab at the University of Tokyo, won the DARPA Robotics Challenge Trials as SCHAFT, got swallowed by Google, narrowly avoided being swallowed by SoftBank, and are now designing robots that can work in space.

The GITAI YouTube channel has kept us more or less up to date on their progress so far, and GITAI has recently announced the next step in this effort: The deployment of one of their robots on board the International Space Station in 2021.

Photo: GITAI

GITAI’s S1 is a task-specific 8-degree-of-freedom arm with an integrated sensing and computing system and 1-meter reach.

GITAI has been working on a variety of robots for space operations, the most sophisticated of which is a humanoid torso called G1, which is controlled through an immersive telepresence system. What will be launching into space next year is a more task-specific system called the S1, which is an 8-degree-of-freedom arm with an integrated sensing and computing system that can be wall-mounted and has a 1-meter reach.

The S1 will be living on board a commercially funded, pressurized airlock-extension module called Bishop, developed by NanoRacks. Mounted on the inside of the Bishop module, the S1 will have access to a task board and a small assembly area, where it will demonstrate common crew intra-vehicular activity, or IVA—tasks like flipping switches, turning knobs, and managing cables. It’ll also do some in-space assembly, or ISA, attaching panels to create a solar array.

Here’s a demonstration of some task board activities, conducted on Earth in a mockup of Bishop:

GITAI says that “all operations conducted by the S1 GITAI robotic arm will be autonomous, followed by some teleoperations from Nanoracks’ in-house mission control.” This is interesting, because from what we’ve seen until now, GITAI has had a heavy emphasis on telepresence, with a human in the loop to get stuff done. As GITAI’s founder and CEO Sho Nakanose commented to us a year ago, “Telepresence robots have far better performance and can be made practical much quicker than autonomous robots, so first we are working on making telepresence robots practical.” 

So what’s changed? “GITAI has been concentrating on teleoperations to demonstrate the dexterity of our robot, but now it’s time to show our capabilities to do the same this time with autonomy,” Nakanose told us last week. “In an environment with minimum communication latency, it would be preferable to operate a robot more with teleoperations to enhance the capability of the robot, since with the current technology level of AI, what a robot can do autonomously is very limited. However, in an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.”

“In an environment where the latency becomes noticeable, it would become more efficient to have a mixture of autonomy and teleoperations depending on the application. Eventually, in an ideal world, a robot will operate almost fully autonomously with minimum human cognizance.” —Sho Nakanose, GITAI founder and CEO

Nakanose says that this mission will help GITAI to “acquire the skills, know-how, and experience necessary to prepare a robot to be ISS compatible, prov[ing] the maturity of our technology in the microgravity environment.” Success would mean conducting both IVA and ISA experiments as planned (autonomous and teleop for IVA, fully autonomous for ISA), which would be pretty awesome, but we’re told that GITAI has already received a research and development order for space robots from a private space company, and Nakanose expects that “by the mid-2020s, we will be able to show GITAI's robots working in space on an actual mission.”

NanoRacks is scheduled to launch the Bishop module on SpaceX CRS-21 in November. The S1 will be launched separately in 2021, and a NASA astronaut will install the robot and then leave it alone to let it start demonstrating how work in space can be made both safer and cheaper once the humans have gotten out of the way.

Pervasive sensing is increasing our ability to monitor the status of patients not only when they are hospitalized but also during home recovery. As a result, lots of data are collected and are available for multiple purposes. If operations can take advantage of timely and detailed data, the huge amount of data collected can also be useful for analytics. However, these data may be unusable for two reasons: data quality and performance problems. First, if the quality of the collected values is low, the processing activities could produce insignificant results. Second, if the system does not guarantee adequate performance, the results may not be delivered at the right time. The goal of this document is to propose a data utility model that considers the impact of the quality of the data sources (e.g., collected data, biographical data, and clinical history) on the expected results and allows for improvement of the performance through utility-driven data management in a Fog environment. Regarding data quality, our approach aims to consider it as a context-dependent problem: a given dataset can be considered useful for one application and inadequate for another application. For this reason, we suggest a context-dependent quality assessment considering dimensions such as accuracy, completeness, consistency, and timeliness, and we argue that different applications have different quality requirements to consider. The management of data in Fog computing also requires particular attention to quality of service requirements. For this reason, we include QoS aspects in the data utility model, such as availability, response time, and latency. Based on the proposed data utility model, we present an approach based on a goal model capable of identifying when one or more dimensions of quality of service or data quality are violated and of suggesting which is the best action to be taken to address this violation. The proposed approach is evaluated with a real and appropriately anonymized dataset, obtained as part of the experimental procedure of a research project in which a device with a set of sensors (inertial, temperature, humidity, and light sensors) is used to collect motion and environmental data associated with the daily physical activities of healthy young volunteers.

Today, Walmart and Zipline are announcing preliminary plans “to bring first-of-its kind drone delivery service to the United States.” What makes this drone-delivery service the first of its kind is that Zipline uses fixed-wing drones rather than rotorcraft, giving them a relatively large payload capacity and very long range at the cost of a significantly more complicated launch, landing, and delivery process. Zipline has made this work very well in Rwanda, and more recently in North Carolina. But expanding into commercial delivery to individual households is a much different challenge. 

Along with a press release that doesn’t say much, Walmart and Zipline have released a short video of how they see the delivery operation happening, and it’s a little bit more, uh, optimistic than we’re entirely comfortable with.

Here’s the video:

And here’s all of the actually useful information from the one-page press release:

The new service will make on-demand deliveries of select health and wellness products with the potential to expand to general merchandise. Trial deliveries will take place near Walmart’s headquarters in Northwest Arkansas. Zipline will operate from a Walmart store and can service a 50-mile radius, which is about the size of the state of Connecticut. The operation will likely begin early next year, and, if successful, we’ll look to expand.
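As a quick sanity check on that coverage comparison (Connecticut’s land area of roughly 5,500 square miles is our own added figure, not from the press release):

```python
import math

# Sanity-check the coverage claim above: a 50-mile service radius versus
# Connecticut (~5,500 sq mi of land area; our figure, not the press release's).
service_area_sq_mi = math.pi * 50**2
print(f"50-mile radius covers ~{service_area_sq_mi:,.0f} sq mi")  # ~7,854
print("Connecticut: ~5,500 sq mi, so the comparison is roughly right")
```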

At first glance, there’s basic feasibility here, in the sense that most health and wellness products are likely to be of the size and weight to be transportable by one of Zipline’s drones—called Zips—and that a Zipline fulfillment center with a drone catapult and retrieval system could be set up to operate in a Walmart parking lot (or somewhere nearby) without any problems. However, drone delivery needs a lot more than basic feasibility to be successful—without more detail in the press release, we’ve had to look carefully at the video, and we’ve got some questions.

From the beginning of the video until about 20 seconds in, everything seems straightforward. A customer places an order, and a Zip is loaded and launched. Zipline has been doing this in Ghana and Rwanda for years, and we’ve seen firsthand how fast and efficient their operation is. It’s easy to see how this could translate into shipping items from a Walmart.

Our first question comes up at 22 seconds in, which shows a Zip flying along over a suburban or rural area a couple of hundred feet off the ground. Generally, this airspace is uncontrolled, meaning that other aircraft could be operating nearby. Zipline’s drones can detect other aircraft that are equipped with ADS-B transmitters, which covers an increasing number of manned aircraft. However, up to 400 feet of altitude, airspace is (with some exceptions) typically open to consumer drones as well, which usually do not have ADS-B transmitters. We know that Zipline is working on its own onboard sense and avoid system, but until they have that working, there’s a risk of a Zip colliding with another drone. The sky is big, so this may not be very likely, but it’s still something that should be taken into consideration. One way of mitigating this risk is by flying higher than 400 feet, but that starts getting into more complicated stuff with the U.S. Federal Aviation Administration. Zipline and Walmart are undoubtedly getting into complicated stuff with the FAA anyway, though, so maybe that’s the plan.
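For a sense of what “detect other aircraft” involves in practice, here is a minimal sketch of a separation check against ADS-B position reports. The thresholds and the flat-earth distance approximation are our own illustrative assumptions, not Zipline’s avoidance logic.

```python
import math

# A minimal sketch of a separation check against ADS-B position reports.
# Thresholds and the equirectangular distance approximation are illustrative
# assumptions, not Zipline's actual sense-and-avoid logic.

SAFE_HORIZONTAL_M = 2000.0  # assumed horizontal separation threshold
SAFE_VERTICAL_M = 150.0     # assumed vertical separation threshold

def horizontal_distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation; adequate over the few km that matter here.
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6371000
    dy = math.radians(lat2 - lat1) * 6371000
    return math.hypot(dx, dy)

def is_conflict(own, traffic):
    """own/traffic: dicts with lat, lon (degrees) and alt (meters)."""
    close_horizontally = horizontal_distance_m(
        own["lat"], own["lon"], traffic["lat"], traffic["lon"]
    ) < SAFE_HORIZONTAL_M
    close_vertically = abs(own["alt"] - traffic["alt"]) < SAFE_VERTICAL_M
    return close_horizontally and close_vertically

own = {"lat": 36.10, "lon": -94.15, "alt": 90}
adsb_report = {"lat": 36.11, "lon": -94.15, "alt": 120}
print(is_conflict(own, adsb_report))  # True: reroute or hold
```

The catch, as noted above, is that a check like this only sees traffic that is broadcasting ADS-B in the first place; most consumer drones are invisible to it.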

From 26 seconds to 30 seconds, we see what looks like the same kind of Zip delivery that we saw in Africa, so that’s cool. But between 31 and 35 seconds, the video shows exactly where that delivery happened: What appears to be a walkway up to a suburban house, in between a parked car, a porch, and the street. As far as we know, and based on what we’ve seen of Zips making deliveries, this kind of precision is simply not possible for a package on a parachute dropped from a fixed-wing drone.

As far as we’re aware Zipline’s parachute system fundamentally cannot achieve the porch-level precision that the video advertises. This is a big deal, because it places substantial constraints on where Walmart will be able to deliver to.

While Zips do their best to make pinpoint deliveries, even going as far as compensating for wind whenever possible, you really need a circular-ish open area with a radius of perhaps 5 meters or so for the Zips to deliver to. And you wouldn’t really want to have something like a house adjacent to that, since there’d be some risk of a package landing on the roof. Being close to a road would be even worse, because you can imagine how a driver might react if a wayward box on a parachute landed on their windshield by surprise. Finally, since Zips descend to somewhere between 35 and 50 feet to release the package, you need a flight path across the delivery area that’s free of obstructions. Zips can drop packages from higher altitudes, of course, but if they do, the delivery area needs to be even larger.
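Some rough arithmetic shows why the delivery zone has to be generous. All of the numbers in this back-of-the-envelope sketch (release height, parachute descent rate, wind speed) are assumptions for illustration, not Zipline specifications.

```python
# Back-of-the-envelope sketch of parachute drift during a drop. All numbers
# are assumptions for illustration, not Zipline specifications.

release_height_m = 12.0   # ~40 ft, within the 35-to-50-foot range cited above
descent_rate_m_s = 3.0    # assumed steady parachute descent rate
wind_speed_m_s = 3.0      # a light breeze of about 6 knots

time_to_ground_s = release_height_m / descent_rate_m_s
wind_drift_m = wind_speed_m_s * time_to_ground_s

print(f"descent takes ~{time_to_ground_s:.0f} s")          # ~4 s
print(f"wind alone displaces the package ~{wind_drift_m:.0f} m")  # ~12 m
```

Under these assumptions, a light breeze alone pushes the package on the order of 12 meters if uncompensated; even with the wind compensation Zips perform, the residual error budget is measured in meters, not the fraction of a meter that a porch-and-walkway landing would require.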

We sent Zipline and Walmart some specific questions about what’s going on in the video and how the delivery process will actually work, and received the following response:

The video represents the vision for how the delivery service to Walmart customer homes will work. We’ll be happy to keep you posted on the technical aspects of the operation as we get closer to launching the trial.

We sent a follow-up email to Walmart asking for some clarification, but they weren’t able to share any additional detail on the record.

The issue I have with Walmart’s desire to show their vision is that I really don’t see how this vision could ever become a reality through Zipline, because as far as we’re aware, Zipline’s parachute system fundamentally cannot achieve the porch-level precision that the video advertises. This is a big deal, because it places substantial constraints on where Walmart will be able to deliver to, and the dense suburbs shown in the video may realistically be off the table. What the video shows is closer to what most consumers probably associate with drone delivery, because that’s the version relentlessly promoted by companies like Google and Amazon, which rely on the precision of rotorcraft. But that’s just not what Zipline does, and honestly, the fact that Zipline doesn’t do that stuff is one of the reasons we think Zipline’s tech is uniquely useful.

Putting the video and the press release aside, let’s think about what Zipline and Walmart could realistically accomplish together. Assuming that “we’ll be happy to keep you posted on the technical aspects of the operation” actually means “we don’t have any easy answers to the questions that you asked” rather than “we have some amazing and secret new parachute steering technology* that will solve every problem,” what would Zips delivering stuff from Walmart actually look like?

The biggest issue here, I think, is making deliveries with fixed-wing drones dropping boxes on parachutes in relatively dense suburban neighborhoods. I just don’t see how that’s going to work in a safe and scalable way, and urban deliveries would of course be even worse. But that’s totally fine—in high-density areas, other delivery systems already exist and can operate efficiently: legacy systems (like humans moving stuff in trucks) and gig workers, as well as new technologies like sidewalk robots, autonomous vehicles, and hybrid systems. For these delivery systems to make sense, though, there needs to be a certain density of customers, such that the balance of time spent making deliveries versus time spent getting from one place to another works out in your favor. Otherwise, your delivery system is hard to make sustainable.
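Here’s a toy model of that density argument: deliveries per hour for a ground-based courier as a function of the distance between stops. The speed and service-time numbers are illustrative assumptions, not data from Walmart or Zipline:

```python
# Toy model of the customer-density argument for ground-based delivery.
def deliveries_per_hour(stop_distance_km, speed_kmh=30.0, service_min=3.0):
    travel_min = stop_distance_km / speed_kmh * 60
    return 60 / (travel_min + service_min)

for d in (0.2, 1.0, 5.0, 20.0):
    print(f"{d:5.1f} km between stops -> {deliveries_per_hour(d):4.1f}/hour")
# Dense suburb (0.2 km): ~18/hour. Rural (20 km): ~1.4/hour -- the regime
# where a fixed-wing drone flying point-to-point starts to win.
```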

What this means is that if you live in a rural area, your options for on-demand delivery are much more limited, which is part of the reason that Zipline exists in the first place: It excels at fast, efficient deliveries to isolated locations a substantial distance away. Zipline does this kind of delivery better than anyone else, and rural delivery is a niche that rotorcraft or sidewalk robots or whatever just can’t compete in. Furthermore, for many people who live in rural areas, this kind of delivery would be incredibly valuable, because their options are so limited. For Zipline, the great thing about focusing on rural rather than suburban delivery is that delivery becomes much less complicated: People are more spread out, and it’s more likely that homes will have backyards that can easily accommodate a Zip parachute delivery. It really seems like rural areas, rather than suburbs, are where a Zipline-Walmart partnership would have the most value, at least if Zipline is not going to significantly alter its operation.

In the past, I’ve been super skeptical of urban (and to a lesser extent, suburban) delivery drones. I still am, primarily because I’m not convinced that the risk and expense of using drones to deliver things is worth it, relative to already established delivery systems or new delivery systems (like ground robots) that operate more conventionally. But rural delivery is different, and Zipline has shown that they can do it quickly and efficiently. So much of drone delivery really seems like it’s just companies reacting to the positive press that they inevitably get, combined with consumers asserting that it’s something they want without really thinking about whether it’s something that will make a tangible difference to their lives. For someone who lives far away from the nearest Walmart, though, being able to order and receive something like medicine in an hour without having to leave their yard could make a difference in a way that only Zipline can, at this point, deliver on.

*I desperately want this to be the case

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Clearpath Robotics and Boston Dynamics were obviously destined to partner up with Spot, because Spot 100 percent stole its color scheme from Clearpath, which has a monopoly on yellow and black robots. But seriously, the news here is that thanks to Clearpath, Spot now works seamlessly with ROS.

[ Clearpath Robotics ]

A new video created by Swisscom Ventures highlights a research expedition sponsored by Moncler to explore the deepest ice caves in the world using Flyability’s Elios drone. [...] The expedition was sponsored by apparel company Moncler and took place over two weeks in 2018 on the Greenland ice sheet, the second largest body of ice in the world after Antarctica. Research focused on an area about 80 kilometers east of Kangerlussuaq, where scientists wanted to study the movement of water deep underground to better understand the effects of climate change on the melting ice.

[ Flyability ]

Shane Wighton of the “Stuff Made Here” YouTube channel, whose terrifying haircut machine we featured a few months ago, has improved on his robotic basketball hoop. It’s actually more than an improvement: It’s a complete redesign that nearly drove Wighton insane. But the result is pretty cool. It’s fun to watch him building a highly complicated system while always seeking simple and elegant designs for its components.

[ Stuff Made Here ]

SpaceX rockets are really just giant, explosion-powered drones that go into space sometimes. So let’s watch more videos of them! This one is sped up, compressing a flight into just a couple of minutes.

[ SpaceX ]

Neato Robotics makes some solid autonomous vacuums, and these incremental upgrades feature improved battery life and better air filters.

[ Neato Robotics ]

A full-scale engineering model of NASA's Perseverance Mars rover now resides in a garage facing the Mars Yard at NASA's Jet Propulsion Laboratory in Southern California.

This vehicle system test bed rover (VSTB) is also known as OPTIMISM, which stands for Operational Perseverance Twin for Integration of Mechanisms and Instruments Sent to Mars. OPTIMISM was built in a warehouse-like assembly room near the Mars Yard – an area that simulates the Red Planet’s rocky surface. The rover helps the mission test hardware and software before it’s transmitted to the real rover on Mars. OPTIMISM will share the space with the Curiosity rover’s twin MAGGIE.

[ JPL ]

Heavy-asset industries like shipping, oil and gas, and manufacturing are grounded in repetitive tasks like locating items on large industrial sites -- a tedious job that can take as long as 45 minutes when you’re hunting for something like a forklift in an area the size of multiple football fields. Not only is this work boring, it’s dangerous and inefficient. Robots like Spot, however, love this sort of work.

Spot can provide real-time updates on the location of assets and complete other mundane tasks. In this case, Spot is using software from Cognite to roam the vast shipyard to locate and manage more than 100,000 assets stored across the facility. What used to take humans hours can be managed on an ongoing basis by Spot -- leaving employees to focus on more strategic tasks.

[ Cognite ]

The KNEXT Barista system helps high-volume premium coffee providers offer artisan coffee specialties with consistent quality.

[ Kuka ]

In this paper, we study this idea of generality in the locomotion domain. We develop a learning framework that can learn sophisticated locomotion behavior for a wide spectrum of legged robots, such as bipeds, tripeds, quadrupeds and hexapods, including wheeled variants. Our learning framework relies on a data-efficient, off-policy multi-task RL algorithm and a small set of reward functions that are semantically identical across robots.
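The key idea is that the reward is defined over quantities every legged robot has (forward velocity, actuation cost, staying upright), so the same reward function spans morphologies. Here’s a minimal sketch of that concept; it illustrates the shared-reward idea only and is not DeepMind’s implementation:

```python
# Sketch of "semantically identical rewards": one reward definition evaluated
# for robots with different morphologies, so data from every platform can feed
# a single off-policy learner. Illustration of the concept, not the paper's code.
from dataclasses import dataclass

@dataclass
class RobotState:
    forward_velocity: float   # m/s along the commanded heading
    actuation_cost: float     # e.g., sum of squared joint torques
    upright: bool             # body within the allowed tilt

def locomotion_reward(s: RobotState, target_velocity=1.0) -> float:
    """Same semantics for a biped, quadruped, or hexapod:
    track a velocity target, stay upright, don't waste energy."""
    tracking = -abs(s.forward_velocity - target_velocity)
    penalty = -0.01 * s.actuation_cost
    alive = 0.5 if s.upright else -1.0
    return tracking + penalty + alive

# Each robot's simulator maps its own joint-level state into RobotState,
# so one reward function scores rollouts from every morphology.
print(locomotion_reward(RobotState(0.9, 4.0, True)))  # ~0.36
```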

[ DeepMind ]

Thanks Dave!

Even though it seems like the real risk of COVID is catching it from another person, robotics companies are doing what they can with UVC disinfecting systems.

[ BlueBotics ]

Aeditive develops robotic 3D printing solutions for the production of concrete components. At the heart of the production plant are two large robots that cooperate to manufacture each component using a robotic shotcrete process: They apply concrete layer by layer until the component is complete. This means that customers no longer depend on formwork, which is expensive and time-consuming to create; instead, they can manufacture components directly on a steel pallet, without moulds.

[ Aeditive ]

Something BIG is coming next month from Robotiq!

My guess: an elephant.

[ Robotiq ]

TurtleBot3 is a great little home robot, as long as you have a TurtleBot3-sized home.

[ Robotis ]

How do you calculate the coordinated movements of two robot arms so they can accurately guide a highly flexible tool? ETH researchers have integrated all aspects of the optimisation calculations into an algorithm. The hot-wire cutter will be used, among other things, to develop building blocks for a mortar-free structure.

[ ETH Zurich ]

And now, this.

[ RobotStart ]

Human beings can achieve a high level of motor performance that is still unmatched in robotic systems. These capabilities can be ascribed to two main enabling factors: (i) the physical properties of the human musculoskeletal system, and (ii) the effectiveness of the control operated by the central nervous system. Regarding point (i), the introduction of compliant elements in the robotic structure can be regarded as an attempt to bridge the gap between the animal body and the robot one. Soft articulated robots aim at replicating the musculoskeletal characteristics of vertebrates. Yet, substantial advancements are still needed from a control point of view to fully exploit the new possibilities provided by soft robotic bodies. This paper introduces a control framework that ensures natural movements in articulated soft robots, implementing specific functionalities of the human central nervous system, i.e., learning by repetition, after-effects on known and unknown trajectories, anticipatory behavior, its reactive re-planning, and state covariation in precise task execution. The control architecture we propose has a hierarchical structure composed of two levels. The low level deals with dynamic inversion and focuses on trajectory-tracking problems. The high level manages the redundancy in degrees of freedom, and it allows the system to be controlled through a reduced set of variables. The building blocks of this novel control architecture are well rooted in control theory, which furnishes an established vocabulary to describe the functional mechanisms underlying the motor control system. The proposed control architecture is validated through simulations and experiments on a biomimetic articulated soft robot.

Lower-extremity exoskeletons offer the potential to restore ambulation to individuals with paraplegia due to spinal cord injury. However, they often rely on preprogrammed gait, initiated by switches, sensors, and/or EEG triggers. Users can exercise only limited independent control over the trajectory of the feet, the speed of walking, and the placement of the feet to avoid obstacles. In this paper, we introduce and evaluate a novel approach that naturally decodes a neuromuscular surrogate for the user’s neurally planned foot control, uses the exoskeleton’s motors to move the user’s legs in real time, and provides sensory feedback to the user, allowing real-time sensation and path correction that results in gait similar to biological ambulation. Users express their desired gait by applying Cartesian forces via their hands to rigid trekking poles that are connected to the exoskeleton feet through multi-axis force sensors. Using admittance control, the forces applied by the hands are converted into desired foot positions every 10 milliseconds (ms), to which the exoskeleton is moved by its motors. As the trekking poles reflect the resulting foot movement, users receive sensory feedback of foot kinematics and ground contact that allows on-the-fly force corrections to maintain the desired foot behavior. We present preliminary results showing that our novel control can allow users to produce biologically similar exoskeleton gait.
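To make the admittance-control idea concrete, here’s a minimal sketch of the force-to-motion mapping the abstract describes: hand forces measured at the poles drive a virtual mass-damper, whose integrated motion becomes the foot-position setpoint every 10 ms. The gains and structure are illustrative assumptions, not the authors’ published controller:

```python
# Sketch of an admittance controller: force in, motion out. Gains are
# illustrative assumptions, not the paper's values.
import numpy as np

DT = 0.010       # 10 ms control period, as stated in the abstract
DAMPING = 40.0   # N per (m/s): higher = "heavier"-feeling foot
MASS = 2.0       # virtual mass, smooths force transients

class AdmittanceFoot:
    def __init__(self, position):
        self.x = np.asarray(position, dtype=float)  # desired foot position (m)
        self.v = np.zeros(3)                        # virtual foot velocity (m/s)

    def step(self, hand_force):
        """hand_force: 3D force from the pole's multi-axis sensor (N)."""
        # Virtual dynamics: M * dv/dt = F - B * v
        accel = (np.asarray(hand_force) - DAMPING * self.v) / MASS
        self.v += accel * DT
        self.x += self.v * DT
        return self.x  # position setpoint sent to the exoskeleton motors

foot = AdmittanceFoot([0.0, 0.0, 0.0])
for _ in range(100):                      # 1 s of pushing forward with 20 N
    target = foot.step([20.0, 0.0, 0.0])
print(target)  # foot target has advanced roughly half a meter in x
```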

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today's videos.

From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.

Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
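That decoupling is the interesting architectural choice: physics can step as fast as the host machine allows, with rendering as an optional consumer. Here’s a toy sketch of the pattern; all class and method names are hypothetical stand-ins, not Flightmare’s actual API:

```python
# Toy sketch of decoupled physics and rendering. Hypothetical names throughout.
class QuadrotorPhysics:
    """Placeholder dynamics: integrates a trivial state at a fixed rate."""
    def __init__(self, dt=0.002):
        self.dt, self.t, self.state = dt, 0.0, [0.0, 0.0, 0.0]

    def step(self, thrust):
        self.state[2] += (thrust - 9.81) * self.dt  # toy vertical dynamics
        self.t += self.dt

class Renderer:
    """Stands in for the Unity side; only invoked when a frame is wanted."""
    def draw(self, state, sim_time):
        print(f"t={sim_time:6.3f}s  z-velocity={state[2]:+.3f}")

physics, renderer = QuadrotorPhysics(), Renderer()
RENDER_EVERY = 100   # render at 1/100th of the physics rate -- or never,
                     # when training RL policies headless
for i in range(500):
    physics.step(thrust=10.0)
    if i % RENDER_EVERY == 0:
        renderer.draw(physics.state, physics.t)
```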

[ Flightmare ]

Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.

We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd-aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.

[ Project ]

Thanks Fan!

The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.

Fetch Robot disinfecting dance party woo!

[ Oregon State ]

How could you not take a mask from this robot?

[ Reachy ]

This work presents the design, development, and autonomous navigation of the alpha version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments, including manhole-sized tubes. The robot is particularly lightweight and agile, while it implements a rigid collision-tolerant design which renders it resilient during forcible interaction with the environment. Furthermore, the design of the system is enhanced through passive flaps, ensuring smoother and more compliant collisions, which we identified as especially useful in very confined settings.

[ ARL ]

Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.

Dat backing into the charging dock tho.

[ Pepper ]

RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.

This is from 2015, why isn't all of my furniture autonomous yet?!
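For the curious, the control idea reduces to something like this: learn an average displacement per vibration pattern, then chain together the patterns that point toward the goal. A toy version follows, with illustrative numbers; the actual work learns the effects from random trials and uses a proper optimizer over pattern sequences:

```python
# Toy version of the RatChair idea: per-pattern displacement model + greedy
# sequencing. Numbers are made up for illustration.
import numpy as np

# Average (dx, dy) displacement per burst, as if estimated from random trials.
pattern_effects = {
    "burst_a": np.array([0.020, 0.003]),
    "burst_b": np.array([-0.004, 0.018]),
    "burst_c": np.array([0.010, -0.012]),
}

def plan(start, target, max_bursts=500):
    """Greedily pick, at each step, the burst whose learned displacement
    points most strongly toward the remaining error."""
    pos, sequence = np.array(start, float), []
    for _ in range(max_bursts):
        error = np.array(target, float) - pos
        name, delta = max(pattern_effects.items(), key=lambda kv: kv[1] @ error)
        if np.linalg.norm(error - delta) >= np.linalg.norm(error):
            break  # no burst gets us closer; stop here
        pos += delta
        sequence.append(name)
    return sequence, pos

seq, final = plan(start=(0.0, 0.0), target=(0.5, 0.3))
print(f"{len(seq)} bursts, final position {final.round(3)}")
```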

[ KAIST ]

The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.

[ SeaDrone ]

Thanks Eduardo!

Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing mineral foams for building-scale applications.

[ ETH ]

Thanks Fan!

Here's an interesting idea for a robotic gripper: what appears to be a snap bracelet coupled to a pneumatic actuator that allows the bracelet to be reset.

[ Georgia Tech ]

Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:

A couple of things about this: The hard part, which the video skips over almost entirely, is the mapping, localization, and understanding of where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, which is perhaps easy under controlled, ideal conditions but difficult to apply to a world full of lawns that are all different. The commercial angle is interesting because golf courses are likely as standardized as lawns get, but the emphasis here on how much money they can make, without really addressing any of the technical challenges, makes me raise an eyebrow or two.

[ Graze ]

The record & playback X-series arm demo allows the user to record the arm's movements while the motors are torqued off. Then, the user may torque the motors on and watch the movements they just made play back!
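The demo boils down to a simple loop: sample joint angles while the motors are limp, then command them back at the same rate. Here’s a sketch of that pattern, using a hypothetical arm interface rather than the actual Interbotix API:

```python
# Record-and-playback sketch. The `arm` object and its four methods are
# hypothetical stand-ins, not the Interbotix X-series API.
import time

RATE_HZ = 50
DT = 1.0 / RATE_HZ

def record(arm, duration_s):
    """With torque off, the arm is limp; just log where the human moves it."""
    arm.torque_off()
    trajectory = []
    for _ in range(int(duration_s * RATE_HZ)):
        trajectory.append(arm.read_joint_positions())  # list of joint angles
        time.sleep(DT)
    return trajectory

def playback(arm, trajectory):
    """With torque on, command the logged positions back at the same rate."""
    arm.torque_on()
    for joints in trajectory:
        arm.set_joint_positions(joints)
        time.sleep(DT)

# Usage, given any arm object implementing the four calls above:
#   traj = record(arm, duration_s=10)
#   playback(arm, traj)
```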

[ Interbotix ]

Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.

[ Shadow Robot ]

Quanser Interactive Labs is a collection of virtual hardware-based laboratory activities that supplement traditional or online courses. Just as they would with physical systems in the lab, students work with virtual twins of Quanser's most popular plants: They develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity, real-time 3D virtual models. The virtual systems not only look like the real ones, they also behave like them, and can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually validated designs on actual physical equipment.

[ Quanser ]

This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.

This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The Robotic platform provides superior visualisation and enhanced dexterity, through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.

[ Tristan D. Yan ]

Thanks Fan!

In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.

[ RPG ]

Last year, Spectrum reported on Japan’s public-private initiative to create a new industry around electric vertical takeoff and landing vehicles (eVTOLs) and flying cars. Last Friday, start-up company SkyDrive Inc. demonstrated the progress made since then when it held a press conference to spotlight its prototype vehicle and show reporters a video taken three days earlier of the craft undergoing a piloted test flight in front of staff and investors.

The sleek, single-seat eVTOL, dubbed SD-03 (SkyDrive third generation), resembles a hydroplane on skis and weighs in at 400 kilograms. The body is made of carbon fiber, aluminum, and other materials that have been chosen for their weight, balance, and durability. The craft measures 4 meters in length and width, and is about 2 meters tall. During operation, the nose of the craft is lit with white LED lights; red lights run around the bottom to enable the vehicle to be seen in the sky and to distinguish the direction the craft is flying. 

The SD-03 uses four pairs of electrically driven coaxial rotors, with one pair mounted at each quadrant. These enable a flight time of 5 to 10 minutes at speeds up to 50 kilometers per hour. “The propellers on each pair counter-rotate,” explains Nobuo Kishi, SkyDrive’s chief technology officer. “This cancels out propeller torque.” It also makes for a compact design, “so all the craft needs to land is the space of two parked cars,” he adds.
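Taken at face value, those endurance and speed figures imply a hard ceiling on range. A back-of-envelope estimate, optimistically assuming the entire flight is spent at top speed:

$$ \text{range} \approx v \, t = 50\ \text{km/h} \times \frac{5\ \text{to}\ 10}{60}\ \text{h} \approx 4\ \text{to}\ 8\ \text{km} $$

which is consistent with the short, sub-ten-minute hops SkyDrive is targeting for its planned taxi service.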

But when it came to providing more details of the drive system, Kishi declined, saying it’s a trade secret that’s a source of competitive advantage. The same goes for the craft’s energy storage system: Other than disclosing the fact that the flying taxi currently uses a lithium polymer battery, he’s also keeping details about the powertrain confidential.

Underlying this need for secrecy is the technology’s restricted capabilities. “Total energy that can be stored in a battery is a major limiting factor here,” says Steve Wright, Senior Research Fellow in Avionics and Aircraft Systems at the University of the West of England. “Which is why virtually every one of these projects is aiming at the air-taxi market within megacities.”

The SkyDrive video shows the SD-03 take off vertically, then engage in maneuvers as it hovers up to two meters off the ground inside a netted enclosure. The craft is shown moving about at walking speed for roughly 4 minutes before landing on a designated spot. For monitoring purposes and backup, engineers used an additional computer-assisted control system to ensure the craft’s stability and safety.

Speaking at the press conference, Tomohiro Fukuzawa, SkyDrive’s CEO, estimated there are currently as many as 100 flying car projects underway around the world, “but only a few have succeeded with someone on board,” he said.

He went on to note that Japan lags behind other countries in the aviation industry but excels in manufacturing cars. Given the similarities between cars—especially electric cars—and eVTOLs, he believes Japan can compete with companies in the United States, Europe, and China that are also developing eVTOLs.

SkyDrive’s advances have encouraged new venture capital investors to come on board and nearly triple investment to a total of 5.9 billion yen ($56 million). Original investors include large corporations that saw an opportunity to get in on the ground floor of a promising new industry backed by government. One investor, NEC, is aiming to create more options for its air-traffic management systems, while Japan’s largest oil company, Eneos, is interested in developing electric charging stations for all kinds of electric vehicles.

Photo: John Boyd

SkyDrive’s Cargo Drone (left) and the SD-03 eVTOL.

In May, SkyDrive unveiled a drone for commercial use that is based on the same drive and power systems as the SD-03. Named the Cargo Drone, it’s able to transport payloads of up to 30 kg and can be preprogrammed to fly autonomously or be piloted manually. It will be operated as a service by SkyDrive, starting at a minimum monthly rental charge of 380,000 yen ($3,600) that rises according to the purpose and frequency of use. 

Kishi says the drone is designed to work within a 3 km range in locations that are difficult or time-consuming to get to by road. For instance, Obayashi Corp., one of Japan’s big five construction companies and an investor in SkyDrive, has been testing the Cargo Drone to autonomously deliver materials like sandbags and timber to a remote, hard-to-reach location.

Fukuzawa established SkyDrive in 2018 after leaving Toyota Motor and working with Cartivator, a group of volunteer engineers interested in developing flying cars. SkyDrive now has a staff of fifty.

Also in 2018, the Japanese government formed the Public-Private Conference for Air Mobility made up of private companies, universities, and government ministries. The stated aim was to make flying vehicles a reality by 2023. Tomohiko Kojima of Japan’s Civil Aviation Bureau told Spectrum that since the Conference’s formation, the Ministry of Land, Infrastructure, Transport and Tourism has held a number of meetings with members to discuss matters like airspace for eVTOL use, flight rules, and permitted altitudes. “And last month, the Ministry established a working-level group to discuss certification standards for eVTOLs, a standard for pilots, and operational safety standards,” Kojima added.

Fukuzawa is also targeting 2023 to begin taxi services (single passenger and pilot) in the Osaka Bay area, flying between locations like Kansai and Kobe airports and tourist attractions such as Universal Studios Japan. These flights will take less than ten minutes—a practical nod to the limitations of the battery energy storage system.

“What SkyDrive is proposing is entirely do-able,” says Wright. “Almost all rotor-only eVTOL projects are limited to sub-30-minute endurance, which, with safety reserves, equates to about 10 to 20 minutes of flying.”

Yi Chao likes to describe himself as an “armchair oceanographer” because he got incredibly seasick the one time he spent a week aboard a ship. So it’s maybe not surprising that the former NASA scientist has a vision for promoting remote study of the ocean on a grand scale by enabling underwater drones to recharge on the go using his company’s energy-harvesting technology.

Many of the robotic gliders and floating sensor stations currently monitoring the world’s oceans are effectively treated as disposable devices, because the research community has limited ship time and funding for retrieving drones after they’ve accomplished their mission of beaming data back home. That’s not only a waste of money, but may also contribute to a growing assortment of abandoned lithium-ion batteries polluting the ocean with their leaking toxic materials—a decidedly unsustainable approach to studying the secrets of the underwater world.

“Our goal is to deploy our energy-harvesting system to use renewable energy to power those robots,” says Chao, president and CEO of the startup Seatrec. “We’re going to save one battery at a time, so hopefully we’re not going to dispose of more toxic batteries in the ocean.”

Chao’s California-based startup claims that its SL1 Thermal Energy Harvesting System can already cut the cost of using robotic probes for oceanographic data collection by an order of magnitude. The startup is working on adapting its system to work with autonomous underwater gliders. And it has partnered with defense giant Northrop Grumman to develop an underwater recharging station for oceangoing drones that incorporates Northrop Grumman’s self-insulating electrical connector, which is capable of operating while the powered electrical contacts are submerged.

Seatrec’s energy-harvesting system works by taking advantage of how certain substances expand as they transition from solid to liquid, or from liquid to gas, when they heat up. The company’s technology harnesses the pressure changes that result from such phase changes to generate electricity.

Image: Seatrec

To make the phase changes happen, Seatrec’s solution taps the temperature differences between warmer water at the ocean surface and colder water at the ocean depths. Even a relatively simple robotic probe can generate additional electricity by changing its buoyancy to either float at the surface or sink down into the colder depths.
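As a sanity check on what such a system can harvest, thermodynamics caps the efficiency of any heat engine operating between the warm surface and the cold depths. Assuming, say, a 25 °C (298 K) surface and 4 °C (277 K) deep water (illustrative values, not Seatrec’s numbers):

$$ \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} \approx 1 - \frac{277\ \text{K}}{298\ \text{K}} \approx 7\% $$

Real phase-change harvesters capture only a fraction of that ideal bound, but a profiling float’s energy budget per dive cycle is modest, on the order of watt-hours, so even a small harvest each cycle can be enough to keep the instrument running.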

By attaching an external energy-harvesting module, Seatrec has already begun transforming robotic probes into assets that can be recharged and reused more affordably than sending out a ship each time to retrieve the probes. This renewable energy approach could keep such drones going almost indefinitely barring electrical or mechanical failures. “We just attach the backpack to the robots, we give them a cable providing power, and they go into the ocean,” Chao explains. 

The early buyers of Seatrec’s products are primarily academic researchers who use underwater drones to collect oceanographic data. But the startup has also attracted military and government interest. It has already received small business innovation research contracts from both the U.S. Office of Naval Research and National Oceanic and Atmospheric Administration (NOAA).

Seatrec has also won two $10,000 prizes under the Powering the Blue Economy: Ocean Observing Prize administered by the U.S. Department of Energy and NOAA. The prizes awarded during the DISCOVER Competition phase back in March 2020 included one prize split with Northrop Grumman for the joint Mission Unlimited UUV Station concept. The startup and defense giant are currently looking for a robotics company to partner with for the DEVELOP Competition phase of the Ocean Observing Prize that will offer a total of $3 million in prizes.

In the long run, Seatrec hopes its energy-harvesting technology can support commercial ventures such as the aquaculture industry that operates vast underwater farms. The technology could also support underwater drones carrying out seabed surveys that pave the way for deep sea mining ventures, although those are not without controversy because of their projected environmental impacts.

Among all the possible applications, Chao seems especially enthusiastic about the prospect of Seatrec’s renewable power technology enabling underwater drones and floats to collect oceanographic data for much longer periods of time. He spent the better part of two decades working at the NASA Jet Propulsion Laboratory in Pasadena, Calif., where he helped develop a satellite designed for monitoring the Earth’s oceans. But he and the JPL engineering team that developed Seatrec’s core technology believe that swarms of underwater drones can provide a continuous monitoring network to truly begin understanding the oceans in depth.

The COVID-19 pandemic has slowed production and delivery of Seatrec’s products somewhat given local shutdowns and supply chain disruptions. Still, the startup has been able to continue operating in part because it’s considered to be a defense contractor that is operating an essential manufacturing facility. Seatrec’s engineers and other staff members are working in shifts to practice social distancing.

“Rather than building one or two for the government, we want to scale up to build thousands, hundreds of thousands, hopefully millions, so we can improve our understanding and provide that data to the community,” Chao says. 
