
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, WA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.

Kuka has just announced the results of its annual Innovation Award. From an initial batch of 30 applicants, five teams reached the finals (we were part of the judging committee). The five finalists worked for nearly a year on their applications, which they demonstrated this week at the Medica trade show in Düsseldorf, Germany. And the winner of the €20,000 prize is...Team RoboFORCE, led by the STORM Lab in the U.K., which developed a “robotic magnetic flexible endoscope for painless colorectal cancer screening, surveillance, and intervention.”

The system could improve colonoscopy procedures by reducing pain and discomfort as well as other risks such as bleeding and perforation, according to the STORM Lab researchers. It uses a magnetic field to control the endoscope, pulling rather than pushing it through the colon.

The other four finalists also presented some really interesting applications—you can see their videos below.

“Because we were so pleased with the high quality of the submissions, we will have next year’s finals again at the Medica fair, and the challenge will be named ‘Medical Robotics’,” says Rainer Bischoff, vice president for corporate research at Kuka. He adds that the selected teams will again use Kuka’s LBR Med robot arm, which is “already certified for integration into medical products and makes it particularly easy for startups to use a robot as the main component for a particular solution.”

Applications are now open for Kuka’s Innovation Award 2020. You can find more information on how to enter here. The deadline is 5 January 2020.

[ Kuka ]

Oh good, Aibo needs to be fed now.

You know what comes next, right?

[ Aibo ]

Your cat needs this robot.

It's about $200 on Kickstarter.

[ Kickstarter ]

Enjoy this tour of the Skydio offices courtesy of Skydio 2, which runs into not even one single thing.

If any Skydio employees had important piles of papers on their desks, well, they don’t anymore.

[ Skydio ]

Artificial intelligence is everywhere nowadays, but what exactly does it mean? We asked a group of MIT computer science grad students and postdocs how they personally define AI.

“When most people say AI, they actually mean machine learning, which is just pattern recognition.” Yup.

[ MIT ]

Using event-based cameras, this drone control system can track attitude at 1600 degrees per second (!).

[ UZH ]

Introduced at CES 2018, Walker is an intelligent humanoid service robot from UBTECH Robotics. Below are the latest features and technologies used during our latest round of development to make Walker even better.

[ Ubtech ]

Introducing the Alpha Prime by #VelodyneLidar, the most advanced lidar sensor on the market! Alpha Prime delivers an unrivaled combination of field-of-view, range, high-resolution, clarity and operational performance.

Performance looks good, but don’t expect it to be cheap.

[ Velodyne ]

Ghost Robotics’ Spirit 40 will start shipping to researchers in January of next year.

[ Ghost Robotics ]

Unitree is about to ship the first batch of their AlienGo quadrupeds as well:

[ Unitree ]

Mechanical engineering’s Sarah Bergbreiter discusses her work on micro robotics, how they draw inspiration from insects and animals, and how tiny robots can help humans in a variety of fields.

[ CMU ]

Learning contact-rich, robotic manipulation skills is a challenging problem due to the high-dimensionality of the state and action space as well as uncertainty from noisy sensors and inaccurate motor control. To combat these factors and achieve more robust manipulation, humans actively exploit contact constraints in the environment. By adopting a similar strategy, robots can also achieve more robust manipulation. In this paper, we enable a robot to autonomously modify its environment and thereby discover how to ease manipulation skill learning. Specifically, we provide the robot with fixtures that it can freely place within the environment. These fixtures provide hard constraints that limit the outcome of robot actions. Thereby, they funnel uncertainty from perception and motor control and scaffold manipulation skill learning.

[ Stanford ]

Since 2016, Verity's drones have completed more than 200,000 flights around the world. Completely autonomous, client-operated and designed for live events, Verity is making the magic real by turning drones into flying lights, characters, and props.

[ Verity ]

To monitor and stop the spread of wildfires, University of Michigan engineers developed UAVs that could find, map and report fires. One day UAVs like this could work with disaster response units, firefighters and other emergency teams to provide real-time accurate information to reduce damage and save lives. For their research, the University of Michigan graduate students won first place at a competition for using a swarm of UAVs to successfully map and report simulated wildfires.

[ University of Michigan ]

Here’s an important issue that I haven’t heard talked about all that much: How first responders should interact with self-driving cars.

“To put the car in manual mode, you must call Waymo.” Huh.

[ Waymo ]

Here’s what Gitai has been up to recently, from a Humanoids 2019 workshop talk.

[ Gitai ]

The latest CMU RI seminar comes from Girish Chowdhary at the University of Illinois at Urbana-Champaign on “Autonomous and Intelligent Robots in Unstructured Field Environments.”

What if a team of collaborative autonomous robots grew your food for you? In this talk, I will discuss some key advances in robotics, machine learning, and autonomy that will one day enable teams of small robots to grow food for you in your backyard in a fundamentally more sustainable way than modern mega-farms! Teams of small aerial and ground robots could be a potential solution to many of the serious problems that modern agriculture is facing. However, fully autonomous robots that operate without supervision for weeks, months, or entire growing seasons are not yet practical. I will discuss my group’s theoretical and practical work towards the underlying challenging problems in robotic systems, autonomy, sensing, and learning. I will begin with our lightweight, compact, and autonomous field robot TerraSentia and the recent successes of this type of undercanopy robot for high-throughput phenotyping with deep learning-based machine vision. I will also discuss how to make a team of autonomous robots learn to coordinate to weed large agricultural farms under partial observability. These direct applications will help me make the case for the type of reinforcement learning and adaptive control that are necessary to usher in the next generation of autonomous field robots that learn to solve complex problems in harsh, changing, and dynamic environments. I will then end with an overview of our new MURI, in which we are working towards developing AI and control that leverages neurodynamics inspired by the Octopus brain.

[ CMU RI ]

This week at MIT, academics and industry officials compared notes, studies, and predictions about AI and the future of work. During the discussions, an insurance company executive shared details about one AI program that rolled out at his firm earlier this year. A chatbot the company introduced, the executive said, now handles 150,000 calls per month.

Later in the day, a panelist—David Fanning, founder of PBS’s Frontline—remarked that this statistic is emblematic of broader fears he saw when reporting a new Frontline documentary about AI. “People are scared,” Fanning said of the public’s AI anxiety.

Fanning was part of a daylong symposium about AI’s economic consequences—good, bad, and otherwise—convened by MIT’s Task Force on the Work of the Future.

“Dig into every industry, and you’ll find AI changing the nature of work,” said Daniela Rus, director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). She cited recent McKinsey research that found 45 percent of the work people are paid to do today can be automated with currently available technologies. Those activities, McKinsey found, represent some US $2 trillion in wages.

However, the threat of automation—whether by AI or other technologies—isn’t as new as technologists on America’s coasts seem to believe, said panelist Fred Goff, CEO of Jobcase, Inc.

“If you live in Detroit or Toledo, where I come from, technology has been displacing jobs for the last half-century,” Goff said. “I don’t think that most people in this country have the increased anxiety that the coasts do, because they’ve been living this.”

Goff added that the challenge AI poses for the workforce is not, as he put it, “getting coal miners to code.” Rather, he said, as AI automates some jobs, it will also open opportunities for “reskilling” that may have nothing to do with AI or automation. He touted trade schools—teaching skills like welding, plumbing, and electrical work—and certification programs for sales industry software packages like Salesforce. 

On the other hand, Krishna Andavolu, senior correspondent for Vice Media and a documentarian who reported another recent program on AI, said “reskilling” may not be an easy answer.

“People in rooms like this … don’t realize that a lot of people don’t want to work that much,” Andavolu said. “They’re not driven by passion for their career, they’re driven by passion for life. We’re telling a lot of these workers that they need to reskill. But to a lot of people that sounds like, ‘I’ve got to work twice as hard for what I have now.’ That sounds scary. We underestimate that at our peril.”

Part of the problem with “reskilling,” Andavolu said, is that some high-growth industries involve caregiving for seniors and in medical facilities—roles which are traditionally considered “feminized” careers. Destigmatizing these jobs, and increasing the pay to match the salaries of displaced jobs like long-haul truck drivers, is another challenge.

Daron Acemoglu, MIT Institute Professor of Economics, faulted the comparatively slim funding of academic research into AI.

“There is nothing preordained about the progress of technology,” he said. Computers, the Internet, antibiotics, and sensors all grew out of government and academic research programs. What he called the “blue-sky thinking” of non-corporate AI research can also develop applications that are not purely focused on maximizing profits.

American companies, Acemoglu said, get tax breaks for capital R&D—but not for developing new technologies for their employees. “We turn around and [tell companies], ‘Use your technologies to empower workers,’” he said. “But why should they do that? Hiring workers is expensive in many ways. And we’re subsidizing capital.”

Said Sarita Gupta, director of the Ford Foundation’s Future of Work(ers) Program, “Low and middle income workers have for over 30 years been experiencing stagnant and declining pay, shrinking benefits, and less power on the job. Now technology is brilliant at enabling scale. But the question we sit with is—how do we make sure that we’re not scaling these longstanding problems?”

Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said AI may not reduce the number of jobs available in the workplace today. But the quality of those jobs is another story. He cited the Dutch economist Jan Tinbergen, who decades ago said that “Inequality is a race between technology and education.”

McAfee said, ultimately, the time to solve the economic problems AI poses for workers in the United States is when the U.S. economy is doing well—like right now.

“We do have the wind at our backs,” said Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future.

“We have some breathing room right now,” McAfee agreed. “Economic growth has been pretty good. Unemployment is pretty low. Interest rates are very, very low. We might not have that war chest in the future.”

Hand force estimation is critical for applications that involve physical human-machine interactions for force monitoring and machine control. Force Myography (FMG) is a potential technique to be used for estimating hand force/torque. The FMG signals reflect the volumetric changes in the arm muscles due to muscle contraction or expansion. This paper investigates the feasibility of employing force-sensing resistors (FSRs) worn on the arm to measure the FMG signals for isometric force/torque estimation. Nine participants were recruited in this study and were asked to exert isometric force along three perpendicular axes, torque about the same three axes, and force and torque simultaneously. During the tests, the isometric force and torque were measured using a 6-degree-of-freedom (DoF) (i.e., force in three axes and torque around the same axes) load cell for ground truth labels, whereas the FMG signals were recorded using a total of 60 FSRs embedded into four bands worn at different locations on the arm. A two-stage regression strategy was employed to enhance the performance of the FMG bands: three regression algorithms, namely general regression neural network (GRNN), support vector regression (SVR), and random forest regression (RF), were employed in the first stage, and GRNN was used in the second stage. Two cases were considered to explore the performance of the FMG bands in estimating: (1) 3-DoF force and 3-DoF torque at once and (2) 6-DoF force and torque. In addition, the impact of sensor placement and the spatial coverage of FMG measurements were studied. This preliminary investigation demonstrates the promising potential of FMG to estimate multi-DoF isometric force/torque. Specifically, R² accuracies of 0.83 for the 3-DoF force, 0.84 for the 3-DoF torque, and 0.77 for the combined force and torque (6-DoF) regressions were obtained using the four bands on the arm in cross-trial evaluation.
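
To make the two-stage strategy concrete, here is a minimal sketch of the idea in Python. Synthetic data stands in for the 60-channel FSR signals and 6-DoF load-cell labels, and a simple Nadaraya-Watson kernel regressor stands in for the GRNN (scikit-learn does not ship one); this illustrates the structure of the pipeline, not the authors’ implementation.

```python
# Two-stage regression sketch: stage 1 produces intermediate force/torque estimates
# from several regressors; stage 2 fuses them with a GRNN-like kernel regressor.
# All data here is synthetic and the GRNN stand-in is a Nadaraya-Watson estimator.
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))                  # 60 FSR channels (synthetic)
W = rng.normal(size=(60, 6))
y = X @ W + 0.1 * rng.normal(size=(500, 6))     # 6-DoF force/torque labels (synthetic)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: independent regressors, each predicting all six DoFs.
stage1 = [
    MultiOutputRegressor(SVR(kernel="rbf", C=10.0)),
    RandomForestRegressor(n_estimators=100, random_state=0),
]
for model in stage1:
    model.fit(X_tr, y_tr)
Z_tr = np.hstack([m.predict(X_tr) for m in stage1])
Z_te = np.hstack([m.predict(X_te) for m in stage1])

def nadaraya_watson(Z_train, y_train, Z_query, sigma=1.0):
    """GRNN-style stage 2: Gaussian-weighted average of the training targets."""
    d2 = ((Z_query[:, None, :] - Z_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1, keepdims=True)

y_hat = nadaraya_watson(Z_tr, y_tr, Z_te)
ss_res = ((y_te - y_hat) ** 2).sum(axis=0)
ss_tot = ((y_te - y_te.mean(axis=0)) ** 2).sum(axis=0)
print("per-DoF R^2:", 1 - ss_res / ss_tot)
```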

Highly stretchable sensors that can detect large strains are useful in deformable systems, such as soft robots and wearable devices. For stretchable strain sensors, two types of sensing methods exist, namely, resistive and capacitive. Capacitive sensing has several advantages over the resistive type, such as high linearity, repeatability, and low hysteresis. However, the sensitivity (gauge factor) of capacitive strain sensors is theoretically limited to 1, which is much lower than that of resistive-type sensors. The objective of this study is to improve the sensitivity of highly stretchable capacitive strain sensors by integrating hierarchical auxetic structures into them. Auxetic structures have a negative Poisson's ratio, which produces a larger change in capacitance under applied strain and thereby improves sensitivity. In order to prove this concept, we fabricate and characterize two sensor samples with planar dimensions of 60 mm × 16 mm. The samples have an acrylic elastomer (3M, VHB 4905) as the dielectric layer and a liquid metal (eutectic gallium-indium) for electrodes. On both sides of the sensor samples, hierarchical auxetic structures made of a silicone elastomer (Dow Corning, Sylgard 184) are attached. The samples are tested under strains up to 50%, and the experimental results show that the sensitivity of the sensor with the auxetic structure exceeds the theoretical limit. In addition, it is observed that the sensitivity of this sensor is roughly two times higher than that of a sensor without the auxetic structure, while maintaining high linearity (R² = 0.995), repeatability (≥10⁴ cycles), and low hysteresis.
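
As a rough back-of-the-envelope illustration of why a negative Poisson’s ratio can push a capacitive gauge factor past 1, here is a simple parallel-plate model in Python. The geometry and the Poisson’s ratios below are assumptions for illustration, not values or analysis from the paper.

```python
# Illustrative parallel-plate model: C = eps * (l * w) / t.  Axial strain e stretches
# l by (1 + e); the width follows w * (1 - nu_w * e) and the thickness t * (1 - nu_t * e).
# For a conventional elastomer nu_w = nu_t ~ 0.5, the transverse terms cancel and the
# gauge factor is capped near 1.  An auxetic skin makes the effective in-plane nu_w
# negative, so the width grows with strain and the gauge factor exceeds 1.
import numpy as np

def gauge_factor(strain, nu_w, nu_t=0.5):
    c_ratio = (1 + strain) * (1 - nu_w * strain) / (1 - nu_t * strain)
    return (c_ratio - 1) / strain

e = 0.5  # 50% strain, the upper end of the reported tests
print("plain elastomer (nu_w = 0.5):", round(gauge_factor(e, nu_w=0.5), 2))   # ~1.0
print("auxetic skin   (nu_w = -0.3):", round(gauge_factor(e, nu_w=-0.3), 2))  # > 1
```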

We’ve all seen this moment in the movies—on board, say, a submarine or a spaceship, the chief engineer will suddenly cock their ear to listen to the background hum and say “something’s wrong.” Bosch is hoping to teach a computer how to do that trick in real life, and is going all the way to the International Space Station to test its technology. 

Considering the amount of data that’s communicated through non-speech sound, humans do a remarkably poor job of leveraging sound information. We’re very good at reacting to sounds (especially new or loud sounds) over relatively short timescales, but beyond that, our brains are great at just classifying most ongoing sounds as “background” and ignoring them. Computers, which have the patience we generally lack, seem like they’d be much better at this, but the focus of most developers has been on discrete sound events (like smart home devices detecting smoke alarms or breaking glass) rather than longer term sound patterns. 

Why should those of us who aren’t movie characters care about how patterns of sound change over time? The simple reason is that our everyday lives are full of machines that both make a lot of noise and tend to break expensively from time to time. Right now, I’m listening to my washing machine, which makes some weird noises. I don’t have a very good idea of whether those weird noises are normal weird noises, and more to the point, I have an even worse idea of whether it was making the same weird noises the last time I ran it. Knowing whether a machine is making weirder noises than it used to could clue me in to an emerging problem, one that I could solve through cheap preventative maintenance rather than an expensive repair later on.

Bosch, the German company that almost certainly makes a significant percentage of the parts in your car as well as appliances, power tools, industrial systems, and a whole bunch of other stuff, is trying to figure out how it can use deep learning to identify and track the noises that machines make over time. The idea is to be able to identify subtle changes in sound to warn of pending problems before they happen. And one group of people very interested in getting advance warning of problems are the astronauts floating around in the orbiting bubble of life that is the ISS.

The SoundSee directional microphone array is Bosch’s payload for NASA’s Astrobee robot, which we’ve written about extensively. Astrobee had its first autonomous flight aboard the ISS just last month, and after the robot finishes getting checked out and calibrated, SoundSee will take up residence in one of Astrobee’s modular payload bays. Once installed, it’ll go on a variety of missions, both passively recording audio as Astrobee goes about its business and recording targeted audio of specific systems.

“These kinds of subtle, long-term patterns and variations could give us surprisingly rich information about system degradation”

One of SoundSee’s first tasks will be to make sound intensity surveys of the ISS, a fairly dull job that astronauts currently spend about two hours doing by hand every few months. Ideally, SoundSee and Astrobee will be able to automate this task. But the more interesting mission (especially for Earth applications) will be the acoustic monitoring of equipment, listening to the noises made by systems like the Environmental Control and Life Support System (ECLSS) and the Treadmill with Vibration Isolation and Stabilization (TVIS).

The audio that SoundSee records with its microphone array will be sent back down to Bosch, where researchers will use deep audio analytics to filter out background noise as well as the noise of the robot itself, with the goal of being able to isolate the sound being made by specific systems. By using deep learning algorithms trained on equivalent systems on Earth, Bosch hopes that SoundSee will be able to provide a sort of “internal snapshot” of how that system is functioning. Or as the case may be, not functioning, in plenty of time for astronauts to make repairs. 

“We’re working on unsupervised anomaly detection algorithms,” explains Sam Das, principal researcher and SoundSee project lead at Bosch, “and we have some deep learning-based approaches that could detect a gradual or sudden change of the machine’s operating characteristics.” SoundSee won’t be able to predict everything, he says, but “it will be a line of defense to track slow deviation from normal dynamical models, and tell us, ‘Hey, you should go check this out.’ It may be a false alarm, but our system will be trained to listen for suspicious behavior. These kinds of subtle, long-term patterns and variations could give us surprisingly rich information about system degradation. That’s the ultimate goal, that we’d be able to identify these things way before any other sensing capability.”
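
For a sense of what “tracking slow deviation from normal” can look like in practice, here is a minimal spectral-baseline sketch in Python: it learns band energies from known-good clips and scores new clips by how far they drift from that baseline. This is a deliberately simple stand-in, not Bosch’s deep-learning approach, and the sample rate, band count, and synthetic audio are assumptions.

```python
# Minimal sketch of long-term acoustic monitoring: learn the spectral "fingerprint"
# of a healthy machine from known-good clips, then score new clips by how far their
# band energies drift from that baseline.
import numpy as np
from scipy.signal import welch

FS = 16_000  # sample rate in Hz (assumed)

def band_energies(clip, fs=FS, n_bands=32):
    """Log-scaled average power in n_bands equal-width frequency bands."""
    _, psd = welch(clip, fs=fs, nperseg=1024)
    return np.log1p(np.array([band.mean() for band in np.array_split(psd, n_bands)]))

class DriftMonitor:
    def __init__(self):
        self._baseline = []

    def add_baseline(self, clip):
        self._baseline.append(band_energies(clip))

    def score(self, clip):
        """Mean absolute z-score of a clip's band energies against the baseline."""
        ref = np.asarray(self._baseline)
        mu, sigma = ref.mean(axis=0), ref.std(axis=0) + 1e-9
        return float(np.abs((band_energies(clip) - mu) / sigma).mean())

# Synthetic demo: a steady 120 Hz hum is "healthy"; a new 3 kHz rattle is not.
t = np.arange(FS) / FS
hum = lambda: np.sin(2 * np.pi * 120 * t) + 0.1 * np.random.randn(FS)
monitor = DriftMonitor()
for _ in range(50):
    monitor.add_baseline(hum())
print("healthy clip score :", monitor.score(hum()))                                      # near the baseline spread
print("rattling clip score:", monitor.score(hum() + 0.3 * np.sin(2 * np.pi * 3000 * t)))  # much larger
```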

Das says that you can think of SoundSee as analogous to training a vision-based system to analyze someone walking. First, you’d train the system on what a normal walking gait looks like. Then, you’d train the system to be able to identify when someone falls. Eventually, the system would be able to identify stumbles, then muscle cramps, and the end goal would be a system that could say, ‘it looks like one of your muscles might be just starting to cramp up, better take it easy!’

Photo: Bosch

The reason to put the SoundSee system on a mobile robot, rather than use a distributed array of stationary microphones, is to be able to combine localization information with the audio data, which Das says provides much more useful data. “A moving platform means that you can localize sources of sound. Now, we can fuse the information from audio we’re getting at different points, aggregate that information along the motion trajectory, and then take that a step further by creating a sound map of the environment.”

This concept extends to operations on Earth as well, and Das sees one of the first potential applications of the SoundSee technology as warehouse environments full of mobile robots. “There are a lot of features of this experiment that could be immediately applied on a manufacturing floor or warehouse where you have ground robots moving around—think of deploying SoundSee for each machine, and you’d have a virtual inspector for physical infrastructure monitoring.”

Longer term, it’s pretty obvious where this kind of technology is destined, especially coming from Bosch, the world’s largest automotive parts supplier. A SoundSee-like system in your car, already trained on what normal operation sounds like, could predict maintenance needs and precisely identify emerging mechanical issues, almost certainly before they become audible to you, and very likely way before you’d have any other way of knowing.

“Sound can give you so much more information about the environment,” says Das. “From the HVAC system in your house to the engine in your car, the operating state of machines and their functional health can be revealed by audio patterns.” And all we have to do is listen.

This paper presents results on recent developments pertaining to the coordinated motion control of a fleet of marine robotic vehicles. Specifically, we address the Cooperative Moving Path Following (CMPF) motion control problem, which consists of steering the robotic vehicles along a priori specified geometric paths that jointly move according to a target frame, while achieving a pre-defined coordination objective. To this end, each vehicle needs to communicate with its neighbors in order to cooperatively solve the CMPF task. Two distinct Moving Path Following control strategies, designed for robustness on the moving path following task, are proposed. Experimental results demonstrating the application of CMPF to marine vehicles in the context of source localization and tracking of underwater targets are presented, backed by stability and convergence guarantees.

Within the field of robotics and autonomous systems where there is a human in the loop, intent recognition plays an important role. This is especially true for wearable assistive devices used for rehabilitation, particularly post-stroke recovery. This paper reports results on the use of tactile patterns to detect weak muscle contractions in the forearm while at the same time associating these patterns with the muscle synergies during different grips. To investigate this concept, a series of experiments with healthy participants was carried out using a tactile arm brace (TAB) on the forearm while performing four different types of grip. The expected force patterns were established by analysing the muscle synergies of the four grip types and the forearm physiology. The results showed that the tactile signatures of the forearm recorded on the TAB align with the anticipated force patterns. Furthermore, a linear separability of the data across all four grip types was identified. Using the TAB data, machine learning algorithms achieved a 99% classification accuracy. The TAB results were highly comparable to those of a similar commercial intent recognition system based on surface electromyography (sEMG) sensing.

Robots for underwater exploration are typically comprised of rigid materials and driven by propellers or jet thrusters, which consume a significant amount of power. Large power consumption necessitates a sizeable battery, which limits the ability to design a small robot. Propellers and jet thrusters generate considerable noise and vibration, which is counterproductive when studying acoustic signals or studying timid species. Bioinspired soft robots provide an approach for underwater exploration in which the robots are comprised of compliant materials that can better adapt to uncertain environments and take advantage of design elements that have been optimized in nature. In previous work, we demonstrated that frameless DEAs could use fluid electrodes to apply a voltage to the film and that effective locomotion in an eel-inspired robot could be achieved without the need for a rigid frame. However, the robot required an off-board power supply and a non-trivial control signal to achieve propulsion. To develop an untethered soft swimming robot powered by DEAs, we drew inspiration from the jellyfish and attached a ring of frameless DEAs to an inextensible layer to generate a unimorph structure that curves toward the passive side to generate power stroke, and efficiently recovers the original configuration as the robot coasts. This swimming strategy simplified the control system and allowed us to develop a soft robot capable of untethered swimming at an average speed of 3.2 mm/s and a cost of transport of 35. This work demonstrates the feasibility of using DEAs with fluid electrodes for low power, silent operation in underwater environments.

As useful as drones are up in the air, the process of getting them there tends to be annoying at best and dangerous at worst. Consider what it takes to launch something as simple as a DJI Mavic or a Parrot Anafi—you need to find a flat spot free of debris or obstructions, unfold the thing and let it boot up and calibrate and whatnot, stand somewhere safe(ish), and then get it airborne and high enough quickly enough to avoid hitting any people or things that you care about.

I’m obviously being a little bit dramatic here, but ground launching drones is certainly both time consuming and risky, and there are occasions where getting a drone into the air as quickly and as safely as possible is a priority. At IROS in Macau earlier this month, researchers from Caltech and NASA’s Jet Propulsion Laboratory (JPL) presented a prototype for a ballistically launched drone—a football-shaped foldable quadrotor that gets fired out of a cannon, unfolds itself, and then flies off.

Test launching the SQUID (Streamlined Quick Unfolding Investigation Drone) from a truck as shown in the video effectively demonstrates why this is more than a novelty: It would otherwise be very difficult to conventionally launch a quadrotor from a vehicle moving that fast. You can imagine how useful this would be for first responders, ships dealing with waves, or even other aircraft in flight.

Image: Caltech & NASA JPL A CAD model of the SQUID system showing (from left): ballistic configuration, multirotor configuration, and section view with a closer look at a hinge.

The prototype SQUID shown here weighs 530 grams and is about 27 centimeters long. Folded up, it’s just over 8 cm in diameter. SQUID gets its initial boost of 15 meters per second (referred to as “muzzle velocity” in the paper) from a pneumatic baseball pitching machine, which gives the drone an apex of about 10 m. Immediately after the drone exits the launcher, a nichrome wire heats up and burns through a monofilament line holding the arms in place. Driven by springs, the arms snap out in just 70 ms, while the aerodynamic body of the drone passively orients it into the airstream.

As soon as the motors spin up (after about 200 ms), SQUID automatically orients itself into a hovering attitude, and it can be controlled just like a normal quadrotor within less than 1 second of launch. Landing is a bit of a challenge, although apparently “it can safely land if the bottom touches the ground first at a low speed,” after which it’ll gently topple over.
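
Those launch numbers are easy to sanity-check with a point-mass model: at 15 m/s, a drag-free apex would be about 11.5 m, and a crude quadratic-drag term nudges that down toward the reported 10 m. The drag coefficient and frontal area in the sketch below are guesses for illustration, not values from the paper.

```python
# Back-of-the-envelope apex check for a 530 g projectile leaving the launcher at 15 m/s.
import math

m, v0, g = 0.530, 15.0, 9.81                 # kg, m/s, m/s^2 (from the article)
Cd, A, rho = 0.8, math.pi * 0.04**2, 1.225   # drag coefficient, frontal area (8 cm dia), air density -- assumed

print("no-drag apex:", round(v0**2 / (2 * g), 1), "m")

v, h, dt = v0, 0.0, 1e-4
while v > 0:
    drag = 0.5 * rho * Cd * A * v**2 / m     # deceleration due to drag (m/s^2)
    v -= (g + drag) * dt
    h += v * dt
print("with drag   :", round(h, 1), "m")
```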

SQUID’s design is easily scalable, and the researchers are currently working on both smaller (2-inch diameter) and larger (6-inch diameter) prototypes. The 6-inch SQUID would be large enough to carry a battery and payload enabling significantly more autonomy, including vision-based ballistic stabilization.

Image: Caltech & NASA JPL To test the drone, the researchers launched it from a vehicle moving at up to 50 mph.

We should mention that ballistically launched drones aren’t a completely new idea—we’ve seen a couple of examples in the past, like this very satisfying-sounding system from Raytheon. But getting something similar to work for a quadrotor is a bit more difficult, and as far as we know, nobody else has thought about applying this launch technique to drones for planetary exploration:

While the SQUID prototype, as outlined in this paper, has been designed for operation on Earth, the same concept is potentially adaptable to other planetary bodies, in particular Mars and Titan. The Mars helicopter, planned to deploy from the Mars 2020 rover, will provide a proof-of-concept for powered rotorcraft flight on the planet, despite the thin atmosphere. A rotorcraft greatly expands the data collection range of a rover, and allows access to sites that a rover would find impassible. However, the current deployment method for the Mars Helicopter from the underbelly of the rover reduces ground clearance, resulting in stricter terrain constraints. Additionally, the rover must move a significant distance away from the helicopter drop site before the helicopter can safely take off. The addition of a ballistic, deterministic launch system for future rovers or entry vehicles would isolate small rotorcraft from the primary mission asset, as well as enable deployment at longer distances or over steep terrain features.

“Design of a Ballistically-Launched Foldable Multirotor,” by Daniel Pastor, Jacob Izraelevitz, Paul Nadan, Amanda Bouman, Joel Burdick, and Brett Kennedy, from Caltech, JPL, and Olin College, was presented at IROS 2019 in Macau.

Humans perceive continuous high-dimensional information by dividing it into meaningful segments, such as words and units of motion. We believe that such unsupervised segmentation is also important for robots to learn topics such as language and motion. To this end, we previously proposed a hierarchical Dirichlet process–Gaussian process–hidden semi-Markov model (HDP-GP-HSMM). However, an important drawback of this model is that it cannot divide high-dimensional time-series data. Furthermore, low-dimensional features must be extracted in advance. Segmentation largely depends on the design of features, and it is difficult to design effective features, especially in the case of high-dimensional data. To overcome this problem, this study proposes a hierarchical Dirichlet process–variational autoencoder–Gaussian process–hidden semi-Markov model (HVGH). The parameters of the proposed HVGH are estimated through a mutual learning loop of the variational autoencoder and our previously proposed HDP-GP-HSMM. Hence, HVGH can extract features from high-dimensional time-series data while simultaneously dividing it into segments in an unsupervised manner. In an experiment, we used various motion-capture data to demonstrate that our proposed model estimates the correct number of classes and more accurate segments than baseline methods. Moreover, we show that the proposed method can learn latent space suitable for segmentation.

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world’s biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Labs, future home of what’s expected to be the United States’ first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. National Laboratories customers for Cerebras, the other being Lawrence Livermore National Laboratory.

The system “is the fastest AI computer,” says CEO and cofounder Andrew Feldman. He compared it with Google's TPU clusters (the 2nd of three generations of that company’s AI computers), noting that one of those “takes 10 racks and over 100 kilowatts to deliver a third of the performance of a single [CS-1] box.”

The CS-1 is designed to speed the training of novel and large neural networks, a process that can take weeks or longer. Powered by a 400,000-core, 1-trillion-transistor wafer-scale processor chip, the CS-1 should collapse that task to minutes or even seconds. However, Cerebras did not provide data showing this performance in terms of standard AI benchmarks such as the new MLPerf standards. Instead it has been wooing potential customers by having them train their own neural network models on machines at Cerebras.

This approach is not unusual, according to analysts. “Everybody runs their own models that they developed for their own business,” says Karl Freund, an AI analyst at Moor Insights & Strategies. “That’s the only thing that matters to buyers.”

Image: Cerebras Systems A blowout of the CS-1 shows that most of the system is devoted to powering and cooling the Wafer Scale Engine chip at the back left.

Cerebras also unveiled some details of the software side of the system. The software allows users to write their machine learning models using standard frameworks such as Pytorch and Tensorflow. It then sets about devoting variously-sized portions of the wafer-scale engine to layers of the neural network. How does it do this? By solving an optimization problem in order to ensure that the layers all complete their work at roughly the same pace and are contiguous with their neighbors. The result: Information can flow through the network without any holdups.
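
To get a feel for the placement idea, here is a toy allocator in Python that hands each layer a share of cores proportional to its per-sample compute, so that no layer becomes a bottleneck. The layer names and FLOP counts are made up, and this greedy proportional split is a simplification of the concept, not Cerebras’s actual compiler.

```python
# Toy illustration of wafer placement: give each layer cores in proportion to its
# per-sample compute, so every layer finishes a sample in roughly the same time and
# activations can stream through the pipeline without stalling.
TOTAL_CORES = 400_000

# Hypothetical per-layer compute costs (relative FLOPs per sample).
layer_flops = {"conv1": 4.0, "conv2": 9.0, "conv3": 9.0, "fc1": 2.0, "fc2": 0.5}

total = sum(layer_flops.values())
cores = {name: max(1, round(TOTAL_CORES * f / total)) for name, f in layer_flops.items()}

# With proportional allocation, time-per-sample (flops / cores) is nearly equal
# across layers, which is what keeps the pipeline free of holdups.
for name, flops in layer_flops.items():
    t = flops / cores[name]
    print(f"{name:5s}  cores={cores[name]:7d}  relative time/sample={t:.2e}")
```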

The software can perform that optimization problem across multiple computers, allowing a cluster of computers to act as one big machine. Cerebras has linked as many as 32 CS-1s together to get a roughly 32-fold performance increase. This is in contrast with the behavior of GPU-based clusters, says Feldman. “Today, when you cluster GPUs, you don't get the behavior of one big machine. You get the behavior of lots of little machines.”

Argonne has been working with Cerebras for two years, said Rick Stevens, its director for computing, in a press release. “By deploying the CS-1, we have dramatically shrunk training time across neural networks, allowing our researchers to be vastly more productive to make strong advances across deep learning research in cancer, traumatic brain injury, and other areas important to society today and in the years to come.”

The CS-1’s first application is in predicting cancer drug response as part of a U.S. Department of Energy and National Cancer Institute collaboration. It is also being used to help understand the behavior of colliding black holes and the gravitational waves they produce. A previous instance of that problem required 1024 out of 4392 nodes of the Theta supercomputer.

Conducting polymers, particularly poly(3,4-ethylenedioxythiophene) (PEDOT) and its complex with poly(styrene sulfonate) (PEDOT:PSS), provide a promising materials platform to develop soft actuators or artificial muscles. To date, PEDOT-based actuators are available in the field of bionics, biomedicine, smart textiles, microactuators, and other functional applications. Compared to other conducting polymers, PEDOT provides higher conductivity and chemical stability, lower density and operating voltages, and the dispersion of PEDOT with PSS further enriches performances in solubility, hydrophility, processability, and flexibility, making them advantageous in actuator-based applications. However, the actuators fabricated by PEDOT-based materials are still in their infancy, with many unknowns and challenges that require more comprehensive understanding for their current and future development. This review is aimed at providing a comprehensive understanding of the actuation mechanisms, performance evaluation criteria, processing technologies and configurations, and the most recent progress of materials development and applications. Lastly, we also elaborate on future opportunities for improving and exploiting PEDOT-based actuators.

There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.

UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.

Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.

This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.

For a bit more detail, we spoke with Albert Li via email.

Image: UC Berkeley UC Berkeley’s Cassie Cal getting ready to juggle.

IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?

Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that provides the best upwards field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.
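
For readers curious what that blind-prediction step might look like, here is a minimal sketch of a ballistic propagator: once the ball enters the camera’s blind zone, integrate a constant-gravity model (with a crude restitution term at the paddle) until the ball climbs back into view. The state, time step, blind-zone height, and bounce model are illustrative assumptions, not details of the Berkeley controller.

```python
# Minimal blind-prediction sketch: propagate the last tracked ball state with a
# constant-gravity model, apply a crude restitution bounce at the paddle plane,
# and stop once the ball is predicted to be visible again.
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity (m/s^2)

def propagate(pos, vel, dt):
    """One ballistic prediction step."""
    new_pos = pos + vel * dt + 0.5 * G * dt**2
    new_vel = vel + G * dt
    return new_pos, new_vel

def predict_until_visible(pos, vel, dt=0.005, min_height=0.3):
    """Predict blind until the ball has bounced and climbed back above the blind zone."""
    trajectory = [pos]
    bounced = False
    while not (bounced and pos[2] > min_height):
        pos, vel = propagate(pos, vel, dt)
        if pos[2] <= 0.0 and vel[2] < 0.0:          # paddle contact at the z = 0 plane (assumed)
            vel = vel * np.array([1.0, 1.0, -0.8])  # crude restitution model
            bounced = True
        trajectory.append(pos)
    return np.array(trajectory)

# Last tracked state before the ball enters the blind zone: 0.4 m up, falling at 1 m/s.
traj = predict_until_visible(np.array([0.0, 0.0, 0.4]), np.array([0.0, 0.0, -1.0]))
print(f"predicted {len(traj)} steps blind; re-acquire at z = {traj[-1][2]:.2f} m")
```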

What keeps Cassie from juggling indefinitely?

There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.

Would Cassie be able to juggle while walking (or hovershoe-ing)?

Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.

Can you give an example of a practical task that would be made possible by using a controller like this?

Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot is attempting to make a catch, but fails, letting the ball bounce off its hand, and then recovering the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.

Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing, like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable to individually manipulating each and every object on it.

You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next? 

We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).  

On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.

[ Hybrid Robotics Lab ]

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.  

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the type of conversational style used by a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)

Human: Men are all alike. 

Computer: IN WHAT WAY? 

Human: They're always bugging us about something or other. 

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE? 

Human: Well, my boyfriend made me come here. 

Computer: YOUR BOYFRIEND MADE YOU COME HERE 

Human: He says I'm depressed much of the time. 

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED 

Human: It's true. I am unhappy. 

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
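To give a flavor of what those 200-odd lines boil down to, here is a minimal, hypothetical Python sketch of Eliza-style keyword reflection; the rules, pronoun swaps, and canned responses below are illustrative stand-ins rather than Weizenbaum's original MAD-SLIP script.

```python
import random
import re

# Tiny, illustrative keyword script in the spirit of Eliza
# (hypothetical rules, not Weizenbaum's original MAD-SLIP script).
RULES = [
    (r"\bi am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.",
                     "Why is your {0} important to you?"]),
    (r"\balways\b", ["Can you think of a specific example?"]),
]

# Swap first and second person so the reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}


def reflect(fragment: str) -> str:
    fragment = fragment.strip(" .!?")
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(statement: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, statement, re.IGNORECASE)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups).upper()
    # No keyword found: bounce the conversation back, Rogerian style.
    return "PLEASE GO ON."


print(respond("They're always bugging us about something or other."))
# -> CAN YOU THINK OF A SPECIFIC EXAMPLE?
print(respond("I am unhappy."))
# -> e.g. HOW LONG HAVE YOU BEEN UNHAPPY?
```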

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.

Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

This experiment with Eliza made Weizenbaum question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.  

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

There will be a Mini-Cheetah Workshop (sponsored by Naver Labs) a year from now at IROS 2020 in Las Vegas. Mini-Cheetahs for everyone!

That’s just a rendering, of course, but this isn’t:

[ MCW ]

I was like 95 percent sure that the Urban Circuit of the DARPA SubT Challenge was going to be in something very subway station-y. Oops!

In the Subterranean (SubT) Challenge, teams deploy autonomous ground and aerial systems to attempt to map, identify, and report artifacts along competition courses in underground environments. The artifacts represent items a first responder or service member may encounter in unknown underground sites. This video provides a preview of the Urban Circuit event location. The Urban Circuit is scheduled for February 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ SubT ]

Researchers at SEAS and the Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.

To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties, that deform when an electric field is applied. By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.

Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.

[ Harvard ]

We present a system for fast and robust handovers with a robot character, together with a user study investigating the effect of robot speed and reaction time on perceived interaction quality. The system can match and exceed human speeds and confirms that users prefer human-level timing.

In a 3×3 user study, we vary the speed of the robot and add variable sensorimotor delays. We evaluate the social perception of the robot using the Robot Social Attribute Scale (RoSAS). Inclusion of a small delay, mimicking the delay of the human sensorimotor system, leads to an improvement in perceived qualities over both no delay and long delay conditions. Specifically, with no delay the robot is perceived as more discomforting and with a long delay, it is perceived as less warm.

[ Disney Research ]

When cars are autonomous, they’re not going to be able to pump themselves full of gas. Or, more likely, electrons. Kuka has the solution.

[ Kuka ]

This looks like fun, right?

[ Robocoaster ]

NASA is leading the way in the use of On-orbit Servicing, Assembly, and Manufacturing to enable large, persistent, upgradable, and maintainable spacecraft. This video was developed by the Advanced Concepts Lab (ACL) at NASA Langley Research Center.

[ NASA ]

The noisiest workshop at Humanoids last month (by far) was Musical Interactions With Humanoids, the end result of which was this:

[ Workshop ]

IROS is an IEEE event, and in furthering the IEEE mission to benefit humanity through technological innovation, IROS is doing a great job. But don’t take it from us – we are joined by IEEE President-Elect Professor Toshio Fukuda to find out a bit more about the impact events like IROS can have, as well as examine some of the issues around intelligent robotics and systems - from privacy to transparency of the systems at play.

[ IROS ]

Speaking of IROS, we hope you’ve been enjoying our coverage. We have already featured Harvard’s strange sea-urchin-inspired robot and a Japanese quadruped that can climb vertical ladders, with more stories to come over the next several weeks.

In the meantime, enjoy these 10 videos from the conference (as usual, we’re including the title, authors, and abstract for each—if you’d like more details about any of these projects, let us know and we’ll find out more for you).

"A Passive Closing, Tendon Driven, Adaptive Robot Hand for Ultra-Fast, Aerial Grasping and Perching," by Andrew McLaren, Zak Fitzgerald, Geng Gao, and Minas Liarokapis from the University of Auckland, New Zealand.

Current grasping methods for aerial vehicles are slow, inaccurate and they cannot adapt to any target object. Thus, they do not allow for on-the-fly, ultra-fast grasping. In this paper, we present a passive closing, adaptive robot hand design that offers ultra-fast, aerial grasping for a wide range of everyday objects. We investigate alternative uses of structural compliance for the development of simple, adaptive robot grippers and hands and we propose an appropriate quick release mechanism that facilitates an instantaneous grasping execution. The quick release mechanism is triggered by a simple distance sensor. The proposed hand utilizes only two actuators to control multiple degrees of freedom over three fingers and it retains the superior grasping capabilities of adaptive grasping mechanisms, even under significant object pose or other environmental uncertainties. The hand achieves a grasping time of 96 ms, a maximum grasping force of 56 N and it is able to secure objects of various shapes at high speeds. The proposed hand can serve as the end-effector of grasping capable Unmanned Aerial Vehicle (UAV) platforms and it can offer perching capabilities, facilitating autonomous docking.

"Unstructured Terrain Navigation and Topographic Mapping With a Low-Cost Mobile Cuboid Robot," by Andrew S. Morgan, Robert L. Baines, Hayley McClintock, and Brian Scassellati from Yale University, USA.

Current robotic terrain mapping techniques require expensive sensor suites to construct an environmental representation. In this work, we present a cube-shaped robot that can roll through unstructured terrain and construct a detailed topographic map of the surface that it traverses in real time with low computational and monetary expense. Our approach devolves many of the complexities of locomotion and mapping to passive mechanical features. Namely, rolling movement is achieved by sequentially inflating latex bladders that are located on four sides of the robot to destabilize and tip it. Sensing is achieved via arrays of fine plastic pins that passively conform to the geometry of underlying terrain, retracting into the cube. We developed a topography by shade algorithm to process images of the displaced pins to reconstruct terrain contours and elevation. We experimentally validated the efficacy of the proposed robot through object mapping and terrain locomotion tasks.

"Toward a Ballbot for Physically Leading People: A Human-Centered Approach," by Zhongyu Li and Ralph Hollis from Carnegie Mellon University, USA.

This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted where the ballbot physically led a blindfolded person to safely navigate in a cluttered environment.

"Achievement of Online Agile Manipulation Task for Aerial Transformable Multilink Robot," by Fan Shi, Moju Zhao, Tomoki Anzai, Keita Ito, Xiangyu Chen, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.

Transformable aerial robots are favorable in aerial manipulation tasks for their flexible ability to change configuration during the flight. By assuming robot keeping in the mild motion, the previous researches sacrifice aerial agility to simplify the complex non-linear system into a single rigid body with a linear controller. In this paper, we present a framework towards agile swing motion for the transformable multi-links aerial robot. We introduce a computational-efficient non-linear model predictive controller and joints motion primitive frame-work to achieve agile transforming motions and validate with a novel robot named HYRURS-X. Finally, we implement our framework under a table tennis task to validate the online and agile performance.

"Small-Scale Compliant Dual Arm With Tail for Winged Aerial Robots," by Alejandro Suarez, Manuel Perez, Guillermo Heredia, and Anibal Ollero from the University of Seville, Spain.

Winged aerial robots represent an evolution of aerial manipulation robots, replacing the multirotor vehicles by fixed or flapping wing platforms. The development of this morphology is motivated in terms of efficiency, endurance and safety in some inspection operations where multirotor platforms may not be suitable. This paper presents a first prototype of compliant dual arm as preliminary step towards the realization of a winged aerial robot capable of perching and manipulating with the wings folded. The dual arm provides 6 DOF (degrees of freedom) for end effector positioning in a human-like kinematic configuration, with a reach of 25 cm (half-scale w.r.t. the human arm), and 0.2 kg weight. The prototype is built with micro metal gear motors, measuring the joint angles and the deflection with small potentiometers. The paper covers the design, electronics, modeling and control of the arms. Experimental results in test-bench validate the developed prototype and its functionalities, including joint position and torque control, bimanual grasping, the dynamic equilibrium with the tail, and the generation of 3D maps with laser sensors attached at the arms.

"A Novel Small-Scale Turtle-inspired Amphibious Spherical Robot," by Huiming Xing, Shuxiang Guo, Liwei Shi, Xihuan Hou, Yu Liu, Huikang Liu, Yao Hu, Debin Xia, and Zan Li from Beijing Institute of Technology, China.

This paper describes a novel small-scale turtle-inspired Amphibious Spherical Robot (ASRobot) to accomplish exploration tasks in the restricted environment, such as amphibious areas and narrow underwater cave. A Legged, Multi-Vectored Water-Jet Composite Propulsion Mechanism (LMVWCPM) is designed with four legs, one of which contains three connecting rod parts, one water-jet thruster and three joints driven by digital servos. Using this mechanism, the robot is able to walk like amphibious turtles on various terrains and swim flexibly in submarine environment. A simplified kinematic model is established to analyze crawling gaits. With simulation of the crawling gait, the driving torques of different joints contributed to the choice of servos and the size of links of legs. Then we also modeled the robot in water and proposed several underwater locomotion. In order to assess the performance of the proposed robot, a series of experiments were carried out in the lab pool and on flat ground using the prototype robot. Experiments results verified the effectiveness of LMVWCPM and the amphibious control approaches.

"Advanced Autonomy on a Low-Cost Educational Drone Platform," by Luke Eller, Theo Guerin, Baichuan Huang, Garrett Warren, Sophie Yang, Josh Roy, and Stefanie Tellex from Brown University, USA.

PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and Fast-SLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.

"FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality," by Winter Guerra, Ezra Tal, Varun Murali, Gilhyun Ryou and Sertac Karaman from the Massachusetts Institute of Technology, USA.

FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex dynamics are generated organically through natural interactions of the vehicle. The FlightGoggles framework allows for researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. FlightGoggles is distributed as open-source software along with the photorealistic graphics assets for several simulation environments, under the MIT license at http://flightgoggles.mit.edu.

"An Autonomous Quadrotor System for Robust High-Speed Flight Through Cluttered Environments Without GPS," by Marc Rigter, Benjamin Morrell, Robert G. Reid, Gene B. Merewether, Theodore Tzanetos, Vinay Rajur, KC Wong, and Larry H. Matthies from University of Sydney, Australia; NASA Jet Propulsion Laboratory, California Institute of Technology, USA; and Georgia Institute of Technology, USA.

Robust autonomous flight without GPS is key to many emerging drone applications, such as delivery, search and rescue, and warehouse inspection. These and other applications require accurate trajectory tracking through cluttered static environments, where GPS can be unreliable, while high-speed, agile flight can increase efficiency. We describe the hardware and software of a quadrotor system that meets these requirements with onboard processing: a custom 300 mm wide quadrotor that uses two wide-field-of-view cameras for visual-inertial motion tracking and relocalization to a prior map. Collision-free trajectories are planned offline and tracked online with a custom tracking controller. This controller includes compensation for drag and variability in propeller performance, enabling accurate trajectory tracking, even at high speeds where aerodynamic effects are significant. We describe a system identification approach that identifies quadrotor-specific parameters via maximum likelihood estimation from flight data. Results from flight experiments are presented, which 1) validate the system identification method, 2) show that our controller with aerodynamic compensation reduces tracking error by more than 50% in both horizontal flights at up to 8.5 m/s and vertical flights at up to 3.1 m/s compared to the state-of-the-art, and 3) demonstrate our system tracking complex, aggressive, trajectories.

"Morphing Structure for Changing Hydrodynamic Characteristics of a Soft Underwater Walking Robot," by Michael Ishida, Dylan Drotman, Benjamin Shih, Mark Hermes, Mitul Luhar, and Michael T. Tolley from the University of California, San Diego (UCSD) and University of Southern California, USA.

Existing platforms for underwater exploration and inspection are often limited to traversing open water and must expend large amounts of energy to maintain a position in flow for long periods of time. Many benthic animals overcome these limitations using legged locomotion and have different hydrodynamic profiles dictated by different body morphologies. This work presents an underwater legged robot with soft legs and a soft inflatable morphing body that can change shape to influence its hydrodynamic characteristics. Flow over the morphing body separates behind the trailing edge of the inflated shape, so whether the protrusion is at the front, center, or back of the robot influences the amount of drag and lift. When the legged robot (2.87 N underwater weight) needs to remain stationary in flow, an asymmetrically inflated body resists sliding by reducing lift on the body by 40% (from 0.52 N to 0.31 N) at the highest flow rate tested while only increasing drag by 5.5% (from 1.75 N to 1.85 N). When the legged robot needs to walk with flow, a large inflated body is pushed along by the flow, causing the robot to walk 16% faster than it would with an uninflated body. The body shape significantly affects the ability of the robot to walk against flow as it is able to walk against 0.09 m/s flow with the uninflated body, but is pushed backwards with a large inflated body. We demonstrate that the robot can detect changes in flow velocity with a commercial force sensor and respond by morphing into a hydrodynamically preferable shape.

The help of a remote expert in performing a maintenance task can be useful in many situations, and can save time as well as money. In this context, augmented reality (AR) technologies can improve remote guidance thanks to the direct overlay of 3D information onto the real world. Furthermore, virtual reality (VR) enables a remote expert to virtually share the place in which the physical maintenance is being carried out. In a traditional local collaboration, collaborators are face-to-face and are observing the same artifact, while being able to communicate verbally and use body language, such as gaze direction or facial expression. These interpersonal communication cues are usually limited in remote collaborative maintenance scenarios, in which the agent uses an AR setup while the remote expert uses VR. Providing users with adapted interaction and awareness features to compensate for the lack of essential communication signals is therefore a real challenge for remote MR collaboration. However, this context offers new opportunities for augmenting collaborative abilities, such as sharing an identical point of view, which is not possible in real life. Based on the current task of the maintenance procedure, such as navigation to the correct location or physical manipulation, the remote expert may choose to freely control his/her own viewpoint of the distant workspace, or instead may need to share the viewpoint of the agent in order to better understand the current situation. In this work, we first focus on the navigation task, which is essential to complete the diagnostic phase and to begin the maintenance task in the correct location. We then present a novel interaction paradigm, implemented in an early prototype, in which the guide can show the operator the manipulation gestures required to achieve a physical task that is necessary to perform the maintenance procedure. These concepts are evaluated, allowing us to provide guidelines for future systems targeting efficient remote collaboration in MR environments.

Social or humanoid robots do hardly show up in “the wild,” aiming at pervasive and enduring human benefits such as child health. This paper presents a socio-cognitive engineering (SCE) methodology that guides the ongoing research & development for an evolving, longer-lasting human-robot partnership in practice. The SCE methodology has been applied in a large European project to develop a robotic partner that supports the daily diabetes management processes of children, aged between 7 and 14 years (i.e., Personal Assistant for a healthy Lifestyle, PAL). Four partnership functions were identified and worked out (joint objectives, agreements, experience sharing, and feedback & explanation) together with a common knowledge-base and interaction design for child's prolonged disease self-management. In an iterative refinement process of three cycles, these functions, knowledge base and interactions were built, integrated, tested, refined, and extended so that the PAL robot could more and more act as an effective partner for diabetes management. The SCE methodology helped to integrate into the human-agent/robot system: (a) theories, models, and methods from different scientific disciplines, (b) technologies from different fields, (c) varying diabetes management practices, and (d) last but not least, the diverse individual and context-dependent needs of the patients and caregivers. The resulting robotic partner proved to support the children on the three basic needs of the Self-Determination Theory: autonomy, competence, and relatedness. This paper presents the R&D methodology and the human-robot partnership framework for prolonged “blended” care of children with a chronic disease (children could use it up to 6 months; the robot in the hospitals and diabetes camps, and its avatar at home). It represents a new type of human-agent/robot systems with an evolving collective intelligence. The underlying ontology and design rationale can be used as foundation for further developments of long-duration human-robot partnerships “in the wild.”
